| qid (int64, 46k to 74.7M) | question (stringlengths 54 to 37.8k) | date (stringlengths 10) | metadata (listlengths 3) | response_j (stringlengths 17 to 26k) | response_k (stringlengths 26 to 26k) |
|---|---|---|---|---|---|
69,334,001
|
When I am using `optimizer = keras.optimizers.Adam(learning_rate)` I am getting this error:
"AttributeError: module 'keras.optimizers' has no attribute 'Adam'". I am using Python 3.8, Keras 2.6, and the TensorFlow 1.13.2 backend to run the program. Please help to resolve!
|
2021/09/26
|
[
"https://Stackoverflow.com/questions/69334001",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17007363/"
] |
Use `tf.keras.optimizers.Adam(learning_rate)` instead of `keras.optimizers.Adam(learning_rate)`
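For example, a minimal sketch (assuming a TensorFlow 2.x-style install where Keras is available as `tf.keras`; the tiny model and learning rate are just placeholders):
```
import tensorflow as tf

# Build a trivial model and compile it with the tf.keras Adam optimizer
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=optimizer, loss="mse")
```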
|
There are a few ways to solve your problem, since you are using Keras 2.6 together with TensorFlow:
* use `from keras.optimizer_v2.adam import Adam`, but go through the function documentation once to see how to specify your learning rate and beta values
* you can also use `Adam = keras.optimizers.Adam`
* or `import tensorflow as tf`, then `Adam = tf.keras.optimizers.Adam`
Use the form that suits the environment you have set up.
|
69,334,001
|
When I am using `optimizer = keras.optimizers.Adam(learning_rate)` I am getting this error:
"AttributeError: module 'keras.optimizers' has no attribute 'Adam'". I am using Python 3.8, Keras 2.6, and the TensorFlow 1.13.2 backend to run the program. Please help to resolve!
|
2021/09/26
|
[
"https://Stackoverflow.com/questions/69334001",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17007363/"
] |
As per the [documentation](https://keras.io/api/optimizers/), try importing `keras` into your code like this:
```
>>> from tensorflow import keras
```
This has helped me as well.
|
Make sure you've imported tensorflow:
```
import tensorflow as tf
```
Then use
```
tf.optimizers.Adam(learning_rate)
```
|
69,334,001
|
When I am using `optimizer = keras.optimizers.Adam(learning_rate)` I am getting this error:
"AttributeError: module 'keras.optimizers' has no attribute 'Adam'". I am using Python 3.8, Keras 2.6, and the TensorFlow 1.13.2 backend to run the program. Please help to resolve!
|
2021/09/26
|
[
"https://Stackoverflow.com/questions/69334001",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17007363/"
] |
As per the [documentation](https://keras.io/api/optimizers/), try importing `keras` into your code like this:
```
>>> from tensorflow import keras
```
This has helped me as well.
|
There are a few ways to solve your problem, since you are using Keras 2.6 together with TensorFlow:
* use `from keras.optimizer_v2.adam import Adam`, but go through the function documentation once to see how to specify your learning rate and beta values
* you can also use `Adam = keras.optimizers.Adam`
* or `import tensorflow as tf`, then `Adam = tf.keras.optimizers.Adam`
Use the form that suits the environment you have set up.
|
69,334,001
|
When I am using `optimizer = keras.optimizers.Adam(learning_rate)` I am getting this error:
"AttributeError: module 'keras.optimizers' has no attribute 'Adam'". I am using Python 3.8, Keras 2.6, and the TensorFlow 1.13.2 backend to run the program. Please help to resolve!
|
2021/09/26
|
[
"https://Stackoverflow.com/questions/69334001",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17007363/"
] |
Make sure you've imported tensorflow:
```
import tensorflow as tf
```
Then use
```
tf.optimizers.Adam(learning_rate)
```
|
There are a few ways to solve your problem, since you are using Keras 2.6 together with TensorFlow:
* use `from keras.optimizer_v2.adam import Adam`, but go through the function documentation once to see how to specify your learning rate and beta values
* you can also use `Adam = keras.optimizers.Adam`
* or `import tensorflow as tf`, then `Adam = tf.keras.optimizers.Adam`
Use the form that suits the environment you have set up.
|
60,631,553
|
I would like to parameterize the columns and my dataframe in a cursor.execute function. I'm using pymssql because I like the fact that I can name the parameterized columns. Yet I still don't know how to properly tell Python that I'm referring to a specific dataframe and that I would like to use its columns. Here is the last part of my code (I already tested the connection to my database etc. and it works):
```
with conn:
cursor = conn.cursor()
cursor.execute("""insert into [dbo].[testdb] (day, revenue) values (%(day)s, %(revenue)s)""", dataframe)
result = cursor.fetchall()
for row in result:
print(list(row))
```
I'm getting this error:
```
ValueError Traceback (most recent call last)
<ipython-input-52-037e289ce76e> in <module>
10 with conn:
11 cursor = conn.cursor()
---> 12 cursor.execute("""insert into [dbo].[testdb] (day, revenue) values (%(day)s, %(revenue)s""", dataframe)
13 result= cursor.fetchall()
14
src\pymssql.pyx in pymssql.Cursor.execute()
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\generic.py in __nonzero__(self)
1477 def __nonzero__(self):
1478 raise ValueError(
-> 1479 f"The truth value of a {type(self).__name__} is ambiguous. "
1480 "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
1481 )
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```
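For context, a hedged sketch of how the per-row parameters might be supplied instead of the whole DataFrame (the column names come from the insert statement above; `to_dict("records")` and `executemany` are illustrative, not a confirmed fix):
```
# pymssql expects a dict (or a sequence of dicts) of parameters,
# not a whole DataFrame, so convert the rows first.
params = dataframe[["day", "revenue"]].to_dict("records")

with conn:
    cursor = conn.cursor()
    cursor.executemany(
        "insert into [dbo].[testdb] (day, revenue) values (%(day)s, %(revenue)s)",
        params,
    )
    conn.commit()
```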
|
2020/03/11
|
[
"https://Stackoverflow.com/questions/60631553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12790189/"
] |
I had the same issue. Try adding
`RequestHeader set X-Forwarded-Proto https` after your Proxy directives in your `...ssl.conf`, which is in the sites-available folder.
|
I had the same issue while trying to set up an SSL-terminating reverse proxy with Apache. I followed this [article](https://www.digitalocean.com/community/tutorials/how-to-use-apache-http-server-as-reverse-proxy-using-mod_proxy-extension).
Using `0.0.0.0` instead of `localhost` worked for me.
```
<IfModule mod_ssl.c>
<VirtualHost *:443>
ServerName exemple.com
SSLCertificateFile /path/fullchain.pem
SSLCertificateKeyFile /path/privkey.pem
ProxyPass / http://0.0.0.0:80/
ProxyPassReverse / http://0.0.0.0:80/
</VirtualHost>
</IfModule>
```
|
1,383,863
|
If you install multiple versions of Python (I currently have the default 2.5, installed 3.0.1 and now installed 2.6.2), it automatically puts stuff in `/usr/local`, and it also adjusts the path to include `/Library/Frameworks/Python/Versions/theVersion/bin`, but what's the point of that when `/usr/local` is already on the PATH, and all installed versions (except the default 2.5, which is in `/usr/bin`) are in there? I removed the Python framework paths from my PATH in `.bash_profile`, and I can still type `"python -V" => "Python 2.5.1"`, `"python2.6 -V" => "Python 2.6.2"`, `"python3 -V" => "Python 3.0.1"`. Just wondering why it puts it in `/usr/local`, and also changes the PATH. And is what I did fine? Thanks.
Also, the 2.6 installation made it the 'current' one, having `.../Python.framework/Versions/Current` point to 2.6, so plain 'python' things in `/usr/local/bin` point to 2.6, but it doesn't matter because `/usr/bin` comes first and things with the same name in there point to 2.5 stuff. Anyway, 2.5 comes with Leopard, I installed 3.0.1 just to have the latest version (that has a dmg file), and now I installed 2.6.2 for use with pygame.
EDIT: OK, here's how I understand it. When you install, say, Python 2.6.2:
A bunch of symlinks are added to `/usr/local/bin`, so when there's a `#! /usr/local/bin/python` shebang in a python script, it will run, and in `/Applications/Python 2.6`, the Python Launcher is made default application to run .py files, which uses `/usr/local/bin/pythonw`, and `/Library/Frameworks/Python.framework/Versions/2.6/bin` is created and added to the front of the path, so `which python` will get the python in there, and also `#! /usr/bin/env python` shebang's will run correctly.
|
2009/09/05
|
[
"https://Stackoverflow.com/questions/1383863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/148195/"
] |
There's no a priori guarantee that /usr/local/bin will stay on the PATH (especially it will not necessarily stay "in front of" /usr/bin!-), so it's perfectly reasonable for an installer to ensure the specifically needed /Library/.../bin directory does get on the PATH. Plus, it may be the case that the /Library/.../bin has supplementary stuff that doesn't get symlinked into /usr/local/bin, although I believe that's not currently the case with recent Mac standard distributions of Python.
If you know that the way you'll arrange your path, and the exact set of executables you'll be using, are entirely satisfied from /usr/local/bin, then it's quite OK for you to remove the /Library/etc directories from your own path, of course.
|
I just noticed/encountered this issue on my Mac. I have Python 2.5.4, 2.6.2, and 3.1.1 on my machine, and was looking for a way to easily change between them at will. That is when I noticed all the symlinks for the executables, which I found in both '/usr/bin' and '/usr/local/bin'. I ripped all the non-version-specific symlinks out, leaving python2.5, python2.6, etc., and wrote a bash shell script that I can run as root to change the one symlink I use to point the path at the version of my choice:
'/Library/Frameworks/Python.framework/Versions/Current'
The only bad thing about ripping the symlinks out, is if some other application needed them for some reason. My opinion as to why these symlinks are created is similar to Alex's assessment, the installer is trying to cover all of the bases. All of my versions have been installed by an installer, though I've been trying to compile my own to enable full 64-bit support, and when compiling and installing your own you can choose to not have the symlinks created or the PATH modified during installation.
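The shell script itself isn't shown here; as a rough sketch of the same idea in Python (the paths are the ones mentioned above, and this would need to run as root):
```
import os
import sys

version = sys.argv[1]  # e.g. "2.6"
base = "/Library/Frameworks/Python.framework/Versions"
current = os.path.join(base, "Current")

# Repoint the "Current" symlink at the requested version
if os.path.islink(current):
    os.unlink(current)
os.symlink(os.path.join(base, version), current)
```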
|
58,113,118
|
I have a Python script that connects to PostgreSQL.
Below is the script.
```
import psycopg2
conn = psycopg2.connect('connection string')
try:
curr = conn.cursor()
sql_strng = "SELECT * FROM tbl"
### Further operations###
except(Exception, psycopg2.Error) as error:
print("error",error)
finally:
if (conn):
conn.close()
```
The above code works well when I run it from Spyder. But when I try to run this from the command prompt using a batch script, it gives an error, as shown below.
My batch script:
```
C:\Users\Anaconda3\python.exe \path\to\python\file
```
The above batch script throws the following error:
```
if(conn):
NameError: name 'conn' is not defined
```
Where am I missing out?
|
2019/09/26
|
[
"https://Stackoverflow.com/questions/58113118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7290715/"
] |
Just use `this.types.filter(({id}) => !this.selected_types.includes(id))`:
```js
let types = [{
id: 1,
name: "Hello"
},
{
id: 2,
name: "World"
},
{
id: 3,
name: "Jon Doe"
}
]
let selected_types = [1, 2];
let resArr = types.filter(({id}) => !selected_types.includes(id));
console.log(resArr);
```
|
**You can use JavaScript's native `filter` method, which returns a new array**
```
let types = [{
id: 1,
name: "Hello"
},
{
id: 2,
name: "World"
},
{
id: 3,
name: "Jon Doe"
}
]
let selected_types = [1, 2];
types = types.filter(obj => {
if (selected_types.indexOf(obj.id) === -1) {
return obj
}
});
```
|
37,442,993
|
I have a csv file I wish to load into pandas, but the formatting is giving me some problems. The file is as follows:
>
> Version 1
>
>
> ,Date Time,Name,Value
>
>
> ,26/Jan/2016 07:35:52,Name1,340rqi
>
>
> ,26/Jan/2016 07:00:00,Name2,1.00E+005
>
>
> ,26/Jan/2016 07:00:00,Name3,pulled\_9
>
>
>
(It's a mess of a file, but the main point is that there is an empty 1st column and an empty 1st row with just 'Version 1' in position 0,0)
I am using the following code to get it into my DF:
```
filename_cv = '123456789.csv'
sheet_cv = filename_cv[:-4] #trimming off the .csv part
df_cv = pandas.read_csv(filename_cv, sheet_cv,engine='python')
```
But the output is not desirable. This is what I get:
>
> df\_cv
>
>
> Out[4]:
>
>
> Version 1
>
>
> 0 ,26/Jan/2016 07:35:52,Name1,340rqi
>
>
> 1 ,26/Jan/2016 07:00:00,Name2,1.00E+005
>
>
> 2 ,26/Jan/2016 07:00:00,Name3,pulled\_9
>
>
>
I think those leading commas are my problem, but is there a good way to get rid of them?
I know I can trim rows and change the index (skiprows), but those leading commas are the source of my issue I am sure.
I want the comma-separated values to go into their own columns like normal.
What's wrong?
|
2016/05/25
|
[
"https://Stackoverflow.com/questions/37442993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6382244/"
] |
>
> When is DbConnection.StateChange called?
>
>
>
You can find out by looking at the Microsoft reference source code.
The [`StateChange`](http://referencesource.microsoft.com/#System.Data/System/Data/Common/DBConnection.cs,191b2f1d559f7e8d) event is raised by the [`DbConnection.OnStateChange`](http://referencesource.microsoft.com/#System.Data/System/Data/Common/DBConnection.cs,bc9689aff46dc5bc,references) function. Looking for references to this function yields only a few instances:
Firstly, in the `SqlConnection` class, `OnStateChange` is called only in the [`Close`](http://referencesource.microsoft.com/#System.Data/System/Data/SqlClient/SqlConnection.cs,9414944f8fe487ef,references) method.
Then in the [`DbConnectionHelper.cs`](http://referencesource.microsoft.com/#System.Data/System/Data/ProviderBase/DbConnectionHelper.cs,9f986d2fc8fa438a) file, there's a partial class called `DBCONNECTIONOBJECT`. It looks like it's used for all `DbConnection`-derived classes using some build-time shenanigans. So you can consider it to be part of `SqlConnection`. In any case, it calls `OnStateChange` only from within the [`SetInnerConnectionEvent`](http://referencesource.microsoft.com/#System.Data/System/Data/ProviderBase/DbConnectionHelper.cs,324) function.
As far as I can tell (the partial class nonsense makes it difficult), the `SqlConnection.SetInnerConnectionEvent` is only called from [`SqlConnectionFactory.SetInnerConnectionEvent`](http://referencesource.microsoft.com/#System.Data/System/Data/SqlClient/SqlConnectionFactory.cs,049addc994565208,references). And *that* is called from:
* [`DbConnectionClosed.TryOpenConnection`](http://referencesource.microsoft.com/System.Data/System/Data/ProviderBase/DbConnectionClosed.cs.html#131)
* [`DbConnectionInternal.TryOpenConnectionInternal`](http://referencesource.microsoft.com/#System.Data/System/Data/ProviderBase/DbConnectionInternal.cs,d0c96b80d6d7cbb0,references)
* [`DbConnectionInternal.CloseConnection`](http://referencesource.microsoft.com/#System.Data/System/Data/ProviderBase/DbConnectionInternal.cs,478)
So, in summary - the event is only raised in response to client-side actions - there does not appear to be any polling of the connection-state built into `SQLConnection`.
>
> Is there a way this behavior can be changed?
>
>
>
Looking at the source code, I can't see one. As others have suggested, you could implement your own polling, of course.
|
The `StateChange` event is meant for the state of the connection, not the instance of the database server; it does not tell you the state of the database server itself.
>
> The StateChange event occurs when the state of the event changes from
> closed to opened, or opened to closed.
>
>
>
From MSDN: <https://msdn.microsoft.com/en-us/library/system.data.common.dbconnection.statechange(v=vs.110).aspx>
If you're going to roll your own monitor for the database, you may consider using a method that returns true/false depending on whether the connection is available, and ping that method on a schedule. You could even wrap such a method in an endless loop that repeats after a set duration and raises its own event when this "state" really changes.
Here's a quick method from another SO answer that is a simple approach:
```
/// <summary>
/// Test that the server is connected
/// </summary>
/// <param name="connectionString">The connection string</param>
/// <returns>true if the connection is opened</returns>
private static bool IsServerConnected(string connectionString)
{
using (SqlConnection connection = new SqlConnection(connectionString))
{
try
{
connection.Open();
return true;
}
catch (SqlException)
{
return false;
}
}
}
```
Source: <https://stackoverflow.com/a/9943871/4154421>
|
12,850,550
|
I'm reading conflicting reports about using PostgreSQL on Amazon's Elastic Beanstalk for python (Django).
Some sources say it isn't possible: (http://www.forbes.com/sites/netapp/2012/08/20/amazon-cloud-elastic-beanstalk-paas-python/). I've been through a dummy app setup, and it does seem that MySQL is the only option (amongst other ones that aren't Postgres).
However, I've found fragments around the place mentioning that it is possible - even if they're very light on detail.
I need to know the following:
1. Is it possible to run a PostgreSQL database with a Django app on Elastic Beanstalk?
2. If it's possible, is it worth the trouble?
3. If it's possible, how would you set it up?
|
2012/10/12
|
[
"https://Stackoverflow.com/questions/12850550",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1133318/"
] |
PostgreSQL is now selectable from the AWS RDS configurations. Validated through the Elastic Beanstalk application setup, 2014-01-27.
|
>
> Is it possible to run a PostgreSQL database with a Django app on
> Elastic Beanstalk?
>
>
>
Yes. The dummy app setup you mention refers to the use of an Amazon Relational Database Service. At the moment PostgreSQL is not available as an Amazon RDS, but you can configure your beanstalk AMI to act as a local PostgreSQL server or set up your own PostgreSQL RDS.
>
> If it's possible, is it worth the trouble?
>
>
>
This is really a question about whether it is worth using an RDS or going it alone, which boils down to questions of cost, effort, usage, required efficiency, etc. It is very simple to switch the database engine serving Django, so if you change your mind it is easy to switch your setup.
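As a rough illustration of how little Django-side configuration is involved (a hedged sketch; the database name, credentials, and host below are placeholders, not values from this setup):
```
# settings.py -- hypothetical PostgreSQL configuration for a Django project
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "mydb",                # placeholder database name
        "USER": "myuser",              # placeholder credentials
        "PASSWORD": "secret",
        "HOST": "your-postgres-host",  # local server or your own RDS endpoint
        "PORT": "5432",
    }
}
```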
>
> If it's possible, how would you set it up?
>
>
>
Essentially you need to customise your beanstalk AMI by installing a PostgreSQL database server on an Amazon Linux, EB-backed AMI instance.
Advice and instructions on customising beanstalk AMIs are here:
<http://blog.jetztgrad.net/2011/02/how-to-customize-an-amazon-elastic-beanstalk-instance/>
<https://forums.aws.amazon.com/thread.jspa?messageID=219630>
[Customizing an Elastic Beanstalk AMI](https://stackoverflow.com/questions/12002445/customizing-an-elastic-beanstalk-ami?rq=1)
|
65,238,577
|
I have a simple Users resource with a put method to update all user information except the user password. According to the Flask-RESTX docs, when a model has the strict and validation params set to true, a validation error will be thrown if an unspecified param is provided in the request. However, this doesn't seem to be working for me.
Model definition:
```py
from flask_restx import Namespace, Resource, fields, marshal
users_ns = Namespace("users")
user = users_ns.model(
"user",
{
"user_name": fields.String(example="some_user", required=True),
"email": fields.String(example="some.user@email", required=True),
"is_admin": fields.Boolean(example="False"),
"is_deactivated": fields.Boolean(example="False"),
"created_date": fields.DateTime(example="2020-12-01T01:59:39.297904"),
"last_modified_date": fields.DateTime(example="2020-12-01T01:59:39.297904"),
"uri": fields.Url("api.user"),
},
strict=True,
)
user_post = users_ns.inherit(
"user_post", user, {"password": fields.String(required=True)}
) # Used for when
```
Resource and method definition:
```py
from api.models import Users
class User(Resource):
@users_ns.marshal_with(user)
@users_ns.expect(user, validate=True)
def put(self, id):
"""
Update a specified user.
"""
user = Users.query.get_or_404(id)
body = request.get_json()
user.update(body)
return user
```
Failing Test:
```py
def test_update_user_invalid_password_param(self, client, db):
""" User endpoint should return 400 when user attempts to pass password param to update. """
data = {
"user_name": "some_user",
"email": "some.user@email.com",
"password": "newpassword",
}
response = client.put(url_for("api.user", id=1), json=data)
assert response.status_code == 400
```
The `response.status_code` here is 200 because no validation error is thrown for the unspecified param passed in the body of the request.
Am I using the strict param improperly? Am I misunderstanding the behavior of strict?
UPDATED: I've added the test for strict model param from Flask-RestX repo (can be found [here](https://github.com/python-restx/flask-restx/blob/master/tests/test_namespace.py)) for more context on expected behavior:
```py
def test_api_payload_strict_verification(self, app, client):
api = restx.Api(app, validate=True)
ns = restx.Namespace("apples")
api.add_namespace(ns)
fields = ns.model(
"Person",
{
"name": restx.fields.String(required=True),
"age": restx.fields.Integer,
"birthdate": restx.fields.DateTime,
},
strict=True,
)
@ns.route("/validation/")
class Payload(restx.Resource):
payload = None
@ns.expect(fields)
def post(self):
Payload.payload = ns.payload
return {}
data = {
"name": "John Doe",
"agge": 15, # typo
}
resp = client.post_json("/apples/validation/", data, status=400)
assert re.match("Additional properties are not allowed \(u*'agge' was unexpected\)", resp["errors"][""])
```
|
2020/12/10
|
[
"https://Stackoverflow.com/questions/65238577",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8184470/"
] |
I resolved my issue by pulling the latest version of Flask-RESTX from Github. The strict parameter for models was merged after Flask-RESTX version 0.2.0 was released on Pypi in March of 2020 (see the closed [issue](https://github.com/python-restx/flask-restx/issues/264) in Flask-RESTX repo for more context). My confusion arose because the documentation appears to represent the latest state of master and not the last Pypi release.
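For reference, installing straight from the repository can be done with something like `pip install git+https://github.com/python-restx/flask-restx.git` (the exact command depends on your environment).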
|
It's been a while since I touched on this but from what I can tell, I don't think you are using the strict param correctly. From the documentation [here](https://flask-restx.readthedocs.io/en/latest/_modules/flask_restx/model.html), the `:param bool strict` is defined as
>
> :param bool strict: validation should raise an error when there is param not provided in schema
>
>
>
But in your last snippet of code, you are trying to validate the body of the request with the dictionary `data`.
If I recall correctly, for this sort of task you need to use (as you mentioned) a [RequestParser](https://flask-restplus.readthedocs.io/en/stable/api.html#flask_restplus.reqparse.RequestParser). There's a good example of it here - [flask - something more strict than @api.expect for input data?](https://stackoverflow.com/a/41250621/6505847)
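A minimal sketch of that approach (hedged; the field names are taken from the question's model, and the exact behavior of `strict=True` should be checked against your installed version):
```
from flask_restx import reqparse

parser = reqparse.RequestParser()
parser.add_argument("user_name", type=str, required=True)
parser.add_argument("email", type=str, required=True)

# Inside the put() handler: strict=True rejects any argument not declared above
args = parser.parse_args(strict=True)
```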
|
10,816,816
|
I was doing some practice problems in [Coding Bat](http://codingbat.com/python), and came across this one..
```
Given 3 int values, a b c, return their sum. However, if one of the values is the same as another of the values, it does not count towards the sum.
lone_sum(1, 2, 3) → 6
lone_sum(3, 2, 3) → 2
lone_sum(3, 3, 3) → 0
```
My solution was the following.
```
def lone_sum(a, b, c):
sum = a+b+c
if a == b:
if a == c:
sum -= 3 * a
else:
sum -= 2 * a
elif b == c:
sum -= 2 * b
elif a == c:
sum -= 2 * a
return sum
```
Is there a more pythonic way of doing this?
|
2012/05/30
|
[
"https://Stackoverflow.com/questions/10816816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1088536/"
] |
Another possibility that works for an arbitrary number of arguments:
```
from collections import Counter
def lone_sum(*args):
return sum(x for x, c in Counter(args).items() if c == 1)
```
Note that in Python 2, you should use `iteritems` to avoid building a temporary list.
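For example, reproducing the expected outputs from the question:
```
print(lone_sum(1, 2, 3))  # 6 - all values are distinct
print(lone_sum(3, 2, 3))  # 2 - the duplicated 3s are excluded
print(lone_sum(3, 3, 3))  # 0 - every value is duplicated
```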
|
```
def lone_sum(a, b, c):
z = (a,b,c)
x = []
for item in z:
if z.count(item)==1:
x.append(item)
return sum(x)
```
|
10,816,816
|
I was doing some practice problems in [Coding Bat](http://codingbat.com/python), and came across this one..
```
Given 3 int values, a b c, return their sum. However, if one of the values is the same as another of the values, it does not count towards the sum.
lone_sum(1, 2, 3) → 6
lone_sum(3, 2, 3) → 2
lone_sum(3, 3, 3) → 0
```
My solution was the following.
```
def lone_sum(a, b, c):
sum = a+b+c
if a == b:
if a == c:
sum -= 3 * a
else:
sum -= 2 * a
elif b == c:
sum -= 2 * b
elif a == c:
sum -= 2 * a
return sum
```
Is there a more pythonic way of doing this?
|
2012/05/30
|
[
"https://Stackoverflow.com/questions/10816816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1088536/"
] |
Had a very similar approach to what you had:
```
def lone_sum(a, b, c):
if a != b and b != c and c != a:
return a + b + c
elif a == b == c:
return 0
elif a == b:
return c
elif b == c:
return a
elif c == a:
return b
```
If two of the values are the same, the code automatically returns the remaining value.
|
I tried this on CodingBat but it doesn't work there, although it does in the code editor.
```
def lone_sum(a, b, c):
    s = set([a,b,c])
    return sum(s)
```
|
10,816,816
|
I was doing some practice problems in [Coding Bat](http://codingbat.com/python), and came across this one..
```
Given 3 int values, a b c, return their sum. However, if one of the values is the same as another of the values, it does not count towards the sum.
lone_sum(1, 2, 3) → 6
lone_sum(3, 2, 3) → 2
lone_sum(3, 3, 3) → 0
```
My solution was the following.
```
def lone_sum(a, b, c):
sum = a+b+c
if a == b:
if a == c:
sum -= 3 * a
else:
sum -= 2 * a
elif b == c:
sum -= 2 * b
elif a == c:
sum -= 2 * a
return sum
```
Is there a more pythonic way of doing this?
|
2012/05/30
|
[
"https://Stackoverflow.com/questions/10816816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1088536/"
] |
A more general solution for any number of arguments is
```
def lone_sum(*args):
seen = set()
summands = set()
for x in args:
if x not in seen:
summands.add(x)
seen.add(x)
else:
summands.discard(x)
return sum(summands)
```
|
Could use a defaultdict to screen out any elements appearing more than once.
```
from collections import defaultdict
def lone_sum(*args):
d = defaultdict(int)
for x in args:
d[x] += 1
return sum( val for val, apps in d.iteritems() if apps == 1 )
```
|
10,816,816
|
I was doing some practice problems in [Coding Bat](http://codingbat.com/python), and came across this one..
```
Given 3 int values, a b c, return their sum. However, if one of the values is the same as another of the values, it does not count towards the sum.
lone_sum(1, 2, 3) → 6
lone_sum(3, 2, 3) → 2
lone_sum(3, 3, 3) → 0
```
My solution was the following.
```
def lone_sum(a, b, c):
sum = a+b+c
if a == b:
if a == c:
sum -= 3 * a
else:
sum -= 2 * a
elif b == c:
sum -= 2 * b
elif a == c:
sum -= 2 * a
return sum
```
Is there a more pythonic way of doing this?
|
2012/05/30
|
[
"https://Stackoverflow.com/questions/10816816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1088536/"
] |
Could use a defaultdict to screen out any elements appearing more than once.
```
from collections import defaultdict
def lone_sum(*args):
d = defaultdict(int)
for x in args:
d[x] += 1
return sum( val for val, apps in d.iteritems() if apps == 1 )
```
|
I tried this on CodingBat but it doesn't work there, although it does in the code editor.
```
def lone_sum(a, b, c):
    s = set([a,b,c])
    return sum(s)
```
|
10,816,816
|
I was doing some practice problems in [Coding Bat](http://codingbat.com/python), and came across this one..
```
Given 3 int values, a b c, return their sum. However, if one of the values is the same as another of the values, it does not count towards the sum.
lone_sum(1, 2, 3) → 6
lone_sum(3, 2, 3) → 2
lone_sum(3, 3, 3) → 0
```
My solution was the following.
```
def lone_sum(a, b, c):
sum = a+b+c
if a == b:
if a == c:
sum -= 3 * a
else:
sum -= 2 * a
elif b == c:
sum -= 2 * b
elif a == c:
sum -= 2 * a
return sum
```
Is there a more pythonic way of doing this?
|
2012/05/30
|
[
"https://Stackoverflow.com/questions/10816816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1088536/"
] |
Another possibility that works for an arbitrary number of arguments:
```
from collections import Counter
def lone_sum(*args):
return sum(x for x, c in Counter(args).items() if c == 1)
```
Note that in Python 2, you should use `iteritems` to avoid building a temporary list.
|
A more general solution for any number of arguments is
```
def lone_sum(*args):
seen = set()
summands = set()
for x in args:
if x not in seen:
summands.add(x)
seen.add(x)
else:
summands.discard(x)
return sum(summands)
```
|
10,816,816
|
I was doing some practice problems in [Coding Bat](http://codingbat.com/python), and came across this one..
```
Given 3 int values, a b c, return their sum. However, if one of the values is the same as another of the values, it does not count towards the sum.
lone_sum(1, 2, 3) → 6
lone_sum(3, 2, 3) → 2
lone_sum(3, 3, 3) → 0
```
My solution was the following.
```
def lone_sum(a, b, c):
sum = a+b+c
if a == b:
if a == c:
sum -= 3 * a
else:
sum -= 2 * a
elif b == c:
sum -= 2 * b
elif a == c:
sum -= 2 * a
return sum
```
Is there a more pythonic way of doing this?
|
2012/05/30
|
[
"https://Stackoverflow.com/questions/10816816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1088536/"
] |
A more general solution for any number of arguments is
```
def lone_sum(*args):
seen = set()
summands = set()
for x in args:
if x not in seen:
summands.add(x)
seen.add(x)
else:
summands.discard(x)
return sum(summands)
```
|
Had a very similar approach to what you had:
```
def lone_sum(a, b, c):
if a != b and b != c and c != a:
return a + b + c
elif a == b == c:
return 0
elif a == b:
return c
elif b == c:
return a
elif c == a:
return b
```
If two of the values are the same, the code automatically returns the remaining value.
|
10,816,816
|
I was doing some practice problems in [Coding Bat](http://codingbat.com/python), and came across this one..
```
Given 3 int values, a b c, return their sum. However, if one of the values is the same as another of the values, it does not count towards the sum.
lone_sum(1, 2, 3) → 6
lone_sum(3, 2, 3) → 2
lone_sum(3, 3, 3) → 0
```
My solution was the following.
```
def lone_sum(a, b, c):
sum = a+b+c
if a == b:
if a == c:
sum -= 3 * a
else:
sum -= 2 * a
elif b == c:
sum -= 2 * b
elif a == c:
sum -= 2 * a
return sum
```
Is there a more pythonic way of doing this?
|
2012/05/30
|
[
"https://Stackoverflow.com/questions/10816816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1088536/"
] |
How about:
```
def lone_sum(*args):
return sum(v for v in args if args.count(v) == 1)
```
|
```
def lone_sum(a, b, c):
z = (a,b,c)
x = []
for item in z:
if z.count(item)==1:
x.append(item)
return sum(x)
```
|
10,816,816
|
I was doing some practice problems in [Coding Bat](http://codingbat.com/python), and came across this one..
```
Given 3 int values, a b c, return their sum. However, if one of the values is the same as another of the values, it does not count towards the sum.
lone_sum(1, 2, 3) → 6
lone_sum(3, 2, 3) → 2
lone_sum(3, 3, 3) → 0
```
My solution was the following.
```
def lone_sum(a, b, c):
sum = a+b+c
if a == b:
if a == c:
sum -= 3 * a
else:
sum -= 2 * a
elif b == c:
sum -= 2 * b
elif a == c:
sum -= 2 * a
return sum
```
Is there a more pythonic way of doing this?
|
2012/05/30
|
[
"https://Stackoverflow.com/questions/10816816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1088536/"
] |
Another possibility that works for an arbitrary number of arguments:
```
from collections import Counter
def lone_sum(*args):
return sum(x for x, c in Counter(args).items() if c == 1)
```
Note that in Python 2, you should use `iteritems` to avoid building a temporary list.
|
Had a very similar approach to what you had:
```
def lone_sum(a, b, c):
if a != b and b != c and c != a:
return a + b + c
elif a == b == c:
return 0
elif a == b:
return c
elif b == c:
return a
elif c == a:
return b
```
If two of the values are the same, the code automatically returns the remaining value.
|
10,816,816
|
I was doing some practice problems in [Coding Bat](http://codingbat.com/python), and came across this one..
```
Given 3 int values, a b c, return their sum. However, if one of the values is the same as another of the values, it does not count towards the sum.
lone_sum(1, 2, 3) → 6
lone_sum(3, 2, 3) → 2
lone_sum(3, 3, 3) → 0
```
My solution was the following.
```
def lone_sum(a, b, c):
sum = a+b+c
if a == b:
if a == c:
sum -= 3 * a
else:
sum -= 2 * a
elif b == c:
sum -= 2 * b
elif a == c:
sum -= 2 * a
return sum
```
Is there a more pythonic way of doing this?
|
2012/05/30
|
[
"https://Stackoverflow.com/questions/10816816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1088536/"
] |
How about:
```
def lone_sum(*args):
return sum(v for v in args if args.count(v) == 1)
```
|
Could use a defaultdict to screen out any elements appearing more than once.
```
from collections import defaultdict
def lone_sum(*args):
d = defaultdict(int)
for x in args:
d[x] += 1
return sum( val for val, apps in d.iteritems() if apps == 1 )
```
|
10,816,816
|
I was doing some practice problems in [Coding Bat](http://codingbat.com/python), and came across this one..
```
Given 3 int values, a b c, return their sum. However, if one of the values is the same as another of the values, it does not count towards the sum.
lone_sum(1, 2, 3) → 6
lone_sum(3, 2, 3) → 2
lone_sum(3, 3, 3) → 0
```
My solution was the following.
```
def lone_sum(a, b, c):
sum = a+b+c
if a == b:
if a == c:
sum -= 3 * a
else:
sum -= 2 * a
elif b == c:
sum -= 2 * b
elif a == c:
sum -= 2 * a
return sum
```
Is there a more pythonic way of doing this?
|
2012/05/30
|
[
"https://Stackoverflow.com/questions/10816816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1088536/"
] |
A more general solution for any number of arguments is
```
def lone_sum(*args):
seen = set()
summands = set()
for x in args:
if x not in seen:
summands.add(x)
seen.add(x)
else:
summands.discard(x)
return sum(summands)
```
|
I tried this on CodingBat but it doesn't work there, although it does in the code editor.
```
def lone_sum(a, b, c):
    s = set([a,b,c])
    return sum(s)
```
|
47,286,349
|
I need a simple Python code which makes a number menu and doesn't take up many lines.
```
print ("Pick an option")
menu =0
Menu = input("""
1. Check Password
2. Generate Password
3. Quit
""")
if (menu) == 1:
Password = input("Please enter the password you want to check")
points =0
```
I tried this but it did not work how I thought it would. I thought this code would work, as I have tried it before and it worked, but I must have made a mistake in this one.
Does anyone have any suggestions?
Thanks.
This is my full code:
```
print ("Pick an option")
menu =0
Menu = input("""
1. Check Password
2. Generate Password
3. Quit
""")
if (menu) == 1:
Password = input("Please enter the password you want to check")
points =0
smybols = ['!','%','^','&','*','(',')','-','_','=','+',]
querty = ["qwertyuiop","QWERTYUIOP","asdfghjl","ASDFGHJKL","zxcvbnm","ZXCVBNM"]
if len(password) >24:
print ('password is too long It must be between 8 and 24 characters')
elif len(password) <8:
print ('password is too short It must be between 8 and 24 characters')
elif len(password) >=8 and len(password) <= 24:
print ('password ok\n')
```
|
2017/11/14
|
[
"https://Stackoverflow.com/questions/47286349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8938872/"
] |
Your problem seems to arise from the fact that you use `flatMap`: if there is no data in the DB for a given `id` and you get an empty `Observable`, `flatMap` just produces no output for that `id`. So it looks like what you need is [defaultIfEmpty](http://reactivex.io/documentation/operators/defaultifempty.html), which translates to Scala's [`orElse`](http://reactivex.io/rxscala/scaladoc/index.html#rx.lang.scala.Observable@orElse[U>:T](default:=>U):rx.lang.scala.Observable[U]). You can use `orElse` to return a default value inside `flatMap`. So, to modify your example:
```
def fetchFromDb(id: Int): Observable[String] = {
if (id <= 3)
Observable.just(s"Document #$id")
else
Observable.empty
}
def gotValue(idsToFetch: Seq[Int]): Observable[(Int, Boolean)] = {
Observable.from(idsToFetch).flatMap((id: Int) => fetchFromDb(id).map(_ => (id, true)).orElse((id, false)))
}
println(gotValue(Seq(1, 2, 3, 4, 5, 6)).toBlocking.toList)
```
which prints
>
> List((1,true), (2,true), (3,true), (4,false), (5,false), (6,false))
>
>
>
Or you can use `Option` to return `Some(JsonDocument)` or `None` such as
```
def gotValueEx(idsToFetch: Seq[Int]): Observable[(Int, Option[String])] = {
Observable.from(idsToFetch).flatMap((id: Int) => fetchFromDb(id).map(doc => (id, Option(doc))).orElse((id, None)))
}
println(gotValueEx(Seq(1, 2, 3, 4, 5, 6)).toBlocking.toList)
```
which prints
>
> List((1,Some(Document #1)), (2,Some(Document #2)), (3,Some(Document #3)), (4,None), (5,None), (6,None))
>
>
>
|
One way of doing this is the following:
**(1)** convert sequence of ids to `Observable` and `map` it with
```
id => (id, false)
```
... so you'll get an observable of type `Observable[(Int, Boolean)]` (let's call this new observable `first`).
**(2)** fetch data from the database and `map` every fetched row to the form:
```
(some_id, true)
```
... inside `Observable[(Int, Boolean)]` (let's call this observable `last`)
**(3)** [concat](http://reactivex.io/documentation/operators/concat.html) `first` and `last`.
**(4)** [toMap](http://reactivex.io/rxscala/scaladoc/index.html#rx.lang.scala.Observable@toMap[K,V](keySelector:T=%3EK,valueSelector:T=%3EV):rx.lang.scala.Observable[Map[K,V]]) result of (3). Duplicate elements coming from `first` will be dropped in process. (this will be your `resultObsrvable`)
**(5)** (possibly) collect the first and only element of the observable (your map). You might not want to do this at all, but if you do, you should really understand the implications of blocking to collect the result at this point. In any case, this step really depends on your application specifics (how threading/scheduling/IO is organized), but a brute-force approach should look something like this (refer to [this demo](https://github.com/ReactiveX/RxScala/blob/0.x/examples/src/test/scala/examples/RxScalaDemo.scala) for more specifics):
```
Await.result(resultObsrvable.toBlocking.toFuture, 2 seconds)
```
|
47,286,349
|
I need a simple Python code which makes a number menu and doesn't take up many lines.
```
print ("Pick an option")
menu =0
Menu = input("""
1. Check Password
2. Generate Password
3. Quit
""")
if (menu) == 1:
Password = input("Please enter the password you want to check")
points =0
```
I tried this but it did not work how I thought it would. I thought this code would work, as I have tried it before and it worked, but I must have made a mistake in this one.
Does anyone have any suggestions?
Thanks.
This is my full code:
```
print ("Pick an option")
menu =0
Menu = input("""
1. Check Password
2. Generate Password
3. Quit
""")
if (menu) == 1:
Password = input("Please enter the password you want to check")
points =0
smybols = ['!','%','^','&','*','(',')','-','_','=','+',]
querty = ["qwertyuiop","QWERTYUIOP","asdfghjl","ASDFGHJKL","zxcvbnm","ZXCVBNM"]
if len(password) >24:
print ('password is too long It must be between 8 and 24 characters')
elif len(password) <8:
print ('password is too short It must be between 8 and 24 characters')
elif len(password) >=8 and len(password) <= 24:
print ('password ok\n')
```
|
2017/11/14
|
[
"https://Stackoverflow.com/questions/47286349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8938872/"
] |
Your problem seems to arise from the fact that you use `flatMap`: if there is no data in the DB for a given `id` and you get an empty `Observable`, `flatMap` just produces no output for that `id`. So it looks like what you need is [defaultIfEmpty](http://reactivex.io/documentation/operators/defaultifempty.html), which translates to Scala's [`orElse`](http://reactivex.io/rxscala/scaladoc/index.html#rx.lang.scala.Observable@orElse[U>:T](default:=>U):rx.lang.scala.Observable[U]). You can use `orElse` to return a default value inside `flatMap`. So, to modify your example:
```
def fetchFromDb(id: Int): Observable[String] = {
if (id <= 3)
Observable.just(s"Document #$id")
else
Observable.empty
}
def gotValue(idsToFetch: Seq[Int]): Observable[(Int, Boolean)] = {
Observable.from(idsToFetch).flatMap((id: Int) => fetchFromDb(id).map(_ => (id, true)).orElse((id, false)))
}
println(gotValue(Seq(1, 2, 3, 4, 5, 6)).toBlocking.toList)
```
which prints
>
> List((1,true), (2,true), (3,true), (4,false), (5,false), (6,false))
>
>
>
Or you can use `Option` to return `Some(JsonDocument)` or `None` such as
```
def gotValueEx(idsToFetch: Seq[Int]): Observable[(Int, Option[String])] = {
Observable.from(idsToFetch).flatMap((id: Int) => fetchFromDb(id).map(doc => (id, Option(doc))).orElse((id, None)))
}
println(gotValueEx(Seq(1, 2, 3, 4, 5, 6)).toBlocking.toList)
```
which prints
>
> List((1,Some(Document #1)), (2,Some(Document #2)), (3,Some(Document #3)), (4,None), (5,None), (6,None))
>
>
>
|
How about this:
```
Observable.from(idsToFetch)
.filterNot(x => x._1 == 4 || x._1 == 5 || x._1 == 6)
    .foldLeft(idsToFetch.map{_->false}.toMap){(m,id)=>m+(id->true)}
```
|
13,047,458
|
I'm trying to set speed limits on downloading/uploading files and found that Twisted provides [twisted.protocols.policies.ThrottlingFactory](http://twistedmatrix.com/documents/current/api/twisted.protocols.policies.ThrottlingFactory.html) to handle this job, but I can't get it right. I set `readLimit` and `writeLimit`, but the file is still downloading at maximum speed. What am I doing wrong?
```py
from twisted.protocols.basic import FileSender
from twisted.protocols.policies import ThrottlingFactory
from twisted.web import server, resource
from twisted.internet import reactor
import os
class DownloadPage(resource.Resource):
isLeaf = True
def __init__(self, producer):
self.producer = producer
def render(self, request):
size = os.stat(somefile).st_size
request.setHeader('Content-Type', 'application/octet-stream')
request.setHeader('Content-Length', size)
request.setHeader('Content-Disposition', 'attachment; filename="' + somefile + '"')
request.setHeader('Accept-Ranges', 'bytes')
fp = open(somefile, 'rb')
d = self.producer.beginFileTransfer(fp, request)
def err(error):
print "error %s", error
def cbFinished(ignored):
fp.close()
request.finish()
d.addErrback(err).addCallback(cbFinished)
return server.NOT_DONE_YET
producer = FileSender()
root_resource = resource.Resource()
root_resource.putChild('download', DownloadPage(producer))
site = server.Site(root_resource)
tsite = ThrottlingFactory(site, readLimit=10000, writeLimit=10000)
tsite.protocol.producer = producer
reactor.listenTCP(8080, tsite)
reactor.run()
```
**UPDATE**
So sometime after I run it:
```
2012-10-25 09:17:03+0600 [-] Unhandled Error
Traceback (most recent call last):
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/application/app.py", line 402, in startReactor
self.config, oldstdout, oldstderr, self.profiler, reactor)
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/application/app.py", line 323, in runReactorWithLogging
reactor.run()
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/internet/base.py", line 1169, in run
self.mainLoop()
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/internet/base.py", line 1178, in mainLoop
self.runUntilCurrent()
--- <exception caught here> ---
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/internet/base.py", line 800, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/protocols/policies.py", line 334, in unthrottleWrites
p.unthrottleWrites()
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/protocols/policies.py", line 225, in unthrottleWrites
self.producer.resumeProducing()
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/protocols/basic.py", line 919, in resumeProducing
self.consumer.unregisterProducer()
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/web/http.py", line 811, in unregisterProducer
self.transport.unregisterProducer()
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/protocols/policies.py", line 209, in unregisterProducer
del self.producer
exceptions.AttributeError: ThrottlingProtocol instance has no attribute 'producer'
```
I see that I'm not supposed to assign the producer the way I do now (`tsite.protocol.producer = producer`), but I'm new to Twisted and I don't know how to do it another way.
|
2012/10/24
|
[
"https://Stackoverflow.com/questions/13047458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1770691/"
] |
This does **not** look like **clustering** to me.
Instead, I figure you want a simple **decision tree classification**.
It should already be available in Rapidminer.
|
You could use the "Generate Attributes" operator.
This creates new attributes from existing ones.
It would be relatively tiresome to create all the rules but they would be something like
`cluster : if (((A==0)&&(B==0)&&(C==0)),1,0)`
|
2,537,929
|
I have a python logger set up, using python's logging module. I want to store the string I'm using with the logging Formatter object in a configuration file using the ConfigParser module.
The format string is stored in a dictionary of settings in a separate file that handles the reading and writing of the config file. The problem I have is that python still tries to format the file and falls over when it reads all the logging-module-specific formatting flags.
```
{
"log_level":logging.debug,
"log_name":"C:\\Temp\\logfile.log",
"format_string":
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
}
```
My question is simple: how can I disable the formatting functionality here while keeping it elsewhere? My initial reaction was copious use of the backslash to escape the various percent symbols, but that of course permanently breaks the formatting such that it won't work even when I need it to.
I should also mention, since it was brought up in the comments, that ConfigParser does some internal interpolation that causes the trip-up. Here is my traceback:
```
Traceback (most recent call last):
File "initialconfig.py", line 52, in <module>
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
File "initialconfig.py", line 31, in add_settings
self.set(section_name, setting_name, default_value)
File "C:\Python26\lib\ConfigParser.py", line 668, in set
"position %d" % (value, m.start()))
ValueError: invalid interpolation syntax in '%(asctime)s %(levelname)s: %(module
)s, line %(lineno)d - %(message)s' at position 10
```
Also, general pointers on good settings-file practices would be nice. This is the first time I've done anything significant with ConfigParser (or logging for that matter).
Thanks in advance,
Dominic
|
2010/03/29
|
[
"https://Stackoverflow.com/questions/2537929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/56815/"
] |
Did you try to escape percents with `%%`?
|
There's `RawConfigParser` which is like `ConfigParser` without the interpolation behaviour. If you don't use the interpolation feature in any other part of the configuration file, you can simply replace `ConfigParser` with `RawConfigParser` in your code.
See the documentation of [RawConfigParser](http://docs.python.org/library/configparser.html#ConfigParser.RawConfigParser) for more details.
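A minimal sketch of that (Python 2 style, to match the question; the section and file names are just examples):
```
import ConfigParser

config = ConfigParser.RawConfigParser()
config.add_section("logging")
# Stored verbatim -- RawConfigParser performs no '%(...)s' interpolation
config.set("logging", "format_string",
           "%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s")

with open("settings.cfg", "w") as configfile:
    config.write(configfile)
```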
|
2,537,929
|
I have a python logger set up, using python's logging module. I want to store the string I'm using with the logging Formatter object in a configuration file using the ConfigParser module.
The format string is stored in a dictionary of settings in a separate file that handles the reading and writing of the config file. The problem I have is that python still tries to format the file and falls over when it reads all the logging-module-specific formatting flags.
```
{
"log_level":logging.debug,
"log_name":"C:\\Temp\\logfile.log",
"format_string":
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
}
```
My question is simple: how can I disable the formatting functionality here while keeping it elsewhere? My initial reaction was copious use of the backslash to escape the various percent symbols, but that of course permanently breaks the formatting such that it won't work even when I need it to.
I should also mention, since it was brought up in the comments, that ConfigParser does some internal interpolation that causes the trip-up. Here is my traceback:
```
Traceback (most recent call last):
File "initialconfig.py", line 52, in <module>
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
File "initialconfig.py", line 31, in add_settings
self.set(section_name, setting_name, default_value)
File "C:\Python26\lib\ConfigParser.py", line 668, in set
"position %d" % (value, m.start()))
ValueError: invalid interpolation syntax in '%(asctime)s %(levelname)s: %(module
)s, line %(lineno)d - %(message)s' at position 10
```
Also, general pointers on good settings-file practices would be nice. This is the first time I've done anything significant with ConfigParser (or logging for that matter).
Thanks in advance,
Dominic
|
2010/03/29
|
[
"https://Stackoverflow.com/questions/2537929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/56815/"
] |
Did you try to escape percents with `%%`?
|
You can also use interpolation set to `None`.
```
config = ConfigParser(strict=False, interpolation=None)
```
(I am using Python `3.6.0`)
|
2,537,929
|
I have a python logger set up, using python's logging module. I want to store the string I'm using with the logging Formatter object in a configuration file using the ConfigParser module.
The format string is stored in a dictionary of settings in a separate file that handles the reading and writing of the config file. The problem I have is that python still tries to format the file and falls over when it reads all the logging-module-specific formatting flags.
```
{
"log_level":logging.debug,
"log_name":"C:\\Temp\\logfile.log",
"format_string":
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
}
```
My question is simple: how can I disable the formatting functionality here while keeping it elsewhere? My initial reaction was copious use of the backslash to escape the various percent symbols, but that of course permanently breaks the formatting such that it won't work even when I need it to.
I should also mention, since it was brought up in the comments, that ConfigParser does some internal interpolation that causes the trip-up. Here is my traceback:
```
Traceback (most recent call last):
File "initialconfig.py", line 52, in <module>
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
File "initialconfig.py", line 31, in add_settings
self.set(section_name, setting_name, default_value)
File "C:\Python26\lib\ConfigParser.py", line 668, in set
"position %d" % (value, m.start()))
ValueError: invalid interpolation syntax in '%(asctime)s %(levelname)s: %(module
)s, line %(lineno)d - %(message)s' at position 10
```
Also, general pointers on good settings-file practices would be nice. This is the first time I've done anything significant with ConfigParser (or logging for that matter).
Thanks in advance,
Dominic
|
2010/03/29
|
[
"https://Stackoverflow.com/questions/2537929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/56815/"
] |
You can also use interpolation set to `None`.
```
config = ConfigParser(strict=False, interpolation=None)
```
(I am using Python `3.6.0`)
|
There's `RawConfigParser` which is like `ConfigParser` without the interpolation behaviour. If you don't use the interpolation feature in any other part of the configuration file, you can simply replace `ConfigParser` with `RawConfigParser` in your code.
See the documentation of [RawConfigParser](http://docs.python.org/library/configparser.html#ConfigParser.RawConfigParser) for more details.
|
2,537,929
|
I have a python logger set up, using python's logging module. I want to store the string I'm using with the logging Formatter object in a configuration file using the ConfigParser module.
The format string is stored in a dictionary of settings in a separate file that handles the reading and writing of the config file. The problem I have is that python still tries to format the file and falls over when it reads all the logging-module-specific formatting flags.
```
{
"log_level":logging.debug,
"log_name":"C:\\Temp\\logfile.log",
"format_string":
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
}
```
My question is simple: how can I disable the formatting functionality here while keeping it elsewhere? My initial reaction was copious use of the backslash to escape the various percent symbols, but that of course permanently breaks the formatting such that it won't work even when I need it to.
I should also mention, since it was brought up in the comments, that ConfigParser does some internal interpolation that causes the trip-up. Here is my traceback:
```
Traceback (most recent call last):
File "initialconfig.py", line 52, in <module>
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
File "initialconfig.py", line 31, in add_settings
self.set(section_name, setting_name, default_value)
File "C:\Python26\lib\ConfigParser.py", line 668, in set
"position %d" % (value, m.start()))
ValueError: invalid interpolation syntax in '%(asctime)s %(levelname)s: %(module
)s, line %(lineno)d - %(message)s' at position 10
```
Also, general pointers on good settings-file practices would be nice. This is the first time I've done anything significant with ConfigParser (or logging for that matter).
Thanks in advance,
Dominic
|
2010/03/29
|
[
"https://Stackoverflow.com/questions/2537929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/56815/"
] |
You might wanna use `ConfigParser.RawConfigParser` instead of `ConfigParser.ConfigParser`. Only the latter does magical interpolation on config values.
**EDIT:**
Actually, using `ConfigParser.SafeConfigParser` you'll be able to escape format strings with an additional `%` percent sign. This example should then work:
```
{
"log_level":logging.debug,
"log_name":"C:\\Temp\\logfile.log",
"format_string":
"%%(asctime)s %%(levelname)s: %%(module)s, line %%(lineno)d - %%(message)s"
}
```
|
What is the problem with the code above? Interpolation is only performed if the `%` operator is applied to the string. If you don't use `%` you can use the formatting string like any other string.
|
2,537,929
|
I have a python logger set up, using python's logging module. I want to store the string I'm using with the logging Formatter object in a configuration file using the ConfigParser module.
The format string is stored in a dictionary of settings in a separate file that handles the reading and writing of the config file. The problem I have is that python still tries to format the file and falls over when it reads all the logging-module-specific formatting flags.
```
{
"log_level":logging.debug,
"log_name":"C:\\Temp\\logfile.log",
"format_string":
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
}
```
My question is simple: how can I disable the formatting functionality here while keeping it elsewhere? My initial reaction was copious use of the backslash to escape the various percent symbols, but that of course permanently breaks the formatting such that it won't work even when I need it to.
I should also mention, since it was brought up in the comments, that ConfigParser does some internal interpolation that causes the trip-up. Here is my traceback:
```
Traceback (most recent call last):
File "initialconfig.py", line 52, in <module>
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
File "initialconfig.py", line 31, in add_settings
self.set(section_name, setting_name, default_value)
File "C:\Python26\lib\ConfigParser.py", line 668, in set
"position %d" % (value, m.start()))
ValueError: invalid interpolation syntax in '%(asctime)s %(levelname)s: %(module
)s, line %(lineno)d - %(message)s' at position 10
```
Also, general pointers on good settings-file practices would be nice. This is the first time I've done anything significant with ConfigParser (or logging for that matter).
Thanks in advance,
Dominic
|
2010/03/29
|
[
"https://Stackoverflow.com/questions/2537929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/56815/"
] |
Did you try to escape percents with `%%`?
|
What is the problem with the code above? Interpolation is only performed if the `%` operator is applied to the string. If you don't use `%` you can use the formatting string like any other string.
|
2,537,929
|
I have a python logger set up, using python's logging module. I want to store the string I'm using with the logging Formatter object in a configuration file using the ConfigParser module.
The format string is stored in a dictionary of settings in a separate file that handles the reading and writing of the config file. The problem I have is that python still tries to format the file and falls over when it reads all the logging-module-specific formatting flags.
```
{
"log_level":logging.debug,
"log_name":"C:\\Temp\\logfile.log",
"format_string":
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
}
```
My question is simple: how can I disable the formatting functionality here while keeping it elsewhere? My initial reaction was copious use of the backslash to escape the various percent symbols, but that of course permanently breaks the formatting such that it won't work even when I need it to.
I should also mention, since it was brought up in the comments, that ConfigParser does some internal interpolation that causes the trip-up. Here is my traceback:
```
Traceback (most recent call last):
File "initialconfig.py", line 52, in <module>
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
File "initialconfig.py", line 31, in add_settings
self.set(section_name, setting_name, default_value)
File "C:\Python26\lib\ConfigParser.py", line 668, in set
"position %d" % (value, m.start()))
ValueError: invalid interpolation syntax in '%(asctime)s %(levelname)s: %(module
)s, line %(lineno)d - %(message)s' at position 10
```
Also, general pointers on good settings-file practices would be nice. This is the first time I've done anything significant with ConfigParser (or logging for that matter).
Thanks in advance,
Dominic
|
2010/03/29
|
[
"https://Stackoverflow.com/questions/2537929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/56815/"
] |
You can also disable interpolation by setting it to `None`.
```
config = ConfigParser(strict=False, interpolation=None)
```
(I am using Python `3.6.0`)
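As a sketch of how that applies to the logging format string in question (Python 3, hypothetical `logging` section):
```
from configparser import ConfigParser

parser = ConfigParser(interpolation=None)
parser.add_section('logging')
parser.set('logging', 'format_string',
           "%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s")

fmt = parser.get('logging', 'format_string')  # comes back verbatim, no escaping needed
```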
|
What version of Python are you using? An upgrade to 2.6.4 may help, see
<http://bugs.python.org/issue5741>
|
2,537,929
|
I have a python logger set up, using python's logging module. I want to store the string I'm using with the logging Formatter object in a configuration file using the ConfigParser module.
The format string is stored in a dictionary of settings in a separate file that handles the reading and writing of the config file. The problem I have is that python still tries to format the file and falls over when it reads all the logging-module-specific formatting flags.
```
{
"log_level":logging.debug,
"log_name":"C:\\Temp\\logfile.log",
"format_string":
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
}
```
My question is simple: how can I disable the formatting functionality here while keeping it elsewhere? My initial reaction was copious use of the backslash to escape the various percent symbols, but that of course permanently breaks the formatting such that it won't work even when I need it to.
I should also mention, since it was brought up in the comments, that ConfigParser does some internal interpolation that causes the trip-up. Here is my traceback:
```
Traceback (most recent call last):
File "initialconfig.py", line 52, in <module>
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
File "initialconfig.py", line 31, in add_settings
self.set(section_name, setting_name, default_value)
File "C:\Python26\lib\ConfigParser.py", line 668, in set
"position %d" % (value, m.start()))
ValueError: invalid interpolation syntax in '%(asctime)s %(levelname)s: %(module
)s, line %(lineno)d - %(message)s' at position 10
```
Also, general pointers on good settings-file practices would be nice. This is the first time I've done anything significant with ConfigParser (or logging for that matter).
Thanks in advance,
Dominic
|
2010/03/29
|
[
"https://Stackoverflow.com/questions/2537929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/56815/"
] |
There's `RawConfigParser` which is like `ConfigParser` without the interpolation behaviour. If you don't use the interpolation feature in any other part of the configuration file, you can simply replace `ConfigParser` with `RawConfigParser` in your code.
See the documentation of [RawConfigParser](http://docs.python.org/library/configparser.html#ConfigParser.RawConfigParser) for more details.
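A minimal sketch of the swap, assuming Python 2 and a hypothetical `logging` section:
```
import ConfigParser

parser = ConfigParser.RawConfigParser()
parser.add_section('logging')
# RawConfigParser never interpolates, so the % flags pass through untouched.
parser.set('logging', 'format_string',
           "%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s")

with open('settings.ini', 'w') as f:
    parser.write(f)
```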
|
What version of Python are you using? An upgrade to 2.6.4 may help, see
<http://bugs.python.org/issue5741>
|
2,537,929
|
I have a python logger set up, using python's logging module. I want to store the string I'm using with the logging Formatter object in a configuration file using the ConfigParser module.
The format string is stored in a dictionary of settings in a separate file that handles the reading and writing of the config file. The problem I have is that python still tries to format the file and falls over when it reads all the logging-module-specific formatting flags.
```
{
"log_level":logging.debug,
"log_name":"C:\\Temp\\logfile.log",
"format_string":
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
}
```
My question is simple: how can I disable the formatting functionality here while keeping it elsewhere? My initial reaction was copious use of the backslash to escape the various percent symbols, but that of course permanently breaks the formatting such that it won't work even when I need it to.
I should also mention, since it was brought up in the comments, that ConfigParser does some internal interpolation that causes the trip-up. Here is my traceback:
```
Traceback (most recent call last):
File "initialconfig.py", line 52, in <module>
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
File "initialconfig.py", line 31, in add_settings
self.set(section_name, setting_name, default_value)
File "C:\Python26\lib\ConfigParser.py", line 668, in set
"position %d" % (value, m.start()))
ValueError: invalid interpolation syntax in '%(asctime)s %(levelname)s: %(module
)s, line %(lineno)d - %(message)s' at position 10
```
Also, general pointers on good settings-file practices would be nice. This is the first time I've done anything significant with ConfigParser (or logging for that matter).
Thanks in advance,
Dominic
|
2010/03/29
|
[
"https://Stackoverflow.com/questions/2537929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/56815/"
] |
You can also disable interpolation by setting it to `None`.
```
config = ConfigParser(strict=False, interpolation=None)
```
(I am using Python `3.6.0`)
|
What is the problem with the code above? Interpolation is only performed if the `%` operator is applied to the string. If you don't use `%` you can use the formatting string like any other string.
|
2,537,929
|
I have a python logger set up, using python's logging module. I want to store the string I'm using with the logging Formatter object in a configuration file using the ConfigParser module.
The format string is stored in a dictionary of settings in a separate file that handles the reading and writing of the config file. The problem I have is that python still tries to format the file and falls over when it reads all the logging-module-specific formatting flags.
```
{
"log_level":logging.debug,
"log_name":"C:\\Temp\\logfile.log",
"format_string":
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
}
```
My question is simple: how can I disable the formatting functionality here while keeping it elsewhere? My initial reaction was copious use of the backslash to escape the various percent symbols, but that of course permanently breaks the formatting such that it won't work even when I need it to.
I should also mention, since it was brought up in the comments, that ConfigParser does some internal interpolation that causes the trip-up. Here is my traceback:
```
Traceback (most recent call last):
File "initialconfig.py", line 52, in <module>
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
File "initialconfig.py", line 31, in add_settings
self.set(section_name, setting_name, default_value)
File "C:\Python26\lib\ConfigParser.py", line 668, in set
"position %d" % (value, m.start()))
ValueError: invalid interpolation syntax in '%(asctime)s %(levelname)s: %(module
)s, line %(lineno)d - %(message)s' at position 10
```
Also, general pointers on good settings-file practices would be nice. This is the first time I've done anything significant with ConfigParser (or logging for that matter).
Thanks in advance,
Dominic
|
2010/03/29
|
[
"https://Stackoverflow.com/questions/2537929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/56815/"
] |
Did you try to escape percents with `%%`?
|
You might wanna use `ConfigParser.RawConfigParser` instead of `ConfigParser.ConfigParser`. Only the latter does magical interpolation on config values.
**EDIT:**
Actually, using `ConfigParser.SafeConfigParser` you'll be able to escape format strings with an additional `%` percent sign. This example should work then:
```
{
"log_level":logging.debug,
"log_name":"C:\\Temp\\logfile.log",
"format_string":
"%%(asctime)s %%(levelname)s: %%(module)s, line %%(lineno)d - %%(message)s"
}
```
|
2,537,929
|
I have a python logger set up, using python's logging module. I want to store the string I'm using with the logging Formatter object in a configuration file using the ConfigParser module.
The format string is stored in a dictionary of settings in a separate file that handles the reading and writing of the config file. The problem I have is that python still tries to format the file and falls over when it reads all the logging-module-specific formatting flags.
```
{
"log_level":logging.debug,
"log_name":"C:\\Temp\\logfile.log",
"format_string":
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
}
```
My question is simple: how can I disable the formatting functionality here while keeping it elsewhere? My initial reaction was copious use of the backslash to escape the various percent symbols, but that of course permanently breaks the formatting such that it won't work even when I need it to.
I should also mention, since it was brought up in the comments, that ConfigParser does some internal interpolation that causes the trip-up. Here is my traceback:
```
Traceback (most recent call last):
File "initialconfig.py", line 52, in <module>
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
File "initialconfig.py", line 31, in add_settings
self.set(section_name, setting_name, default_value)
File "C:\Python26\lib\ConfigParser.py", line 668, in set
"position %d" % (value, m.start()))
ValueError: invalid interpolation syntax in '%(asctime)s %(levelname)s: %(module
)s, line %(lineno)d - %(message)s' at position 10
```
Also, general pointers on good settings-file practices would be nice. This is the first time I've done anything significant with ConfigParser (or logging for that matter).
Thanks in advance,
Dominic
|
2010/03/29
|
[
"https://Stackoverflow.com/questions/2537929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/56815/"
] |
There's `RawConfigParser` which is like `ConfigParser` without the interpolation behaviour. If you don't use the interpolation feature in any other part of the configuration file, you can simply replace `ConfigParser` with `RawConfigParser` in your code.
See the documentation of [RawConfigParser](http://docs.python.org/library/configparser.html#ConfigParser.RawConfigParser) for more details.
|
What is the problem with the code above? Interpolation is only performed if the `%` operator is applied to the string. If you don't use `%` you can use the formatting string like any other string.
|
15,863,657
|
Kind of a newbie in Python, starting to learn how Python works with strings and iteration over strings.
I had worked on a chunk of so-called 'Palindrome' code; would you take a look at which part exactly is going wrong?
```
def palindrome(s):
if len(s) < 1:
return True
else:
i = 0
j = len(s) - 1
r = s[::-1]
print "s is %s" % s,
print "r is %s" % r
while s[j] == r[i] and j != 0:
print "s[j] is %s" % s[j],
print "; r[i] is %s" % r[i]
i += 1
j -= 1
return True
return False
```
I've used all those print statements to make sure where the code is going. The program is supposed to compare a string and its reverse, deciding whether it is a palindrome or not.
|
2013/04/07
|
[
"https://Stackoverflow.com/questions/15863657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1540033/"
] |
Below you have a little less complicated solution:
```
def is_palindrome(s):
    return s == s[::-1]
```
In your version you are always returning `True` for all strings with `len(s) >= 1`
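If you want to keep the two-pointer idea from the question, a minimal sketch of a fixed loop version (the key change is returning `True` only after the whole loop has compared every pair):
```
def palindrome(s):
    # compare characters from both ends, moving inward
    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            return False
        i += 1
        j -= 1
    return True
```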
|
I may be missing something obvious, but shouldn't this be enough?
```
def ispalindrome(s):
    return s == s[::-1]
```
|
23,554,644
|
I was able to get my flask app running as a service thanks to [Is it possible to run a Python script as a service in Windows? If possible, how?](https://stackoverflow.com/questions/32404/is-it-possible-to-run-a-python-script-as-a-service-in-windows-if-possible-how), but when it comes to stopping it I cannot. I have to terminate the process in task manager.
Here's my run.py which I turn into a service via run.py install:
```
from app import app
from multiprocessing import Process
import win32serviceutil
import win32service
import win32event
import servicemanager
import socket
class AppServerSvc (win32serviceutil.ServiceFramework):
    _svc_name_ = "CCApp"
    _svc_display_name_ = "CC App"

    def __init__(self,args):
        win32serviceutil.ServiceFramework.__init__(self,args)
        self.hWaitStop = win32event.CreateEvent(None,0,0,None)
        socket.setdefaulttimeout(60)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.hWaitStop)
        server.terminate()
        server.join()

    def SvcDoRun(self):
        servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
                              servicemanager.PYS_SERVICE_STARTED,
                              (self._svc_name_,''))
        self.main()

    def main(self):
        server = Process(app.run(host = '192.168.1.6'))
        server.start()

if __name__ == '__main__':
    win32serviceutil.HandleCommandLine(AppServerSvc)
```
I got the process stuff from this post: <http://librelist.com/browser/flask/2011/1/10/start-stop-flask/#a235e60dcaebaa1e134271e029f801fe> but unfortunately it doesn't work either.
The log file in Event Viewer says that the global variable 'server' is not defined. However, I've made server a global variable and it still gives me the same error.
|
2014/05/09
|
[
"https://Stackoverflow.com/questions/23554644",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2917993/"
] |
You can stop the Werkzeug web server gracefully before you stop the Win32 server. Example:
```
from flask import request

def shutdown_server():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running with the Werkzeug Server')
    func()

@app.route('/shutdown', methods=['POST'])
def shutdown():
    shutdown_server()
    return 'Server shutting down...'
```
If you add this to your Flask server you can then request a graceful server shutdown by sending a `POST` request to `/shutdown`. You can use requests or urllib2 to do this. Depending on your situation you may need to protect this route against unauthorized access.
Once the server has stopped I think you will have no problem stopping the Win32 service.
Note that the shutdown code above appears in this [Flask snippet](http://flask.pocoo.org/snippets/67/).
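On the client side, a minimal sketch of triggering that shutdown with `requests` (the host comes from the question; the port assumes Flask's default of 5000):
```
import requests

# Ask the running Flask app to shut itself down before stopping the Win32 service.
requests.post('http://192.168.1.6:5000/shutdown')
```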
|
I recommend you use <http://supervisord.org/>. It does not run natively on Windows, but with Cygwin you can run supervisor just as on Linux, including running it as a service.
To install Supervisord: <https://stackoverflow.com/a/18032347/3380763>
After installing, you must configure the app; here is an example: <http://flaviusim.com/blog/Deploying-Flask-with-nginx-uWSGI-and-Supervisor/> (It is not necessary to use Nginx; the Supervisor configuration alone is enough.)
|
23,554,644
|
I was able to get my flask app running as a service thanks to [Is it possible to run a Python script as a service in Windows? If possible, how?](https://stackoverflow.com/questions/32404/is-it-possible-to-run-a-python-script-as-a-service-in-windows-if-possible-how), but when it comes to stopping it I cannot. I have to terminate the process in task manager.
Here's my run.py which I turn into a service via run.py install:
```
from app import app
from multiprocessing import Process
import win32serviceutil
import win32service
import win32event
import servicemanager
import socket
class AppServerSvc (win32serviceutil.ServiceFramework):
    _svc_name_ = "CCApp"
    _svc_display_name_ = "CC App"

    def __init__(self,args):
        win32serviceutil.ServiceFramework.__init__(self,args)
        self.hWaitStop = win32event.CreateEvent(None,0,0,None)
        socket.setdefaulttimeout(60)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.hWaitStop)
        server.terminate()
        server.join()

    def SvcDoRun(self):
        servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
                              servicemanager.PYS_SERVICE_STARTED,
                              (self._svc_name_,''))
        self.main()

    def main(self):
        server = Process(app.run(host = '192.168.1.6'))
        server.start()

if __name__ == '__main__':
    win32serviceutil.HandleCommandLine(AppServerSvc)
```
I got the process stuff from this post: <http://librelist.com/browser/flask/2011/1/10/start-stop-flask/#a235e60dcaebaa1e134271e029f801fe> but unfortunately it doesn't work either.
The log file in Event Viewer says that the global variable 'server' is not defined. However, I've made server a global variable and it still gives me the same error.
|
2014/05/09
|
[
"https://Stackoverflow.com/questions/23554644",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2917993/"
] |
You can stop the Werkzeug web server gracefully before you stop the Win32 server. Example:
```
from flask import request

def shutdown_server():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running with the Werkzeug Server')
    func()

@app.route('/shutdown', methods=['POST'])
def shutdown():
    shutdown_server()
    return 'Server shutting down...'
```
If you add this to your Flask server you can then request a graceful server shutdown by sending a `POST` request to `/shutdown`. You can use requests or urllib2 to do this. Depending on your situation you may need to protect this route against unauthorized access.
Once the server has stopped I think you will have no problem stopping the Win32 service.
Note that the shutdown code above appears in this [Flask snippet](http://flask.pocoo.org/snippets/67/).
|
You could also trick Flask into believing you pressed `Ctrl` + `C`:
```
def shutdown_flask(self):
    from win32api import GenerateConsoleCtrlEvent
    CTRL_C_EVENT = 0
    GenerateConsoleCtrlEvent(CTRL_C_EVENT, 0)
```
Then simply call `shutdown_flask()` in your `SvcStop()`:
```
try:
    # try to exit gracefully
    self.shutdown_flask()
except Exception as e:
    # force quit
    os._exit(0)
```
Should `shutdown_flask()` fail for some reason, `os._exit()` makes sure your service will end (albeit with a nasty warning), by halting the interpreter.
|
1,817,780
|
I have created a python script which pulls data out of OLE streams in Word documents, but am having trouble converting the OLE2-formatted timestamp to something more human-readable :(
The timestamp which is pulled out is 12760233021 but I cannot for the life of me convert this to a date like 12 Mar 2007 or similar.
Any help is greatly appreciated.
EDIT:
OK, I have run the script over one of my word documents, which was created on **31/10/2009, 10:05:00**. The Create Date in the OLE DocumentSummaryInformation stream is **12901417500**.
Another example: a word doc created on 27/10/2009, 15:33:00 gives a Create Date of 12901091580 in the OLE DocumentSummaryInformation stream.
The MSDN documentation on the properties of these OLE streams is <http://msdn.microsoft.com/en-us/library/aa380376%28VS.85%29.aspx>
The def which pulls these streams out is given below:
```
import OleFileIO_PL as ole

def enumerateStreams(item):
    # item is an arbitrary file
    if ole.isOleFile('%s' % item):
        loader = ole.OleFileIO('%s' % item)
        # enumerate all the OLE streams in the office file
        streams = loader.listdir()
        streamProps = []
        for stream in streams:
            if stream[0] == '\x05SummaryInformation':
                # get all the properties from the SummaryInformation OLE stream
                streamProps.append(loader.getproperties(stream))
            elif stream[0] == '\x05DocumentSummaryInformation':
                # get all the properties from the DocumentSummaryInformation stream
                streamProps.append(loader.getproperties(stream))
        return streamProps
```
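For what it's worth, those create-date values behave like whole seconds counted from 1601-01-01 (the Windows FILETIME epoch); a hedged sketch of that interpretation, which lands within a timezone offset of the stated creation times:
```
from datetime import datetime, timedelta

def ole_seconds_to_datetime(value):
    # Assumption: the property value counts seconds since 1601-01-01 (FILETIME epoch).
    return datetime(1601, 1, 1) + timedelta(seconds=value)

print(ole_seconds_to_datetime(12901417500))  # 2009-10-30 23:05:00, i.e. the morning of 31/10/2009 local time
```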
|
2009/11/30
|
[
"https://Stackoverflow.com/questions/1817780",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Well, Python 3.0 and 3.1 are already released, so you can check this out for yourself. The end result was that map and filter were kept as built-ins, and lambda was also kept. The only change was that reduce was moved to the functools module; you just need to do
```
from functools import reduce
```
to use it.
Future 3.x releases can be expected to remain backwards-compatible with 3.0 and 3.1 in this respect.
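For example, a quick sketch of the relocated `reduce`:
```
from functools import reduce

total = reduce(lambda a, b: a + b, [1, 2, 3, 4])  # 10
```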
|
In Python 3.x, Python continues to have a rich set of functional-ish tools built in: list comprehensions, generator expressions, iterators and generators, and functions like `any()` and `all()` that have short-circuit evaluation wherever possible.
Python's "Benevolent Dictator For Life" floated the idea of removing `map()` because you can trivially reproduce its effects with a list comprehension:
```
lst2 = map(foo, lst)
lst3 = [foo(x) for x in lst]
lst2 == lst3 # evaluates True
```
Python's `lambda` feature has not been removed or renamed, and likely never will be. However, it will likely never become more powerful, either. Python's `lambda` is restricted to a single expression; it cannot include statements and it cannot include multiple lines of Python code.
Python's plain-old-standard `def` defines a function object, which can be passed around just as easily as a `lambda` object. You can even unbind the name of the function after you define it, if you really want to do so.
Example:
```
# NOT LEGAL PYTHON
lst2 = map(lambda x: if foo(x): x**2; else: x, lst)
# perfectly legal Python
def lambda_function(x):
if foo(x):
return x**2
else:
return x
lst2 = map(lambda_function, lst)
del(lambda_function) # can unbind the name if you wish
```
Note that you could actually use the "ternary operator" in a `lambda` so the above example is a bit contrived.
```
lst2 = map(lambda x: x**2 if foo(x) else x, lst)
```
But some multiline functions are difficult to force into a `lambda` and are better handled as simple ordinary multiline functions.
Python 3.x has lost none of its functional power. There is some general feeling that list comprehensions and generator expressions are probably preferable to `map()`; in particular, generator expressions can sometimes be used to do the equivalent of a `map()` but without allocating a list and then freeing it again. For example:
```
total = sum(map(foo, lst))
total2 = sum(foo(x) for x in lst)
assert total == total2 # same result
```
In Python 2.x, the `map()` allocates a new list, which is summed and immediately freed. The generator expression gets the values one at a time and never ties up the memory of a whole list of values.
In Python 3.x, `map()` is "lazy" so both are about equally efficient. But as a result, in Python 3.x the ternary lambda example needs to be forced to expand into a list:
```
lst2 = list(map(lambda x: x**2 if foo(x) else x, lst))
```
Easier to just write the list comprehension!
```
lst2 = [x**2 if foo(x) else x for x in lst]
```
|
55,118,630
|
Is there a way to start a python script on a server from a webpage?
At work I've made a simple python script using selenium to do a routine job (open a webpage and click a few buttons).
I want to be able to start this remotely (still on company network) but due to security/permissions here at work I can't use telnet/ssh/etc or install PHP (which I've read is one way to do it).
I've made simple python servers earlier, which can send/receive REST request, but I can't find a way to use javascript (which I'm somewhat comfortable with) to send from a webpage.
I've found several search results where people suggest AJAX, but I can't manage to make it work
It doesn't have to be anything more than a blank page with a single button on that sends a request to my server which in turn starts the script.
Thanks for any help.
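For what it's worth, a minimal standard-library sketch of that idea, an HTTP endpoint that launches the script on a POST (the script path is a placeholder); the button on the page would just send a POST to this address:
```
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class RunScriptHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # "routine_job.py" is a placeholder for the selenium script.
        subprocess.Popen(['python', 'routine_job.py'])
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'started')

HTTPServer(('0.0.0.0', 8000), RunScriptHandler).serve_forever()
```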
|
2019/03/12
|
[
"https://Stackoverflow.com/questions/55118630",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3379555/"
] |
Strings are immutable. They cannot change. Any action you perform on them will result in a "new" string that is returned by the method you called upon it.
Read more about it: [Immutability of Strings in Java](https://stackoverflow.com/questions/1552301/immutability-of-strings-in-java)
So in your example, if you wish to change your string you need to overwrite the variable that references your original string, so that it points to the new string value.
```
String tempString= "abc is very easy";
tempString = tempString.replace("very","not");
System.out.println("tempString is "+tempString);
```
Under the hood for your understanding:
>
> Assign String "abc is very easy" to memory address 0x03333
>
> Assign address 0x03333 to variable tempString
>
> Call method replace, create new modified string at address 0x05555
>
> Assign address 0x05555 to temp String variable
>
> Return temp String variable from method replace
>
> Assign address from temp String variable to variable tempString
>
> Address from tempString is now 0x05555.
>
> If the string at 0x03333 was not a string literal defined in code (and thus held in the String pool), the String at address 0x03333 will be garbage collected.
>
>
>
|
You can reassign `tempString` with its new value :
```
String tempString= "abc is very easy";
tempString = tempString.replace("very","not");
System.out.println("tempString is "+tempString);
```
Result is :
```
tempString is abc is not easy
```
Best
|
29,999,482
|
I try to "click" Javascript alert for reboot confirmation in DSL modem with a Python script as follows:
```
#!/usr/bin/env python
import selenium
import time
from selenium import webdriver
cap = {u'acceptSslCerts': True,
u'applicationCacheEnabled': True,
u'browserConnectionEnabled': True,
u'browserName': u'phantomjs',
u'cssSelectorsEnabled': True,
u'databaseEnabled': False,
u'driverName': u'ghostdriver',
u'driverVersion': u'1.1.0',
u'handlesAlerts': True,
u'javascriptEnabled': True,
u'locationContextEnabled': False,
u'nativeEvents': True,
u'platform': u'linux-unknown-64bit',
u'proxy': {u'proxyType': u'direct'},
u'rotatable': False,
u'takesScreenshot': True,
u'version': u'1.9.8',
u'webStorageEnabled': False}
driver = webdriver.PhantomJS('/usr/lib/node_modules/phantomjs/bin/phantomjs', desired_capabilities=cap)
driver.get('http://username:passwd@192.168.1.254')
sbtn = driver.find_element_by_id('reboto_btn')
sbtn.click()
time.sleep(4)
al = driver.switch_to_alert()
print al.accept()
```
However, I get the exception pasted below even though I do set `handlesAlerts` in `desired_capabilities`.
How can I fix that? What's the reason for the exception?
Exception:
```
---------------------------------------------------------------------------
WebDriverException Traceback (most recent call last)
/usr/local/bin/pjs/asus_reboot.py in <module>()
36 #ipdb.set_trace()
37
---> 38 print al.accept()
39
40 #print al.text
/usr/local/venvs/asusreboot/local/lib/python2.7/site-packages/selenium/webdriver/common/alert.pyc in accept(self)
76 Alert(driver).accept() # Confirm a alert dialog.
77 """
---> 78 self.driver.execute(Command.ACCEPT_ALERT)
79
80 def send_keys(self, keysToSend):
/usr/local/venvs/asusreboot/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.pyc in execute(self, driver_command, params)
173 response = self.command_executor.execute(driver_command, params)
174 if response:
--> 175 self.error_handler.check_response(response)
176 response['value'] = self._unwrap_value(
177 response.get('value', None))
/usr/local/venvs/asusreboot/local/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.pyc in check_response(self, response)
134 if exception_class == ErrorInResponseException:
135 raise exception_class(response, value)
--> 136 raise exception_class(value)
137 message = ''
138 if 'message' in value:
WebDriverException: Message: Invalid Command Method - {"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"53","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:36590","User-Agent":"Python-urllib/2.7"},"httpVersion":"1.1","method":"POST","post":"{\"sessionId\": \"fc97c240-f098-11e4-ae53-e17f38effd6c\"}","url":"/accept_alert","urlParsed":{"anchor":"","query":"","file":"accept_alert","directory":"/","path":"/accept_alert","relative":"/accept_alert","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/accept_alert","queryKey":{},"chunks":["accept_alert"]},"urlOriginal":"/session/fc97cea0-f098-11e4-ae53-e17f38eaad6c/accept_alert"}
```
|
2015/05/02
|
[
"https://Stackoverflow.com/questions/29999482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2022518/"
] |
As PhantomJS has no support for alert boxes, you need to use a script executor for this.
```
driver.execute_script("window.confirm = function(msg) { return true; }");
```
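In the flow from the question, the override would go in before the click that opens the dialog; a minimal sketch (the URL and element id are taken from the question, the PhantomJS path argument is omitted):
```
from selenium import webdriver

driver = webdriver.PhantomJS()
driver.get('http://username:passwd@192.168.1.254')
# Pre-approve confirm() dialogs, since GhostDriver cannot switch to them.
driver.execute_script("window.confirm = function(msg) { return true; };")
driver.find_element_by_id('reboto_btn').click()
```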
|
In Java, `driver.switchTo().alert().accept();` will do the job.
I am not sure why you are using `print al.accept()` - are you trying to print the alert text? In Java, `alert.getText()` will do that; sorry if I am wrong, as I am not sure about the Python side.
Thank You,
Murali
<http://seleniumtrainer.com/>
|
69,694,596
|
This is my code. It should send a message to the channel when a user joins the server.
```py
@client.event
async def on_member_join(member):
    print('+') #this works perfectly
    ch = client.get_channel(84319995256905728)
    await ch.send(f"{member.name} has joined")
```
But an error occurs. This is the output:
```py
Ignoring exception in on_member_join
Traceback (most recent call last):
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/nextcord/client.py", line 351, in _run_event
await coro(*args, **kwargs)
File "main.py", line 30, in on_member_join
await ch.send(f"{member.name} has joined")
AttributeError: 'NoneType' object has no attribute 'send'
```
I had enabled server member intents in dev portal.
```py
intents = nextcord.Intents.default()
intents.members = True
client = commands.Bot(command_prefix='=',intents=intents)
```
May I know how to fix it?
|
2021/10/24
|
[
"https://Stackoverflow.com/questions/69694596",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17067135/"
] |
You can approximate the requirement this way:
```
from collections.abc import Iterable
def f2(sequence_type, *elements):
    if isinstance(elements[0], Iterable):
        return sequence_type(elements[0])
    else:
        return sequence_type(elements[:1])
```
which is close, but fails for `f(list, 'ab')` which returns `['a', 'b']` not `['ab']`
It is hard to see why one would expect a *Python* function to treat strings of length 2 differently from strings of length 1. The language itself says that `list('ab') == ['a', 'b']`.
I suspect that is an expectation imported from languages like C that treat characters and strings as different datatypes, in other words I have reservations about that aspect of the question.
But saying you don't like the spec isn't a recipe for success, so that special treatment has to be coded as such:
```
def f(sequence_type, elements):
    if isinstance(elements, str) and len(elements) > 1 and sequence_type != str:
        return sequence_type([elements])
    else:
        return f2(sequence_type, elements)
```
The result is generic but the special-casing can't really be called clean.
|
The examples for `str` don't really fit the description of the function, but here are some implementations that pass your test cases - you want to wrap the element into a collection first for tuple/list/set before converting:
```py
def f(sequence_type, element):
    return sequence_type([element] if sequence_type != str else element)
```
```py
def f(sequence_type, element):
    return sequence_type((element,) if sequence_type != str else element)
```
```py
def f(sequence_type, element):
    return sequence_type({element} if sequence_type != str else element)
```
|
69,694,596
|
This is my code. It should send a message to the channel when a user joins the server.
```py
@client.event
async def on_member_join(member):
    print('+') #this works perfectly
    ch = client.get_channel(84319995256905728)
    await ch.send(f"{member.name} has joined")
```
But an error occurs. This is the output:
```py
Ignoring exception in on_member_join
Traceback (most recent call last):
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/nextcord/client.py", line 351, in _run_event
await coro(*args, **kwargs)
File "main.py", line 30, in on_member_join
await ch.send(f"{member.name} has joined")
AttributeError: 'NoneType' object has no attribute 'send'
```
I had enabled server member intents in dev portal.
```py
intents = nextcord.Intents.default()
intents.members = True
client = commands.Bot(command_prefix='=',intents=intents)
```
May I know how to fix it?
|
2021/10/24
|
[
"https://Stackoverflow.com/questions/69694596",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17067135/"
] |
You can approximate the requirement this way:
```
from collections.abc import Iterable
def f2(sequence_type, *elements):
    if isinstance(elements[0], Iterable):
        return sequence_type(elements[0])
    else:
        return sequence_type(elements[:1])
```
which is close, but fails for `f(list, 'ab')` which returns `['a', 'b']` not `['ab']`
It is hard to see why one would expect a *Python* function to treat strings of length 2 differently from strings of length 1. The language itself says that `list('ab') == ['a', 'b']`.
I suspect that is an expectation imported from languages like C that treat characters and strings as different datatypes, in other words I have reservations about that aspect of the question.
But saying you don't like the spec isn't a recipe for success, so that special treatment has to be coded as such:
```
def f(sequence_type, elements):
    if isinstance(elements, str) and len(elements) > 1 and sequence_type != str:
        return sequence_type([elements])
    else:
        return f2(sequence_type, elements)
```
The result is generic but the special-casing can't really be called clean.
|
You can try this one:
```
def f(sequence_type, element):
    g = lambda *args: args
    return str(g(element)[0]) if str == sequence_type else sequence_type(g(element))
```
|
69,694,596
|
This is my code. It should send a message to the channel when a user joins the server.
```py
@client.event
async def on_member_join(member):
    print('+') #this works perfectly
    ch = client.get_channel(84319995256905728)
    await ch.send(f"{member.name} has joined")
```
But an error occurs. This is the output:
```py
Ignoring exception in on_member_join
Traceback (most recent call last):
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/nextcord/client.py", line 351, in _run_event
await coro(*args, **kwargs)
File "main.py", line 30, in on_member_join
await ch.send(f"{member.name} has joined")
AttributeError: 'NoneType' object has no attribute 'send'
```
I had enabled server member intents in dev portal.
```py
intents = nextcord.Intents.default()
intents.members = True
client = commands.Bot(command_prefix='=',intents=intents)
```
May I know how to fix it?
|
2021/10/24
|
[
"https://Stackoverflow.com/questions/69694596",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17067135/"
] |
You can approximate the requirement this way:
```
from collections.abc import Iterable
def f2(sequence_type, *elements):
    if isinstance(elements[0], Iterable):
        return sequence_type(elements[0])
    else:
        return sequence_type(elements[:1])
```
which is close, but fails for `f(list, 'ab')` which returns `['a', 'b']` not `['ab']`
It is hard to see why one would expect a *Python* function to treat strings of length 2 differently from strings of length 1. The language itself says that `list('ab') == ['a', 'b']`.
I suspect that is an expectation imported from languages like C that treat characters and strings as different datatypes, in other words I have reservations about that aspect of the question.
But saying you don't like the spec isn't a recipe for success, so that special treatment has to be coded as such:
```
def f(sequence_type, elements):
    if isinstance(elements, str) and len(elements) > 1 and sequence_type != str:
        return sequence_type([elements])
    else:
        return f2(sequence_type, elements)
```
The result is generic but the special-casing can't really be called clean.
|
I think you want to change the behavior of the literals, in particular for the `str`-to-`list` case. Since the changes are decided by you, you either have to handle the cases explicitly with `if`-`else` or bypass them in some trickier way. In my solution I chose the latter, and I only needed a conditional for the `str` case to normalize its behavior. The mapping is string-based and is evaluated with `eval`.
```
def f(sequence_type, element):
    type_literal_map = {tuple: '({},)', set: '{{{}}}', list: '[{}]', str: '"{}"'}
    if isinstance(element, str) and not isinstance(sequence_type(), str):
        element = f'"{element}"'
    return eval(type_literal_map[sequence_type].format(element))

l = [(str, 'a'), (list, 'a'), (list, 'ab'), (list, 1), (tuple, 1), (set, 1), (str, 1), (str, [1])]
for t in l:
    print(t[0].__name__, t[1], '->', f(*t))
```
Output
```
str a -> a
list a -> ['a']
list ab -> ['ab']
list 1 -> [1]
tuple 1 -> (1,)
set 1 -> {1}
str 1 -> 1
str [1] -> [1]
```
|
52,879,261
|
I have a Spring Boot application and I am trying to implement Spring Security to override the default username and password generated by Spring, but it's not working. Spring is still using the default user credentials.
```
@EnableWebSecurity
@Configuration
public class WebSecurityConfigurtion extends WebSecurityConfigurerAdapter {
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
auth.inMemoryAuthentication().withUser("demo").password("demo").roles("USER").and().withUser("admin")
.password("admin").roles("USER", "ADMIN");
}
@Override
protected void configure(HttpSecurity http) throws Exception {
http.authorizeRequests().anyRequest().authenticated().and().httpBasic();
}
```
and this is my main spring boot class
```
@SpringBootApplication(exclude = { DataSourceAutoConfiguration.class,
DataSourceTransactionManagerAutoConfiguration.class ,SecurityAutoConfiguration.class})
@EnableAsync
@EntityScan({ "com.demo.entity" })
@EnableJpaRepositories(basePackages = {"com.demo.repositories"})
@ComponentScan({ "com.demo.controller", "com.demo.service" })
@Import({SwaggerConfig.class})
public class UdeManagerServiceApplication extends SpringBootServletInitializer {
@Override
protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
return application.sources(UdeManagerServiceApplication.class);
}
public static void main(String[] args) {
SpringApplication.run(UdeManagerServiceApplication.class, args);
}
```
These are the logs, and they show the generated password:
```
22:26:48.537 [main] DEBUG org.springframework.boot.devtools.settings.DevToolsSettings - Included patterns for restart : []
22:26:48.540 [main] DEBUG org.springframework.boot.devtools.settings.DevToolsSettings - Excluded patterns for restart : [/spring-boot-actuator/target/classes/, /spring-boot-devtools/target/classes/, /spring-boot/target/classes/, /spring-boot-starter-[\w-]+/, /spring-boot-autoconfigure/target/classes/, /spring-boot-starter/target/classes/]
22:26:48.540 [main] DEBUG org.springframework.boot.devtools.restart.ChangeableUrls - Matching URLs for reloading : [file:/C:/Work/Spring/UdeManagerService/target/classes/]
2018-10-18 22:26:48.797 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'configurationProperties' with highest search precedence
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.0.4.RELEASE)
2018-10-18 22:26:49.218 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'servletConfigInitParams' with lowest search precedence
2018-10-18 22:26:49.219 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'servletContextInitParams' with lowest search precedence
2018-10-18 22:26:49.220 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'systemProperties' with lowest search precedence
2018-10-18 22:26:49.221 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'systemEnvironment' with lowest search precedence
2018-10-18 22:26:49.222 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Initialized StandardServletEnvironment with PropertySources [StubPropertySource {name='servletConfigInitParams'}, StubPropertySource {name='servletContextInitParams'}, MapPropertySource {name='systemProperties'}, SystemEnvironmentPropertySource {name='systemEnvironment'}]
2018-10-18 22:26:49.967 INFO 7652 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : Starting UdeManagerServiceApplication on TRV-LT-ANSAR with PID 7652 (C:\Work\Spring\UdeManagerService\target\classes started by Ansar.Samad in C:\Work\Spring\UdeManagerService)
2018-10-18 22:26:49.973 DEBUG 7652 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : Running with Spring Boot v2.0.4.RELEASE, Spring v5.0.8.RELEASE
2018-10-18 22:26:49.976 INFO 7652 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : No active profile set, falling back to default profiles: default
2018-10-18 22:26:50.614 INFO 7652 --- [ restartedMain] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@23daa694: startup date [Thu Oct 18 22:26:50 IST 2018]; root of context hierarchy
2018-10-18 22:26:53.258 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Removing PropertySource 'defaultProperties'
2018-10-18 22:26:53.428 INFO 7652 --- [ restartedMain] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$82d409fb] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-10-18 22:26:53.936 INFO 7652 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2018-10-18 22:26:53.959 INFO 7652 --- [ restartedMain] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2018-10-18 22:26:53.960 INFO 7652 --- [ restartedMain] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.32
2018-10-18 22:26:53.966 INFO 7652 --- [ost-startStop-1] o.a.catalina.core.AprLifecycleListener : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [C:\Program Files\Java\jdk1.8.0_162\bin;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:/Program Files (x86)/Java/jre1.8.0_181/bin/client;C:/Program Files (x86)/Java/jre1.8.0_181/bin;C:/Program Files (x86)/Java/jre1.8.0_181/lib/i386;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin;C:\phython3.6\Scripts\;C:\phython3.6\;C:\ProgramData\Oracle\Java\javapath;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\Program Files\Java\jre1.8.0_131\bin;C:\cuia\bat;C:\apache-maven-3.5.0-bin\apache-maven-3.5.0\bin;JAVA_HOME;SPRING_HOME;C:\Program Files (x86)\Symantec\VIP Access Client\;C:\phython3.6;C:\Program Files\Git\cmd;C:\Program Files\dotnet\;C:\Program Files\Microsoft SQL Server\130\Tools\Binn\;TESTNG_HOME;C:\TestNg\testng-6.8.jar;C:\python-3.6.5\Scripts;C:\python-3.6.5;C:\Program Files (x86)\Yarn\bin\;C:\Program Files\nodejs\;;C:\Program Files\Microsoft VS Code\bin;C:\Users\Ansar.Samad\AppData\Local\Yarn\bin;C:\Users\Ansar.Samad\AppData\Local\Programs\Microsoft VS Code\bin;C:\Users\Ansar.Samad\AppData\Roaming\npm;C:\spring-tool-suite-3.9.5.RELEASE-e4.8.0-win32\sts-bundle\sts-3.9.5.RELEASE;;.]
2018-10-18 22:26:54.081 INFO 7652 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2018-10-18 22:26:54.082 DEBUG 7652 --- [ost-startStop-1] o.s.web.context.ContextLoader : Published root WebApplicationContext as ServletContext attribute with name [org.springframework.web.context.WebApplicationContext.ROOT]
2018-10-18 22:26:54.082 INFO 7652 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 3468 ms
2018-10-18 22:26:54.192 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
2018-10-18 22:26:54.193 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2018-10-18 22:26:54.193 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*]
2018-10-18 22:26:54.193 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*]
2018-10-18 22:26:54.194 INFO 7652 --- [ost-startStop-1] .s.DelegatingFilterProxyRegistrationBean : Mapping filter: 'springSecurityFilterChain' to: [/*]
2018-10-18 22:26:54.194 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Servlet dispatcherServlet mapped to [/]
2018-10-18 22:26:54.274 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Replacing PropertySource 'servletContextInitParams' with 'servletContextInitParams'
2018-10-18 22:26:54.308 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : Driver class com.mysql.jdbc.Driver found in Thread context class loader org.springframework.boot.devtools.restart.classloader.RestartClassLoader@4e843ef9
2018-10-18 22:26:54.407 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : HikariPool-1 - configuration:
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : allowPoolSuspension.............false
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : autoCommit......................true
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : catalog.........................none
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : connectionInitSql...............none
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : connectionTestQuery.............none
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : connectionTimeout...............30000
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSource......................none
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSourceClassName.............none
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSourceJNDI..................none
2018-10-18 22:26:54.411 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSourceProperties............{password=<masked>}
2018-10-18 22:22018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationAuthReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationHiddenReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationHttpMethodReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationImplicitParameterReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationImplicitParametersReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationNicknameIntoUniqueIdReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationNotesReader': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'systemProperties': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'systemEnvironment': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'org.springframework.context.annotation.ConfigurationClassPostProcessor.importRegistry': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'messageSource': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'applicationEventMulticaster': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'servletContext': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'contextParameters': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'contextAttributes': no URL paths identified
2018-10-18 22:35:45.256 INFO 6720 --- [ restartedMain] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-10-18 22:35:45.256 INFO 6720 --- [ restartedMain] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-10-18 22:35:45.271 DEBUG 6720 --- [ restartedMain] .m.m.a.ExceptionHandlerExceptionResolver : Looking for exception mappings: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@275b060b: startup date [Thu Oct 18 22:35:35 IST 2018]; root of context hierarchy
2018-10-18 22:35:45.599 INFO 6720 --- [ restartedMain] .s.s.UserDetailsServiceAutoConfiguration :
Using generated security password: e67dca07-3bc3-49e2-bc33-84c9643ef09a
2018-10-18 22:35:45.740 INFO 6720 --- [ restartedMain] o.s.s.web.DefaultSecurityFilterChain : Creating filter chain: org.springframework.security.web.util.matcher.AnyRequestMatcher@1, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6abf32c0, org.springframework.security.web.context.SecurityContextPersistenceFilter@b2b202e, org.springframework.security.web.header.HeaderWriterFilter@23ff67f5, org.springframework.security.web.csrf.CsrfFilter@a493c40, org.springframework.security.web.authentication.logout.LogoutFilter@776a660, org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter@334ce4e9, org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter@1f4c6b47, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4a4c14a3, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@7793ba1a, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@4edda6d8, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3e492163, org.springframework.security.web.session.SessionManagementFilter@68661010, org.springframework.security.web.access.ExceptionTranslationFilter@6920b121, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@35279a68]
2018-10-18 22:35:45.836 INFO 6720 --- [ restartedMain] o.s.b.d.a.OptionalLiveReloadServer : LiveReload server is running on port 35729
2018-10-18 22:35:45.867 INFO 6720 --- [ restartedMain] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2018-10-18 22:35:45.867 INFO 6720 --- [ restartedMain] o.s.j.e.a.AnnotationMBeanExporter : Bean with name 'dataSource' has been autodetected for JMX exposure
2018-10-18 22:35:45.883 INFO 6720 --- [ restartedMain] o.s.j.e.a.AnnotationMBeanExporter : Located MBean 'dataSource': registering with JMX server as MBean [com.zaxxer.hikari:name=dataSource,type=HikariDataSource]
2018-10-18 22:35:45.883 INFO 6720 --- [ restartedMain] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2018-10-18 22:35:45.883 INFO 6720 --- [ restartedMain] d.s.w.p.DocumentationPluginsBootstrapper : Context refreshed
2018-10-18 22:35:45.899 INFO 6720 --- [ restartedMain] d.s.w.p.DocumentationPluginsBootstrapper : Found 1 custom documentation plugin(s)
2018-10-18 22:35:45.914 INFO 6720 --- [ restartedMain] s.d.s.w.s.ApiListingReferenceScanner : Scanning for api listing references
2018-10-18 22:35:46.086 INFO 6720 --- [ restartedMain] .d.s.w.r.o.CachingOperationNameGenerator : Generating unique operation named: testUsingPOST_1
2018-10-18 22:35:46.101 INFO 6720 --- [ restartedMain] .d.s.w.r.o.CachingOperationNameGenerator : Generating unique operation named: getDataUsingGET_1
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Looking for resource handler mappings
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Found resource handler mapping: URL pattern="/**/favicon.ico", locations=[class path resource [META-INF/resources/], class path resource [resources/], class path resource [static/], class path resource [public/], ServletContext resource [/], class path resource []], resolvers=[org.springframework.web.servlet.resource.PathResourceResolver@3914bc92]
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Found resource handler mapping: URL pattern="/webjars/**", locations=[class path resource [META-INF/resources/webjars/]], resolvers=[org.springframework.web.servlet.resource.PathResourceResolver@3dc6f1f0]
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Found resource handler mapping: URL pattern="/**", locations=[class path resource [META-INF/resources/], class path resource [resources/], class path resource [static/], class path resource [public/], ServletContext resource [/]], resolvers=[org.springframework.web.servlet.resource.PathResourceResolver@315362d4]
2018-10-18 22:35:47.196 DEBUG 6720 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.mysql.jdbc.JDBC4Connection@edd709c
2018-10-18 22:35:49.180 INFO 6720 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2018-10-18 22:35:49.180 DEBUG 6720 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'server.ports' with highest search precedence
2018-10-18 22:35:49.180 INFO 6720 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : Started UdeManagerServiceApplication in 14.132 seconds (JVM running for 14.884)
2018-10-18 22:35:50.929 DEBUG 6720 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.mysql.jdbc.JDBC4Connection@7a5e8413
```
|
2018/10/18
|
[
"https://Stackoverflow.com/questions/52879261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10525271/"
] |
This can be done using the following properties in the `application.properties` or `application.yaml` file.
```
spring.security.user.name
spring.security.user.password
spring.security.user.roles
```
Take a look at [this.](https://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html)
|
Issue fixed by adding the config package, where the security classes are available, to the component scanning of the main class.
|
52,879,261
|
I have a Spring Boot application and I am trying to implement Spring Security to override the default username and password generated by Spring, but it's not working. Spring is still using the default user credentials.
```
@EnableWebSecurity
@Configuration
public class WebSecurityConfigurtion extends WebSecurityConfigurerAdapter {
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
auth.inMemoryAuthentication().withUser("demo").password("demo").roles("USER").and().withUser("admin")
.password("admin").roles("USER", "ADMIN");
}
@Override
protected void configure(HttpSecurity http) throws Exception {
http.authorizeRequests().anyRequest().authenticated().and().httpBasic();
}
}
```
And this is my main Spring Boot class:
```
@SpringBootApplication(exclude = { DataSourceAutoConfiguration.class,
DataSourceTransactionManagerAutoConfiguration.class ,SecurityAutoConfiguration.class})
@EnableAsync
@EntityScan({ "com.demo.entity" })
@EnableJpaRepositories(basePackages = {"com.demo.repositories"})
@ComponentScan({ "com.demo.controller", "com.demo.service" })
@Import({SwaggerConfig.class})
public class UdeManagerServiceApplication extends SpringBootServletInitializer {
@Override
protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
return application.sources(UdeManagerServiceApplication.class);
}
public static void main(String[] args) {
SpringApplication.run(UdeManagerServiceApplication.class, args);
}
}
```
These are the logs, which show the generated password:
```
22:26:48.537 [main] DEBUG org.springframework.boot.devtools.settings.DevToolsSettings - Included patterns for restart : []
22:26:48.540 [main] DEBUG org.springframework.boot.devtools.settings.DevToolsSettings - Excluded patterns for restart : [/spring-boot-actuator/target/classes/, /spring-boot-devtools/target/classes/, /spring-boot/target/classes/, /spring-boot-starter-[\w-]+/, /spring-boot-autoconfigure/target/classes/, /spring-boot-starter/target/classes/]
22:26:48.540 [main] DEBUG org.springframework.boot.devtools.restart.ChangeableUrls - Matching URLs for reloading : [file:/C:/Work/Spring/UdeManagerService/target/classes/]
2018-10-18 22:26:48.797 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'configurationProperties' with highest search precedence
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.0.4.RELEASE)
2018-10-18 22:26:49.218 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'servletConfigInitParams' with lowest search precedence
2018-10-18 22:26:49.219 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'servletContextInitParams' with lowest search precedence
2018-10-18 22:26:49.220 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'systemProperties' with lowest search precedence
2018-10-18 22:26:49.221 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'systemEnvironment' with lowest search precedence
2018-10-18 22:26:49.222 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Initialized StandardServletEnvironment with PropertySources [StubPropertySource {name='servletConfigInitParams'}, StubPropertySource {name='servletContextInitParams'}, MapPropertySource {name='systemProperties'}, SystemEnvironmentPropertySource {name='systemEnvironment'}]
2018-10-18 22:26:49.967 INFO 7652 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : Starting UdeManagerServiceApplication on TRV-LT-ANSAR with PID 7652 (C:\Work\Spring\UdeManagerService\target\classes started by Ansar.Samad in C:\Work\Spring\UdeManagerService)
2018-10-18 22:26:49.973 DEBUG 7652 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : Running with Spring Boot v2.0.4.RELEASE, Spring v5.0.8.RELEASE
2018-10-18 22:26:49.976 INFO 7652 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : No active profile set, falling back to default profiles: default
2018-10-18 22:26:50.614 INFO 7652 --- [ restartedMain] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@23daa694: startup date [Thu Oct 18 22:26:50 IST 2018]; root of context hierarchy
2018-10-18 22:26:53.258 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Removing PropertySource 'defaultProperties'
2018-10-18 22:26:53.428 INFO 7652 --- [ restartedMain] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$82d409fb] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-10-18 22:26:53.936 INFO 7652 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2018-10-18 22:26:53.959 INFO 7652 --- [ restartedMain] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2018-10-18 22:26:53.960 INFO 7652 --- [ restartedMain] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.32
2018-10-18 22:26:53.966 INFO 7652 --- [ost-startStop-1] o.a.catalina.core.AprLifecycleListener : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [C:\Program Files\Java\jdk1.8.0_162\bin;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:/Program Files (x86)/Java/jre1.8.0_181/bin/client;C:/Program Files (x86)/Java/jre1.8.0_181/bin;C:/Program Files (x86)/Java/jre1.8.0_181/lib/i386;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin;C:\phython3.6\Scripts\;C:\phython3.6\;C:\ProgramData\Oracle\Java\javapath;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\Program Files\Java\jre1.8.0_131\bin;C:\cuia\bat;C:\apache-maven-3.5.0-bin\apache-maven-3.5.0\bin;JAVA_HOME;SPRING_HOME;C:\Program Files (x86)\Symantec\VIP Access Client\;C:\phython3.6;C:\Program Files\Git\cmd;C:\Program Files\dotnet\;C:\Program Files\Microsoft SQL Server\130\Tools\Binn\;TESTNG_HOME;C:\TestNg\testng-6.8.jar;C:\python-3.6.5\Scripts;C:\python-3.6.5;C:\Program Files (x86)\Yarn\bin\;C:\Program Files\nodejs\;;C:\Program Files\Microsoft VS Code\bin;C:\Users\Ansar.Samad\AppData\Local\Yarn\bin;C:\Users\Ansar.Samad\AppData\Local\Programs\Microsoft VS Code\bin;C:\Users\Ansar.Samad\AppData\Roaming\npm;C:\spring-tool-suite-3.9.5.RELEASE-e4.8.0-win32\sts-bundle\sts-3.9.5.RELEASE;;.]
2018-10-18 22:26:54.081 INFO 7652 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2018-10-18 22:26:54.082 DEBUG 7652 --- [ost-startStop-1] o.s.web.context.ContextLoader : Published root WebApplicationContext as ServletContext attribute with name [org.springframework.web.context.WebApplicationContext.ROOT]
2018-10-18 22:26:54.082 INFO 7652 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 3468 ms
2018-10-18 22:26:54.192 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
2018-10-18 22:26:54.193 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2018-10-18 22:26:54.193 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*]
2018-10-18 22:26:54.193 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*]
2018-10-18 22:26:54.194 INFO 7652 --- [ost-startStop-1] .s.DelegatingFilterProxyRegistrationBean : Mapping filter: 'springSecurityFilterChain' to: [/*]
2018-10-18 22:26:54.194 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Servlet dispatcherServlet mapped to [/]
2018-10-18 22:26:54.274 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Replacing PropertySource 'servletContextInitParams' with 'servletContextInitParams'
2018-10-18 22:26:54.308 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : Driver class com.mysql.jdbc.Driver found in Thread context class loader org.springframework.boot.devtools.restart.classloader.RestartClassLoader@4e843ef9
2018-10-18 22:26:54.407 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : HikariPool-1 - configuration:
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : allowPoolSuspension.............false
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : autoCommit......................true
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : catalog.........................none
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : connectionInitSql...............none
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : connectionTestQuery.............none
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : connectionTimeout...............30000
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSource......................none
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSourceClassName.............none
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSourceJNDI..................none
2018-10-18 22:26:54.411 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSourceProperties............{password=<masked>}
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationAuthReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationHiddenReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationHttpMethodReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationImplicitParameterReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationImplicitParametersReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationNicknameIntoUniqueIdReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationNotesReader': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'systemProperties': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'systemEnvironment': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'org.springframework.context.annotation.ConfigurationClassPostProcessor.importRegistry': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'messageSource': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'applicationEventMulticaster': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'servletContext': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'contextParameters': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'contextAttributes': no URL paths identified
2018-10-18 22:35:45.256 INFO 6720 --- [ restartedMain] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-10-18 22:35:45.256 INFO 6720 --- [ restartedMain] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-10-18 22:35:45.271 DEBUG 6720 --- [ restartedMain] .m.m.a.ExceptionHandlerExceptionResolver : Looking for exception mappings: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@275b060b: startup date [Thu Oct 18 22:35:35 IST 2018]; root of context hierarchy
2018-10-18 22:35:45.599 INFO 6720 --- [ restartedMain] .s.s.UserDetailsServiceAutoConfiguration :
Using generated security password: e67dca07-3bc3-49e2-bc33-84c9643ef09a
2018-10-18 22:35:45.740 INFO 6720 --- [ restartedMain] o.s.s.web.DefaultSecurityFilterChain : Creating filter chain: org.springframework.security.web.util.matcher.AnyRequestMatcher@1, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6abf32c0, org.springframework.security.web.context.SecurityContextPersistenceFilter@b2b202e, org.springframework.security.web.header.HeaderWriterFilter@23ff67f5, org.springframework.security.web.csrf.CsrfFilter@a493c40, org.springframework.security.web.authentication.logout.LogoutFilter@776a660, org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter@334ce4e9, org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter@1f4c6b47, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4a4c14a3, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@7793ba1a, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@4edda6d8, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3e492163, org.springframework.security.web.session.SessionManagementFilter@68661010, org.springframework.security.web.access.ExceptionTranslationFilter@6920b121, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@35279a68]
2018-10-18 22:35:45.836 INFO 6720 --- [ restartedMain] o.s.b.d.a.OptionalLiveReloadServer : LiveReload server is running on port 35729
2018-10-18 22:35:45.867 INFO 6720 --- [ restartedMain] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2018-10-18 22:35:45.867 INFO 6720 --- [ restartedMain] o.s.j.e.a.AnnotationMBeanExporter : Bean with name 'dataSource' has been autodetected for JMX exposure
2018-10-18 22:35:45.883 INFO 6720 --- [ restartedMain] o.s.j.e.a.AnnotationMBeanExporter : Located MBean 'dataSource': registering with JMX server as MBean [com.zaxxer.hikari:name=dataSource,type=HikariDataSource]
2018-10-18 22:35:45.883 INFO 6720 --- [ restartedMain] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2018-10-18 22:35:45.883 INFO 6720 --- [ restartedMain] d.s.w.p.DocumentationPluginsBootstrapper : Context refreshed
2018-10-18 22:35:45.899 INFO 6720 --- [ restartedMain] d.s.w.p.DocumentationPluginsBootstrapper : Found 1 custom documentation plugin(s)
2018-10-18 22:35:45.914 INFO 6720 --- [ restartedMain] s.d.s.w.s.ApiListingReferenceScanner : Scanning for api listing references
2018-10-18 22:35:46.086 INFO 6720 --- [ restartedMain] .d.s.w.r.o.CachingOperationNameGenerator : Generating unique operation named: testUsingPOST_1
2018-10-18 22:35:46.101 INFO 6720 --- [ restartedMain] .d.s.w.r.o.CachingOperationNameGenerator : Generating unique operation named: getDataUsingGET_1
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Looking for resource handler mappings
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Found resource handler mapping: URL pattern="/**/favicon.ico", locations=[class path resource [META-INF/resources/], class path resource [resources/], class path resource [static/], class path resource [public/], ServletContext resource [/], class path resource []], resolvers=[org.springframework.web.servlet.resource.PathResourceResolver@3914bc92]
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Found resource handler mapping: URL pattern="/webjars/**", locations=[class path resource [META-INF/resources/webjars/]], resolvers=[org.springframework.web.servlet.resource.PathResourceResolver@3dc6f1f0]
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Found resource handler mapping: URL pattern="/**", locations=[class path resource [META-INF/resources/], class path resource [resources/], class path resource [static/], class path resource [public/], ServletContext resource [/]], resolvers=[org.springframework.web.servlet.resource.PathResourceResolver@315362d4]
2018-10-18 22:35:47.196 DEBUG 6720 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.mysql.jdbc.JDBC4Connection@edd709c
2018-10-18 22:35:49.180 INFO 6720 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2018-10-18 22:35:49.180 DEBUG 6720 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'server.ports' with highest search precedence
2018-10-18 22:35:49.180 INFO 6720 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : Started UdeManagerServiceApplication in 14.132 seconds (JVM running for 14.884)
2018-10-18 22:35:50.929 DEBUG 6720 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.mysql.jdbc.JDBC4Connection@7a5e8413
```
|
2018/10/18
|
[
"https://Stackoverflow.com/questions/52879261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10525271/"
] |
In your `application.properties` file, add the following:
`spring.security.user.name = root` -- this is your username
`spring.security.user.password = root` -- this is your password
`spring.security.user.roles = user` --this is your role
|
Issue fixed by adding the config package, where the security classes are available, to the component scanning of the main class.
|
52,879,261
|
I have a Spring Boot application and I am trying to implement Spring Security to override the default username and password generated by Spring, but it's not working. Spring is still using the default user credentials.
```
@EnableWebSecurity
@Configuration
public class WebSecurityConfigurtion extends WebSecurityConfigurerAdapter {
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
auth.inMemoryAuthentication().withUser("demo").password("demo").roles("USER").and().withUser("admin")
.password("admin").roles("USER", "ADMIN");
}
@Override
protected void configure(HttpSecurity http) throws Exception {
http.authorizeRequests().anyRequest().authenticated().and().httpBasic();
}
}
```
And this is my main Spring Boot class:
```
@SpringBootApplication(exclude = { DataSourceAutoConfiguration.class,
DataSourceTransactionManagerAutoConfiguration.class ,SecurityAutoConfiguration.class})
@EnableAsync
@EntityScan({ "com.demo.entity" })
@EnableJpaRepositories(basePackages = {"com.demo.repositories"})
@ComponentScan({ "com.demo.controller", "com.demo.service" })
@Import({SwaggerConfig.class})
public class UdeManagerServiceApplication extends SpringBootServletInitializer {
@Override
protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
return application.sources(UdeManagerServiceApplication.class);
}
public static void main(String[] args) {
SpringApplication.run(UdeManagerServiceApplication.class, args);
}
}
```
These are the logs, which show the generated password:
```
22:26:48.537 [main] DEBUG org.springframework.boot.devtools.settings.DevToolsSettings - Included patterns for restart : []
22:26:48.540 [main] DEBUG org.springframework.boot.devtools.settings.DevToolsSettings - Excluded patterns for restart : [/spring-boot-actuator/target/classes/, /spring-boot-devtools/target/classes/, /spring-boot/target/classes/, /spring-boot-starter-[\w-]+/, /spring-boot-autoconfigure/target/classes/, /spring-boot-starter/target/classes/]
22:26:48.540 [main] DEBUG org.springframework.boot.devtools.restart.ChangeableUrls - Matching URLs for reloading : [file:/C:/Work/Spring/UdeManagerService/target/classes/]
2018-10-18 22:26:48.797 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'configurationProperties' with highest search precedence
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.0.4.RELEASE)
2018-10-18 22:26:49.218 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'servletConfigInitParams' with lowest search precedence
2018-10-18 22:26:49.219 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'servletContextInitParams' with lowest search precedence
2018-10-18 22:26:49.220 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'systemProperties' with lowest search precedence
2018-10-18 22:26:49.221 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'systemEnvironment' with lowest search precedence
2018-10-18 22:26:49.222 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Initialized StandardServletEnvironment with PropertySources [StubPropertySource {name='servletConfigInitParams'}, StubPropertySource {name='servletContextInitParams'}, MapPropertySource {name='systemProperties'}, SystemEnvironmentPropertySource {name='systemEnvironment'}]
2018-10-18 22:26:49.967 INFO 7652 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : Starting UdeManagerServiceApplication on TRV-LT-ANSAR with PID 7652 (C:\Work\Spring\UdeManagerService\target\classes started by Ansar.Samad in C:\Work\Spring\UdeManagerService)
2018-10-18 22:26:49.973 DEBUG 7652 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : Running with Spring Boot v2.0.4.RELEASE, Spring v5.0.8.RELEASE
2018-10-18 22:26:49.976 INFO 7652 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : No active profile set, falling back to default profiles: default
2018-10-18 22:26:50.614 INFO 7652 --- [ restartedMain] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@23daa694: startup date [Thu Oct 18 22:26:50 IST 2018]; root of context hierarchy
2018-10-18 22:26:53.258 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Removing PropertySource 'defaultProperties'
2018-10-18 22:26:53.428 INFO 7652 --- [ restartedMain] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$82d409fb] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-10-18 22:26:53.936 INFO 7652 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2018-10-18 22:26:53.959 INFO 7652 --- [ restartedMain] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2018-10-18 22:26:53.960 INFO 7652 --- [ restartedMain] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.32
2018-10-18 22:26:53.966 INFO 7652 --- [ost-startStop-1] o.a.catalina.core.AprLifecycleListener : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [C:\Program Files\Java\jdk1.8.0_162\bin;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:/Program Files (x86)/Java/jre1.8.0_181/bin/client;C:/Program Files (x86)/Java/jre1.8.0_181/bin;C:/Program Files (x86)/Java/jre1.8.0_181/lib/i386;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin;C:\phython3.6\Scripts\;C:\phython3.6\;C:\ProgramData\Oracle\Java\javapath;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\Program Files\Java\jre1.8.0_131\bin;C:\cuia\bat;C:\apache-maven-3.5.0-bin\apache-maven-3.5.0\bin;JAVA_HOME;SPRING_HOME;C:\Program Files (x86)\Symantec\VIP Access Client\;C:\phython3.6;C:\Program Files\Git\cmd;C:\Program Files\dotnet\;C:\Program Files\Microsoft SQL Server\130\Tools\Binn\;TESTNG_HOME;C:\TestNg\testng-6.8.jar;C:\python-3.6.5\Scripts;C:\python-3.6.5;C:\Program Files (x86)\Yarn\bin\;C:\Program Files\nodejs\;;C:\Program Files\Microsoft VS Code\bin;C:\Users\Ansar.Samad\AppData\Local\Yarn\bin;C:\Users\Ansar.Samad\AppData\Local\Programs\Microsoft VS Code\bin;C:\Users\Ansar.Samad\AppData\Roaming\npm;C:\spring-tool-suite-3.9.5.RELEASE-e4.8.0-win32\sts-bundle\sts-3.9.5.RELEASE;;.]
2018-10-18 22:26:54.081 INFO 7652 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2018-10-18 22:26:54.082 DEBUG 7652 --- [ost-startStop-1] o.s.web.context.ContextLoader : Published root WebApplicationContext as ServletContext attribute with name [org.springframework.web.context.WebApplicationContext.ROOT]
2018-10-18 22:26:54.082 INFO 7652 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 3468 ms
2018-10-18 22:26:54.192 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
2018-10-18 22:26:54.193 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2018-10-18 22:26:54.193 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*]
2018-10-18 22:26:54.193 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*]
2018-10-18 22:26:54.194 INFO 7652 --- [ost-startStop-1] .s.DelegatingFilterProxyRegistrationBean : Mapping filter: 'springSecurityFilterChain' to: [/*]
2018-10-18 22:26:54.194 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Servlet dispatcherServlet mapped to [/]
2018-10-18 22:26:54.274 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Replacing PropertySource 'servletContextInitParams' with 'servletContextInitParams'
2018-10-18 22:26:54.308 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : Driver class com.mysql.jdbc.Driver found in Thread context class loader org.springframework.boot.devtools.restart.classloader.RestartClassLoader@4e843ef9
2018-10-18 22:26:54.407 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : HikariPool-1 - configuration:
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : allowPoolSuspension.............false
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : autoCommit......................true
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : catalog.........................none
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : connectionInitSql...............none
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : connectionTestQuery.............none
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : connectionTimeout...............30000
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSource......................none
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSourceClassName.............none
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSourceJNDI..................none
2018-10-18 22:26:54.411 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSourceProperties............{password=<masked>}
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationAuthReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationHiddenReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationHttpMethodReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationImplicitParameterReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationImplicitParametersReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationNicknameIntoUniqueIdReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationNotesReader': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'systemProperties': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'systemEnvironment': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'org.springframework.context.annotation.ConfigurationClassPostProcessor.importRegistry': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'messageSource': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'applicationEventMulticaster': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'servletContext': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'contextParameters': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'contextAttributes': no URL paths identified
2018-10-18 22:35:45.256 INFO 6720 --- [ restartedMain] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-10-18 22:35:45.256 INFO 6720 --- [ restartedMain] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-10-18 22:35:45.271 DEBUG 6720 --- [ restartedMain] .m.m.a.ExceptionHandlerExceptionResolver : Looking for exception mappings: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@275b060b: startup date [Thu Oct 18 22:35:35 IST 2018]; root of context hierarchy
2018-10-18 22:35:45.599 INFO 6720 --- [ restartedMain] .s.s.UserDetailsServiceAutoConfiguration :
Using generated security password: e67dca07-3bc3-49e2-bc33-84c9643ef09a
2018-10-18 22:35:45.740 INFO 6720 --- [ restartedMain] o.s.s.web.DefaultSecurityFilterChain : Creating filter chain: org.springframework.security.web.util.matcher.AnyRequestMatcher@1, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6abf32c0, org.springframework.security.web.context.SecurityContextPersistenceFilter@b2b202e, org.springframework.security.web.header.HeaderWriterFilter@23ff67f5, org.springframework.security.web.csrf.CsrfFilter@a493c40, org.springframework.security.web.authentication.logout.LogoutFilter@776a660, org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter@334ce4e9, org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter@1f4c6b47, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4a4c14a3, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@7793ba1a, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@4edda6d8, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3e492163, org.springframework.security.web.session.SessionManagementFilter@68661010, org.springframework.security.web.access.ExceptionTranslationFilter@6920b121, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@35279a68]
2018-10-18 22:35:45.836 INFO 6720 --- [ restartedMain] o.s.b.d.a.OptionalLiveReloadServer : LiveReload server is running on port 35729
2018-10-18 22:35:45.867 INFO 6720 --- [ restartedMain] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2018-10-18 22:35:45.867 INFO 6720 --- [ restartedMain] o.s.j.e.a.AnnotationMBeanExporter : Bean with name 'dataSource' has been autodetected for JMX exposure
2018-10-18 22:35:45.883 INFO 6720 --- [ restartedMain] o.s.j.e.a.AnnotationMBeanExporter : Located MBean 'dataSource': registering with JMX server as MBean [com.zaxxer.hikari:name=dataSource,type=HikariDataSource]
2018-10-18 22:35:45.883 INFO 6720 --- [ restartedMain] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2018-10-18 22:35:45.883 INFO 6720 --- [ restartedMain] d.s.w.p.DocumentationPluginsBootstrapper : Context refreshed
2018-10-18 22:35:45.899 INFO 6720 --- [ restartedMain] d.s.w.p.DocumentationPluginsBootstrapper : Found 1 custom documentation plugin(s)
2018-10-18 22:35:45.914 INFO 6720 --- [ restartedMain] s.d.s.w.s.ApiListingReferenceScanner : Scanning for api listing references
2018-10-18 22:35:46.086 INFO 6720 --- [ restartedMain] .d.s.w.r.o.CachingOperationNameGenerator : Generating unique operation named: testUsingPOST_1
2018-10-18 22:35:46.101 INFO 6720 --- [ restartedMain] .d.s.w.r.o.CachingOperationNameGenerator : Generating unique operation named: getDataUsingGET_1
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Looking for resource handler mappings
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Found resource handler mapping: URL pattern="/**/favicon.ico", locations=[class path resource [META-INF/resources/], class path resource [resources/], class path resource [static/], class path resource [public/], ServletContext resource [/], class path resource []], resolvers=[org.springframework.web.servlet.resource.PathResourceResolver@3914bc92]
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Found resource handler mapping: URL pattern="/webjars/**", locations=[class path resource [META-INF/resources/webjars/]], resolvers=[org.springframework.web.servlet.resource.PathResourceResolver@3dc6f1f0]
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Found resource handler mapping: URL pattern="/**", locations=[class path resource [META-INF/resources/], class path resource [resources/], class path resource [static/], class path resource [public/], ServletContext resource [/]], resolvers=[org.springframework.web.servlet.resource.PathResourceResolver@315362d4]
2018-10-18 22:35:47.196 DEBUG 6720 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.mysql.jdbc.JDBC4Connection@edd709c
2018-10-18 22:35:49.180 INFO 6720 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2018-10-18 22:35:49.180 DEBUG 6720 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'server.ports' with highest search precedence
2018-10-18 22:35:49.180 INFO 6720 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : Started UdeManagerServiceApplication in 14.132 seconds (JVM running for 14.884)
2018-10-18 22:35:50.929 DEBUG 6720 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.mysql.jdbc.JDBC4Connection@7a5e8413
```
|
2018/10/18
|
[
"https://Stackoverflow.com/questions/52879261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10525271/"
] |
You need to register a UserDetailsService bean.
In this case, just expose the InMemoryUserDetailsManager as a bean in your WebSecurityConfiguration.class, like this:
```java
// expose the UserDetailsService as a bean
@Bean
@Override
public UserDetailsService userDetailsServiceBean() throws Exception {
return super.userDetailsServiceBean();
}
```
|
Issue fixed by adding the config package, where the security classes are available, to the component scanning of the main class.
|
26,374,866
|
I'm using the [Unirest library](http://unirest.io/python.html) for making async web requests with Python. I've read the documentation, but I wasn't able to find if I can use proxy with it. Maybe I'm just blind and there's a way to use it with Unirest?
Or is there some other way to specify proxy for Python? Proxies should be changed from script itself after making some requests, so this way should allow me to do it.
Thanks in advance.
|
2014/10/15
|
[
"https://Stackoverflow.com/questions/26374866",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2298183/"
] |
I know nothing about Unirest, but in all the scripts I wrote that required proxy support I used the SocksiPy (<http://socksipy.sourceforge.net>) module. It supports HTTP, SOCKS4 and SOCKS5 and it's really easy to use. :)
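For instance, a minimal sketch of the usual SocksiPy pattern (the proxy host, port and target URL below are placeholders, and monkey-patching `socket.socket` is just one common way to route a script's connections through the proxy):
```
import socket
import socks  # SocksiPy module

# Route all sockets created from here on through a SOCKS5 proxy (placeholder host/port)
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050)
socket.socket = socks.socksocket

# Libraries that build on the standard socket module now connect through the proxy.
import urllib2
print(urllib2.urlopen("http://example.com").read()[:80])

# To switch proxies later in the script, call setdefaultproxy() again;
# sockets created afterwards will pick up the new default.
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9051)
```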
|
Would something like this work for you?
[1] <https://github.com/obriencj/python-promises>
|
23,543,202
|
While reviewing the system library `socket.py` implementation I came across this code
```
try:
import errno
except ImportError:
errno = None
EBADF = getattr(errno, 'EBADF', 9)
EINTR = getattr(errno, 'EINTR', 4)
```
Is this code just a relic of a bygone age, or are there platforms/implementations out there for which there is no `errno` module?
To be more explicit, is it safe to `import errno` without an exception handler?
I'm interested in answers for both python 2.x and 3.x.
**Edit**
To clarify the question: I have to test error codes encoded in `IOError` exceptions raised inside the `socket` module. The above code, which is in the [cpython code base](http://hg.python.org/cpython/file/219502cf57eb/Lib/socket.py), made me suspicious about known situations in which `socket` is available, but `import errno` would fail. Maybe a minor question, but I would like to avoid unnecessary code.
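For context, the kind of check being referred to is something like this (a hypothetical sketch; the host, port and error code are only illustrative):
```
import errno
import socket

try:
    socket.create_connection(("localhost", 1), timeout=1)
except IOError as exc:
    # socket errors carry an errno that can be compared against the errno constants
    if exc.errno == errno.ECONNREFUSED:
        print("connection refused")
    else:
        raise
```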
|
2014/05/08
|
[
"https://Stackoverflow.com/questions/23543202",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1499402/"
] |
The reason this might be a bit tricky is that the parent nodes have some defaults already set.
Set width and height to 100% in initialization of `Reveal`:
```
Reveal.initialize({
width: "100%",
height:"100%"
});
```
Ensure that the slide (i.e. `section`) uses the whole space:
```
.full {
height:100%;
width:100%;
}
```
and finally set the position of the caption:
```
.caption {
bottom:0px;
right:0px;
position:absolute;
}
```
[full code](http://plnkr.co/edit/ojxLHQr8WI7np8XkQkpA?p=preview)
|
Can you provide an example of the HTML that has the class `.reveal`? Add `position:absolute` to your selector rule. If you want the caption to sit flush at the bottom corner of each slide, it's best to set the position to `bottom:0px` instead of `top`.
For example:
```
<style type="text/css">
.reveal .reveal_section {
position: relative;
}
.reveal .caption {
position: absolute;
bottom: 0px;
left: 50px;
}
.reveal .caption a {
background: #FFF;
color: #666;
padding: 10px;
border: 1px dashed #999;
}
</style>
<div class="reveal">
<section class="reveal_section" data-background="latimes.png">
<small class="caption">[Some Text](http://a.link.com/whatever)</small>
<aside class="notes">some notes</aside>
</section>
</div>
```
|
34,624,964
|
I want to extract certain information from the output of a program, but my method does not work. I wrote a rather simple script.
```
#!/usr/bin/env python
print "first hello world."
print "second"
```
After making the script executable, I type `./test | grep "first|second"`. I expect it to show the two sentences, but it does not show anything. Why?
|
2016/01/06
|
[
"https://Stackoverflow.com/questions/34624964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5719744/"
] |
Having researched this myself just now, it looks as though the Meteor version 1.4 release will be updated to version 3.2 of MongoDB, in which "*32-bit binaries are deprecated*"
* [Github ticket for the updating of MongoDB](https://github.com/meteor/meteor/issues/6957)
* [MongoDB declaration that 3.2 has deprecated 32-bit binaries](https://docs.mongodb.com/manual/installation/#deprecation-of-32-bit-versions)
* [Meteor 1.4 announcement](https://forums.meteor.com/t/coming-soon-meteor-1-4/23260)
If upgrading your MongoDB instance is mandatory **now**, then unfortunately it looks like the only way is to manually upgrade the binaries yourself. If you do this, I would suggest you make a backup of them just in case it messes up.
To upgrade to version 3.2 you first need to [upgrade to version 3.0](https://docs.mongodb.com/manual/release-notes/3.0-upgrade/), then you can [upgrade to version 3.2](https://docs.mongodb.com/manual/release-notes/3.2-upgrade/)
|
I guess you didn't install the right version of Mongo if you have a 32-bit version.
Check out their installation guide:
<https://docs.mongodb.org/manual/tutorial/install-mongodb-on-windows/>
First download the right 64-bit version for Windows:
<https://www.mongodb.org/downloads#production>
and follow the instructions:
>
> Install MongoDB
>
>
> Interactive Installation 1 Install MongoDB for Windows. In Windows
> Explorer, locate the downloaded MongoDB .msi file, which typically is
> located in the default Downloads folder. Double-click the .msi file. A
> set of screens will appear to guide you through the installation
> process.
>
>
> You may specify an installation directory if you choose the βCustomβ
> installation option.
>
>
> NOTE These instructions assume that you have installed MongoDB to
> C:\mongodb. MongoDB is self-contained and does not have any other
> system dependencies. You can run MongoDB from any folder you choose.
> You may install MongoDB in any folder (e.g. D:\test\mongodb).
>
>
>
|
8,070,186
|
Is there a way with the boto python API to specify tags when creating an instance? I'm trying to avoid having to create an instance, fetch it and then add tags. It would be much easier to have the instance either pre-configured to have certain tags or to specify tags when I execute the following command:
```
ec2server.create_instance(
ec2_conn, ami_name, security_group, instance_type_name, key_pair_name, user_data
)
```
|
2011/11/09
|
[
"https://Stackoverflow.com/questions/8070186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/301816/"
] |
This answer was accurate at the time it was written but is now out of date. The AWS APIs and libraries (such as boto3) can now take a "TagSpecification" parameter that allows you to specify tags when running the "create\_instances" call.
---
Tags cannot be created until the instance has been created. Even though the function is called create\_instance, what it's really doing is reserving an instance. Then that instance may or may not be launched. (Usually it is, but sometimes...)
So, you cannot add a tag until it's been launched. And there's no way to tell if it's been launched without polling for it. Like so:
```
reservation = conn.run_instances( ... )
# NOTE: this isn't ideal, and assumes you're reserving one instance. Use a for loop, ideally.
instance = reservation.instances[0]
# Check up on its status every so often
status = instance.update()
while status == 'pending':
time.sleep(10)
status = instance.update()
if status == 'running':
instance.add_tag("Name","{{INSERT NAME}}")
else:
print('Instance status: ' + status)
return None
# Now that the status is running, it's not yet launched. The only way to tell if it's fully up is to try to SSH in.
if status == "running":
retry = True
while retry:
try:
# SSH into the box here. I personally use fabric
retry = False
except:
time.sleep(10)
# If we've reached this point, the instance is up and running, and we can SSH and do as we will with it. Or, there never was an instance to begin with.
```
|
This method has worked for me:
```
rsvn = image.run(
... standard options ...
)
sleep(1)
for instance in rsvn.instances:
instance.add_tag('<tag name>', <tag value>)
```
|
8,070,186
|
Is there a way with the boto python API to specify tags when creating an instance? I'm trying to avoid having to create an instance, fetch it and then add tags. It would be much easier to have the instance either pre-configured to have certain tags or to specify tags when I execute the following command:
```
ec2server.create_instance(
ec2_conn, ami_name, security_group, instance_type_name, key_pair_name, user_data
)
```
|
2011/11/09
|
[
"https://Stackoverflow.com/questions/8070186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/301816/"
] |
Using boto 2.9.6, I'm able to add tags to an instance immediately after getting my response back from run\_instances. Something like this works without sleep:
```
reservation = my_connection.run_instances(...)
for instance in reservation.instances:
instance.add_tag('Name', <whatever>)
```
I verified that the instance was still in pending state after successfully adding the tag. It would be easy to wrap this logic in a function similar to that requested by the original post.
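A sketch of such a wrapper, in the boto 2 style used above (the AMI id, key name, instance type, security group and tag values in the usage comment are placeholders):
```
def create_tagged_instance(conn, ami_id, key_name, instance_type,
                           security_groups, tags):
    """Launch an instance and tag it immediately (illustrative sketch)."""
    reservation = conn.run_instances(
        ami_id,
        key_name=key_name,
        instance_type=instance_type,
        security_groups=security_groups,
    )
    instance = reservation.instances[0]
    # add_tag() can be called while the instance is still in the 'pending' state
    for key, value in tags.items():
        instance.add_tag(key, value)
    return instance

# Example (placeholder values):
# instance = create_tagged_instance(my_connection, 'ami-12345678', 'my-key',
#                                   'm1.small', ['default'], {'Name': 'web-01'})
```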
|
This method has worked for me:
```
rsvn = image.run(
... standard options ...
)
sleep(1)
for instance in rsvn.instances:
instance.add_tag('<tag name>', <tag value>)
```
|
8,070,186
|
Is there a way with the boto python API to specify tags when creating an instance? I'm trying to avoid having to create an instance, fetch it and then add tags. It would be much easier to have the instance either pre-configured to have certain tags or to specify tags when I execute the following command:
```
ec2server.create_instance(
ec2_conn, ami_name, security_group, instance_type_name, key_pair_name, user_data
)
```
|
2011/11/09
|
[
"https://Stackoverflow.com/questions/8070186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/301816/"
] |
[You can tag instance or volume on creation](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_TagSpecification.html)
From [run\_instances docs](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html):
>
> You can tag instances and EBS volumes during launch, after launch, or both. For more information, see [CreateTags](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateTags.html) and [Tagging Your Amazon EC2 Resources](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html).
>
>
>
[Using Tags](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) AWS doc includes a table with resources supporting tagging and supporting tagging on creation (Instance and EBS Volume support both as of 01-MAY-2017)
Here is a code snippet to tag an instance at creation time in Python (other SDK references are listed on [this page](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_TagSpecification.html)):
```
from pkg_resources import parse_version
import boto3
assert parse_version(boto3.__version__) >= parse_version('1.4.4'), \
"Older version of boto3 installed {} which doesn't support instance tagging on creation. Update with command 'pip install -U boto3>=1.4.4'".format(boto3.__version__)
import botocore
assert parse_version(botocore.__version__) >= parse_version('1.5.63'), \
"Older version of botocore installed {} which doesn't support instance tagging on creation. Update with command 'pip install -U botocore>=1.5.63'".format(botocore.__version__)
ec2 = boto3.resource('ec2')
tag_purpose_test = {"Key": "Purpose", "Value": "Test"}
instance = ec2.create_instances(
ImageId=EC2_IMAGE_ID,
MinCount=1,
MaxCount=1,
InstanceType=EC2_INSTANCE_TYPE,
KeyName=EC2_KEY_NAME,
SecurityGroupIds=[EC2_DEFAULT_SEC_GROUP],
SubnetId=EC2_SUBNET_ID,
TagSpecifications=[{'ResourceType': 'instance',
'Tags': [tag_purpose_test]}])[0]
```
I used
```
Python 2.7.13
boto3 (1.4.4)
botocore (1.5.63)
```
|
This method has worked for me:
```
rsvn = image.run(
... standard options ...
)
sleep(1)
for instance in rsvn.instances:
instance.add_tag('<tag name>', <tag value>)
```
|
8,070,186
|
Is there a way with the boto python API to specify tags when creating an instance? I'm trying to avoid having to create an instance, fetch it and then add tags. It would be much easier to have the instance either pre-configured to have certain tags or to specify tags when I execute the following command:
```
ec2server.create_instance(
ec2_conn, ami_name, security_group, instance_type_name, key_pair_name, user_data
)
```
|
2011/11/09
|
[
"https://Stackoverflow.com/questions/8070186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/301816/"
] |
This answer was accurate at the time it was written but is now out of date. The AWS APIs and libraries (such as boto3) can now take a "TagSpecification" parameter that allows you to specify tags when running the "create\_instances" call.
---
Tags cannot be created until the instance has been created. Even though the function is called create\_instance, what it's really doing is reserving an instance. Then that instance may or may not be launched. (Usually it is, but sometimes...)
So, you cannot add a tag until it's been launched. And there's no way to tell if it's been launched without polling for it. Like so:
```
reservation = conn.run_instances( ... )
# NOTE: this isn't ideal, and assumes you're reserving one instance. Use a for loop, ideally.
instance = reservation.instances[0]
# Check up on its status every so often
status = instance.update()
while status == 'pending':
time.sleep(10)
status = instance.update()
if status == 'running':
instance.add_tag("Name","{{INSERT NAME}}")
else:
print('Instance status: ' + status)
return None
# Now that the status is running, it's not yet launched. The only way to tell if it's fully up is to try to SSH in.
if status == "running":
retry = True
while retry:
try:
# SSH into the box here. I personally use fabric
retry = False
except:
time.sleep(10)
# If we've reached this point, the instance is up and running, and we can SSH and do as we will with it. Or, there never was an instance to begin with.
```
|
Using boto 2.9.6, I'm able to add tags to an instance immediately after getting my response back from run\_instances. Something like this works without sleep:
```
reservation = my_connection.run_instances(...)
for instance in reservation.instances:
instance.add_tag('Name', <whatever>)
```
I verified that the instance was still in pending state after successfully adding the tag. It would be easy to wrap this logic in a function similar to that requested by the original post.
|
8,070,186
|
Is there a way with the boto python API to specify tags when creating an instance? I'm trying to avoid having to create an instance, fetch it and then add tags. It would be much easier to have the instance either pre-configured to have certain tags or to specify tags when I execute the following command:
```
ec2server.create_instance(
ec2_conn, ami_name, security_group, instance_type_name, key_pair_name, user_data
)
```
|
2011/11/09
|
[
"https://Stackoverflow.com/questions/8070186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/301816/"
] |
This answer was accurate at the time it was written but is now out of date. The AWS APIs and libraries (such as boto3) can now take a "TagSpecification" parameter that allows you to specify tags when running the "create\_instances" call.
---
Tags cannot be created until the instance has been created. Even though the function is called create\_instance, what it's really doing is reserving an instance. Then that instance may or may not be launched. (Usually it is, but sometimes...)
So, you cannot add a tag until it's been launched. And there's no way to tell if it's been launched without polling for it. Like so:
```
reservation = conn.run_instances( ... )
# NOTE: this isn't ideal, and assumes you're reserving one instance. Use a for loop, ideally.
instance = reservation.instances[0]
# Check up on its status every so often
status = instance.update()
while status == 'pending':
time.sleep(10)
status = instance.update()
if status == 'running':
instance.add_tag("Name","{{INSERT NAME}}")
else:
print('Instance status: ' + status)
return None
# Now that the status is running, it's not yet launched. The only way to tell if it's fully up is to try to SSH in.
if status == "running":
retry = True
while retry:
try:
# SSH into the box here. I personally use fabric
retry = False
except:
time.sleep(10)
# If we've reached this point, the instance is up and running, and we can SSH and do as we will with it. Or, there never was an instance to begin with.
```
|
[You can tag instance or volume on creation](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_TagSpecification.html)
From [run\_instances docs](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html):
>
> You can tag instances and EBS volumes during launch, after launch, or both. For more information, see [CreateTags](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateTags.html) and [Tagging Your Amazon EC2 Resources](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html).
>
>
>
[Using Tags](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) AWS doc includes a table with resources supporting tagging and supporting tagging on creation (Instance and EBS Volume support both as of 01-MAY-2017)
Here is a code snippet to tag an instance at creation time in Python (other SDK references are listed on [this page](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_TagSpecification.html)):
```
from pkg_resources import parse_version
import boto3
assert parse_version(boto3.__version__) >= parse_version('1.4.4'), \
"Older version of boto3 installed {} which doesn't support instance tagging on creation. Update with command 'pip install -U boto3>=1.4.4'".format(boto3.__version__)
import botocore
assert parse_version(botocore.__version__) >= parse_version('1.5.63'), \
"Older version of botocore installed {} which doesn't support instance tagging on creation. Update with command 'pip install -U botocore>=1.5.63'".format(botocore.__version__)
ec2 = boto3.resource('ec2')
tag_purpose_test = {"Key": "Purpose", "Value": "Test"}
instance = ec2.create_instances(
ImageId=EC2_IMAGE_ID,
MinCount=1,
MaxCount=1,
InstanceType=EC2_INSTANCE_TYPE,
KeyName=EC2_KEY_NAME,
SecurityGroupIds=[EC2_DEFAULT_SEC_GROUP],
SubnetId=EC2_SUBNET_ID,
TagSpecifications=[{'ResourceType': 'instance',
'Tags': [tag_purpose_test]}])[0]
```
I used
```
Python 2.7.13
boto3 (1.4.4)
botocore (1.5.63)
```
|
8,070,186
|
Is there a way with the boto python API to specify tags when creating an instance? I'm trying to avoid having to create an instance, fetch it and then add tags. It would be much easier to have the instance either pre-configured to have certain tags or to specify tags when I execute the following command:
```
ec2server.create_instance(
ec2_conn, ami_name, security_group, instance_type_name, key_pair_name, user_data
)
```
|
2011/11/09
|
[
"https://Stackoverflow.com/questions/8070186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/301816/"
] |
[You can tag instance or volume on creation](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_TagSpecification.html)
From [run\_instances docs](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html):
>
> You can tag instances and EBS volumes during launch, after launch, or both. For more information, see [CreateTags](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateTags.html) and [Tagging Your Amazon EC2 Resources](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html).
>
>
>
[Using Tags](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) AWS doc includes a table with resources supporting tagging and supporting tagging on creation (Instance and EBS Volume support both as of 01-MAY-2017)
Here is a code snippet to tag instance at creation time in Python (other SDK references are listed on [this page](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_TagSpecification.html)):
```
from pkg_resources import parse_version
import boto3
assert parse_version(boto3.__version__) >= parse_version('1.4.4'), \
"Older version of boto3 installed {} which doesn't support instance tagging on creation. Update with command 'pip install -U boto3>=1.4.4'".format(boto3.__version__)
import botocore
assert parse_version(botocore.__version__) >= parse_version('1.5.63'), \
"Older version of botocore installed {} which doesn't support instance tagging on creation. Update with command 'pip install -U botocore>=1.5.63'".format(botocore.__version__)
ec2 = boto3.resource('ec2')
tag_purpose_test = {"Key": "Purpose", "Value": "Test"}
instance = ec2.create_instances(
ImageId=EC2_IMAGE_ID,
MinCount=1,
MaxCount=1,
InstanceType=EC2_INSTANCE_TYPE,
KeyName=EC2_KEY_NAME,
SecurityGroupIds=[EC2_DEFAULT_SEC_GROUP],
SubnetId=EC2_SUBNET_ID,
TagSpecifications=[{'ResourceType': 'instance',
'Tags': [tag_purpose_test]}])[0]
```
I used
```
Python 2.7.13
boto3 (1.4.4)
botocore (1.5.63)
```
|
Using boto 2.9.6, I'm able to add tags to an instance immediately after getting my response back from run\_instances. Something like this works without sleep:
```
reservation = my_connection.run_instances(...)
for instance in reservation.instances:
    instance.add_tag('Name', <whatever>)
```
I verified that the instance was still in pending state after successfully adding the tag. It would be easy to wrap this logic in a function similar to that requested by the original post.
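For illustration, here is a minimal sketch of such a wrapper using boto 2.x. The parameter names simply mirror the original post's `create_instance` call, and treating `ami_name` as the image id is an assumption; adjust both to your own helper:
```
def create_tagged_instance(ec2_conn, ami_name, security_group,
                           instance_type_name, key_pair_name, user_data,
                           tags=None):
    # Reserve the instance (boto 2.x), then tag it right away; add_tag
    # works even while the instance is still in the 'pending' state.
    reservation = ec2_conn.run_instances(
        ami_name,
        security_groups=[security_group],
        instance_type=instance_type_name,
        key_name=key_pair_name,
        user_data=user_data)
    instance = reservation.instances[0]
    for key, value in (tags or {}).items():
        instance.add_tag(key, value)
    return instance
```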
|
41,231,632
|
I am taking a course that uses ipython notebook. When I try to download the notebook (through File -> Download as -> ipython notebook), I get a file that ends with ".ipynb.json". It doesn't open as an ipython notebook but as a .json file so something like this:
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._\n",
"\n",
"---"
]
},
...
}
```
I've tried deleting the ".json" in the file name and it doesn't work. How can I convert this file back to something that can be opened and run as an ipython notebook? Thank you very much!
|
2016/12/19
|
[
"https://Stackoverflow.com/questions/41231632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6841599/"
] |
Are you trying to download this from GitHub? Especially on Google Chrome, I've had issues downloading .ipynb files using right click > **Save link as...**. I'm not sure if other browsers have this issue (Microsoft Edge, Mozilla Firefox, Safari, etc.).
This causes problems because the file often does not download completely and ends up corrupted, so the resulting IPython notebook will not run properly. One trick when trying to download .ipynb files on GitHub is to click the file, click **Raw**, then copy everything (Ctrl + A), paste it into a blank file (using a text editor such as Notepad, Notepad++, Vim, etc.), and save it as "whatever\_file\_name\_you\_choose.ipynb". Then you should be able to run this file properly, assuming a non-corrupted file was uploaded to GitHub.
A lot of people with very large, complicated IPython notebooks on Github will inevitably run into this issue when simply trying to download with **Save link as...**. Hopefully this helps!
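If you prefer to script it, here is a minimal sketch that fetches the raw file directly instead of relying on **Save link as...**; the URL is a placeholder for the raw.githubusercontent.com link you get from the **Raw** button:
```
import urllib.request

# Placeholder URL -- substitute the link shown after clicking "Raw" on GitHub
raw_url = "https://raw.githubusercontent.com/<user>/<repo>/master/notebook.ipynb"
urllib.request.urlretrieve(raw_url, "notebook.ipynb")
```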
|
I opened it with nbviewer, selected everything, and saved it as a "txt" file, which I then opened in Notepad++. I then re-saved it as a file with the .ipynb extension, and it opened fine in my Jupyter notebook.
|
41,231,632
|
I am taking a course that uses ipython notebook. When I try to download the notebook (through File -> Download as -> ipython notebook), I get a file that ends with ".ipynb.json". It doesn't open as an ipython notebook but as a .json file so something like this:
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._\n",
"\n",
"---"
]
},
...
}
```
I've tried deleting the ".json" in the file name and it doesn't work. How can I convert this file back to something that can be opened and run as an ipython notebook? Thank you very much!
|
2016/12/19
|
[
"https://Stackoverflow.com/questions/41231632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6841599/"
] |
On the Mac you could go and
1. Right click on the `filename.ipynb.json`
2. Click on `Get Info` from the list.
3. From the `Get Info` window, find the `Name & Extension` section and remove the `.json` extension/suffix from the file name.
Hope that helps!
|
Are you trying to download this from GitHub? Especially on Google Chrome, I've had issues downloading .ipynb files using right click > **Save link as...**. I'm not sure if other browsers have this issue (Microsoft Edge, Mozilla Firefox, Safari, etc.).
This causes problems because the file often does not download completely and ends up corrupted, so the resulting IPython notebook will not run properly. One trick when trying to download .ipynb files on GitHub is to click the file, click **Raw**, then copy everything (Ctrl + A), paste it into a blank file (using a text editor such as Notepad, Notepad++, Vim, etc.), and save it as "whatever\_file\_name\_you\_choose.ipynb". Then you should be able to run this file properly, assuming a non-corrupted file was uploaded to GitHub.
A lot of people with very large, complicated IPython notebooks on Github will inevitably run into this issue when simply trying to download with **Save link as...**. Hopefully this helps!
|
41,231,632
|
I am taking a course that uses ipython notebook. When I try to download the notebook (through File -> Download as -> ipython notebook), I get a file that ends with ".ipynb.json". It doesn't open as an ipython notebook but as a .json file so something like this:
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._\n",
"\n",
"---"
]
},
...
}
```
I've tried deleting the ".json" in the file name and it doesn't work. How can I convert this file back to something that can be opened and run as an ipython notebook? Thank you very much!
|
2016/12/19
|
[
"https://Stackoverflow.com/questions/41231632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6841599/"
] |
My solution: just remove the **.json** filename extension. For example, change **myfile.ipynb.json** to **myfile.ipynb**. Then you can open it with a click in Jupyter notebook!
I encountered the same problem you did. I found a link that describes what an ipynb file actually is; see here <http://ipython.org/ipython-doc/rel-1.0.0/interactive/nbconvert.html>. It says an ipynb file is actually a JSON file. Hope this helps!
|
The easy thing to do is to copy the JSON contents into a text editor and save the file again with the .ipynb extension.
|
41,231,632
|
I am taking a course that uses ipython notebook. When I try to download the notebook (through File -> Download as -> ipython notebook), I get a file that ends with ".ipynb.json". It doesn't open as an ipython notebook but as a .json file so something like this:
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._\n",
"\n",
"---"
]
},
...
}
```
I've tried deleting the ".json" in the file name and it doesn't work. How can I convert this file back to something that can be opened and run as an ipython notebook? Thank you very much!
|
2016/12/19
|
[
"https://Stackoverflow.com/questions/41231632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6841599/"
] |
Are you trying to download this from GitHub? Especially on Google Chrome, I've had issues downloading .ipynb files using right click > **Save link as...**. I'm not sure if other browsers have this issue (Microsoft Edge, Mozilla Firefox, Safari, etc.).
This causes problems because the file often does not download completely and ends up corrupted, so the resulting IPython notebook will not run properly. One trick when trying to download .ipynb files on GitHub is to click the file, click **Raw**, then copy everything (Ctrl + A), paste it into a blank file (using a text editor such as Notepad, Notepad++, Vim, etc.), and save it as "whatever\_file\_name\_you\_choose.ipynb". Then you should be able to run this file properly, assuming a non-corrupted file was uploaded to GitHub.
A lot of people with very large, complicated IPython notebooks on Github will inevitably run into this issue when simply trying to download with **Save link as...**. Hopefully this helps!
|
I tried this method and it worked: just copy the contents, paste them into Notepad, and save as "file\_name.ipynb". Hope this works for you too.
|
41,231,632
|
I am taking a course that uses ipython notebook. When I try to download the notebook (through File -> Download as -> ipython notebook), I get a file that ends with ".ipynb.json". It doesn't open as an ipython notebook but as a .json file so something like this:
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._\n",
"\n",
"---"
]
},
...
}
```
I've tried deleting the ".json" in the file name and it doesn't work. How can I convert this file back to something that can be opened and run as an ipython notebook? Thank you very much!
|
2016/12/19
|
[
"https://Stackoverflow.com/questions/41231632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6841599/"
] |
On the Mac you could go and
1. Right click on the `filename.ipynb.json`
2. Click on `Get Info` from the list.
3. From the `Get Info` window, find the `Name & Extension` section and remove the `.json` extension/suffix from the file name.
Hope that helps!
|
Just remove the `.json` file extension leaving the `.ipynb` one, as pointed out by the following related post: <https://superuser.com/questions/1497243/why-cant-i-save-a-jupyter-notebook-as-a-ipynb>. As @jackie already said, you should consider them as `.json` files meant only to be edited by the IPython Notebook app itself, not for hand-editing.
|
41,231,632
|
I am taking a course that uses ipython notebook. When I try to download the notebook (through File -> Download as -> ipython notebook), I get a file that ends with ".ipynb.json". It doesn't open as an ipython notebook but as a .json file so something like this:
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._\n",
"\n",
"---"
]
},
...
}
```
I've tried deleting the ".json" in the file name and it doesn't work. How can I convert this file back to something that can be opened and run as an ipython notebook? Thank you very much!
|
2016/12/19
|
[
"https://Stackoverflow.com/questions/41231632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6841599/"
] |
My solution: just remove the **.json** filename extension. For example, change **myfile.ipynb.json** to **myfile.ipynb**. Then you can open it with a click in Jupyter notebook!
I encountered the same problem you did. I found a link that describes what an ipynb file actually is; see here <http://ipython.org/ipython-doc/rel-1.0.0/interactive/nbconvert.html>. It says an ipynb file is actually a JSON file. Hope this helps!
|
Just remove the `.json` file extension leaving the `.ipynb` one, as pointed out by the following related post: <https://superuser.com/questions/1497243/why-cant-i-save-a-jupyter-notebook-as-a-ipynb>. As @jackie already said, you should consider them as `.json` files meant only to be edited by the IPython Notebook app itself, not for hand-editing.
|
41,231,632
|
I am taking a course that uses ipython notebook. When I try to download the notebook (through File -> Download as -> ipython notebook), I get a file that ends with ".ipynb.json". It doesn't open as an ipython notebook but as a .json file so something like this:
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._\n",
"\n",
"---"
]
},
...
}
```
I've tried deleting the ".json" in the file name and it doesn't work. How can I convert this file back to something that can be opened and run as an ipython notebook? Thank you very much!
|
2016/12/19
|
[
"https://Stackoverflow.com/questions/41231632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6841599/"
] |
My solution: just remove the **.json** filename extension. For example, change **myfile.ipynb.json** to **myfile.ipynb**. Then you can open it with a click in Jupyter notebook!
I encountered the same problem you did. I found a link that describes what an ipynb file actually is; see here <http://ipython.org/ipython-doc/rel-1.0.0/interactive/nbconvert.html>. It says an ipynb file is actually a JSON file. Hope this helps!
|
After downloading the file that ends with .ipynb.json, take the following steps:
1. Go to your terminal/command line window
2. Navigate to the directory where your file is
3. Type:
Windows: `rename yourfile.ipynb.json yourfile.ipynb`
Unix/Linux: `mv yourfile.ipynb.json yourfile.ipynb`
This worked perfectly for me. A scripted alternative is sketched below.
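As that scripted alternative, here is a minimal sketch that strips the extra extension and checks that the result really is a notebook; it assumes nbformat is installed (it ships with Jupyter) and uses a placeholder file name:
```
import os
import nbformat

src = "yourfile.ipynb.json"
dst = os.path.splitext(src)[0]          # drops the trailing ".json" -> "yourfile.ipynb"

nb = nbformat.read(src, as_version=4)   # fails loudly if the JSON is not a notebook
nbformat.write(nb, dst)
print("Wrote", dst)
```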
|
41,231,632
|
I am taking a course that uses ipython notebook. When I try to download the notebook (through File -> Download as -> ipython notebook), I get a file that ends with ".ipynb.json". It doesn't open as an ipython notebook but as a .json file so something like this:
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._\n",
"\n",
"---"
]
},
...
}
```
I've tried deleting the ".json" in the file name and it doesn't work. How can I convert this file back to something that can be opened and run as an ipython notebook? Thank you very much!
|
2016/12/19
|
[
"https://Stackoverflow.com/questions/41231632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6841599/"
] |
My solution: just remove the **.json** filename extension. For example, change **myfile.ipynb.json** to **myfile.ipynb**. Then you can open it with a click in Jupyter notebook!
I encountered the same problem you did. I found a link that describes what an ipynb file actually is; see here <http://ipython.org/ipython-doc/rel-1.0.0/interactive/nbconvert.html>. It says an ipynb file is actually a JSON file. Hope this helps!
|
Are you trying to download this from GitHub? Especially on Google Chrome, I've had issues downloading .ipynb files using right click > **Save link as...**. I'm not sure if other browsers have this issue (Microsoft Edge, Mozilla Firefox, Safari, etc.).
This causes problems because the file often does not download completely and ends up corrupted, so the resulting IPython notebook will not run properly. One trick when trying to download .ipynb files on GitHub is to click the file, click **Raw**, then copy everything (Ctrl + A), paste it into a blank file (using a text editor such as Notepad, Notepad++, Vim, etc.), and save it as "whatever\_file\_name\_you\_choose.ipynb". Then you should be able to run this file properly, assuming a non-corrupted file was uploaded to GitHub.
A lot of people with very large, complicated IPython notebooks on Github will inevitably run into this issue when simply trying to download with **Save link as...**. Hopefully this helps!
|
41,231,632
|
I am taking a course that uses ipython notebook. When I try to download the notebook (through File -> Download as -> ipython notebook), I get a file that ends with ".ipynb.json". It doesn't open as an ipython notebook but as a .json file so something like this:
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._\n",
"\n",
"---"
]
},
...
}
```
I've tried deleting the ".json" in the file name and it doesn't work. How can I convert this file back to something that can be opened and run as an ipython notebook? Thank you very much!
|
2016/12/19
|
[
"https://Stackoverflow.com/questions/41231632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6841599/"
] |
Use a simple trick: let the file download automatically once, then download it again. The browser will prompt you to replace the existing file, and at that point you can save it with the .json extension replaced by .ipynb.
|
I tried this method and it worked: just copy the contents, paste them into Notepad, and save as "file\_name.ipynb". Hope this works for you too.
|
41,231,632
|
I am taking a course that uses ipython notebook. When I try to download the notebook (through File -> Download as -> ipython notebook), I get a file that ends with ".ipynb.json". It doesn't open as an ipython notebook but as a .json file so something like this:
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._\n",
"\n",
"---"
]
},
...
}
```
I've tried deleting the ".json" in the file name and it doesn't work. How can I convert this file back to something that can be opened and run as an ipython notebook? Thank you very much!
|
2016/12/19
|
[
"https://Stackoverflow.com/questions/41231632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6841599/"
] |
I opened it with nbviewer, selected everything, and saved it as a "txt" file, which I then opened in Notepad++. I then re-saved it as a file with the .ipynb extension, and it opened fine in my Jupyter notebook.
|
The easy thing to do is to copy the JSON contents into a text editor and save the file again with the .ipynb extension.
|
49,737,148
|
I am trying to create a CNN model in Keras with multiple Conv3D layers to work on the CIFAR-10 dataset, but I am facing the following issue:
>
> ValueError: ('The specified size contains a dimension with value <=
> 0', (-8000, 256))
>
>
>
Below is my code that I am trying to execute.
```python
from __future__ import print_function
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv3D, MaxPooling3D
from keras.optimizers import SGD
import os
from keras import backend as K
batch_size = 128
num_classes = 10
epochs = 20
learning_rate = 0.01
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
img_rows = x_train.shape[1]
img_cols = x_train.shape[2]
colors = x_train.shape[3]
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, colors, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, colors, img_rows, img_cols)
    input_shape = (1, colors, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, colors, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, colors, 1)
    input_shape = (img_rows, img_cols, colors, 1)
# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv3D(32, kernel_size=(3, 3, 3),activation='relu',input_shape=input_shape))
model.add(Conv3D(32, kernel_size=(3, 3, 3),activation='relu'))
model.add(MaxPooling3D(pool_size=(2, 2, 1)))
model.add(Dropout(0.25))
model.add(Conv3D(64, kernel_size=(3, 3, 3),activation='relu'))
model.add(Conv3D(64, kernel_size=(3, 3, 3),activation='relu'))
model.add(MaxPooling3D(pool_size=(2, 2, 1)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
sgd=SGD(lr=learning_rate)
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=sgd,
metrics=['accuracy'])
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
I have tried with *single* conv3d and it *worked* but the accuracy was very low. Code snippet as below:
```python
model = Sequential()
model.add(Conv3D(32, kernel_size=(3, 3, 3),activation='relu',input_shape=input_shape))
model.add(MaxPooling3D(pool_size=(2, 2, 1)))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
```
|
2018/04/09
|
[
"https://Stackoverflow.com/questions/49737148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2511239/"
] |
What do you gain by putting this string in your scenario? IMO all you are doing is making the scenario harder to read!
What do you lose by putting this string in your scenario?
Well, first of all you now have at least two things that determine the exact contents of the string: the thing in the application that creates it and the hardcoded string in your scenario. So you are repeating yourself.
In addition, you've increased the cost of change. Let's say we want our strings to change from 'Breedx' to 'Breed: x'. Now you have to change every scenario that looks at the drop-down. This will take much, much longer than changing the code.
So what can you do instead?
Change your scenario step so that it becomes `Then I should see the patient breeds` and delegate the HOW of the presentation of the breeds and even the sort of control that the breeds are presented in to something that is outside of Cucumber e.g. a helper method called by a step definition, or perhaps even something in your codebase.
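For illustration only, here is a minimal sketch of that delegation written as a Python step definition with behave; the page object and the `expected_breeds`/`breeds_in_dropdown` helpers are assumptions, not part of the original answer, and only the idea of hiding the HOW behind helpers matters:
```
from behave import then

@then("I should see the patient breeds")
def step_impl(context):
    # Both helpers are hypothetical: the expected list comes from the codebase,
    # and the page object hides how the drop-down is actually read.
    expected = context.app.expected_breeds()
    shown = context.page.breeds_in_dropdown()
    assert shown == expected, "Drop-down breeds differ: %r != %r" % (shown, expected)
```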
|
Try a data table approach. You will have to add a `DataTable` argument to the step definition.
```
Then Drop-dow patient_breed contains
  | Breed1  |
  | Breed2  |
  | ...     |
  | Breed20 |
```
For a multiline (doc string) approach, try the below. In this case you will have to add a `String` argument to the step definition.
```
Then Drop-dow patient_breed contains
"""
['Breed1','Breed2',.... Breed20']
"""
```
|
49,737,148
|
I am trying to create a CNN model in Keras with multiple Conv3D layers to work on the CIFAR-10 dataset, but I am facing the following issue:
>
> ValueError: ('The specified size contains a dimension with value <=
> 0', (-8000, 256))
>
>
>
Below is my code that I am trying to execute.
```python
from __future__ import print_function
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv3D, MaxPooling3D
from keras.optimizers import SGD
import os
from keras import backend as K
batch_size = 128
num_classes = 10
epochs = 20
learning_rate = 0.01
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
img_rows = x_train.shape[1]
img_cols = x_train.shape[2]
colors = x_train.shape[3]
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, colors, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, colors, img_rows, img_cols)
    input_shape = (1, colors, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, colors, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, colors, 1)
    input_shape = (img_rows, img_cols, colors, 1)
# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv3D(32, kernel_size=(3, 3, 3),activation='relu',input_shape=input_shape))
model.add(Conv3D(32, kernel_size=(3, 3, 3),activation='relu'))
model.add(MaxPooling3D(pool_size=(2, 2, 1)))
model.add(Dropout(0.25))
model.add(Conv3D(64, kernel_size=(3, 3, 3),activation='relu'))
model.add(Conv3D(64, kernel_size=(3, 3, 3),activation='relu'))
model.add(MaxPooling3D(pool_size=(2, 2, 1)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
sgd=SGD(lr=learning_rate)
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=sgd,
metrics=['accuracy'])
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
I have tried with *single* conv3d and it *worked* but the accuracy was very low. Code snippet as below:
```python
model = Sequential()
model.add(Conv3D(32, kernel_size=(3, 3, 3),activation='relu',input_shape=input_shape))
model.add(MaxPooling3D(pool_size=(2, 2, 1)))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
```
|
2018/04/09
|
[
"https://Stackoverflow.com/questions/49737148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2511239/"
] |
What do you gain by putting this string in your scenario? IMO all you are doing is making the scenario harder to read!
What do you lose by putting this string in your scenario?
Well, first of all you now have at least two things that determine the exact contents of the string: the thing in the application that creates it and the hardcoded string in your scenario. So you are repeating yourself.
In addition, you've increased the cost of change. Let's say we want our strings to change from 'Breedx' to 'Breed: x'. Now you have to change every scenario that looks at the drop-down. This will take much, much longer than changing the code.
So what can you do instead?
Change your scenario step so that it becomes `Then I should see the patient breeds` and delegate the HOW of the presentation of the breeds and even the sort of control that the breeds are presented in to something that is outside of Cucumber e.g. a helper method called by a step definition, or perhaps even something in your codebase.
|
I would read the entire string and then split it using Java after it has been passed into the step. In order to keep my step as a one or two liner, I would use a helper method that I implemented myself.
|
2,460,407
|
I want to develop an anonymous chat website like <http://omgele.com>.
I know that this website is developed in Python using the `twisted matrix` framework. Using Twisted, it is easy to develop such a website.
But I am very comfortable in Java, have 1 year's experience with it, and don't know Python.
1. What should I do? Should I start learning Python to take advantage of the Twisted framework?
**OR**
2. Should I develop it in Java? If so, which framework would you suggest?
|
2010/03/17
|
[
"https://Stackoverflow.com/questions/2460407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/291241/"
] |
*I would politely ask the people at omgele.com for a copy of their code and study it to*
1. learn Python and twisted matrix
2. decide whether to use it or, if I decide against it, apply what I learned from them to write my own Java site
Unfortunately, the source code is not likely to be available.
Still, I advise learning from others and, if at all possible, joining them to improve the code.
|
Learning Python can be an informative, interesting, and valuable process. When you really get going, you will probably find you can develop more rapidly than in Java. Twisted is a fairly well-executed framework which lets you avoid many of the pitfalls you can run into with asynchronous IO; it has top-notch implementations of quite a few protocols and a passionate, competent support community.
If you're interested in the knowledge and experience you'll gain doing so, go ahead and learn Python and use Twisted. If you feel pretty solid with your knowledge of Java you can probably read the [official tutorial](http://docs.python.org/tutorial/) a couple times then start hacking away. Twisted can take a while to click, but it's really not all that hard.
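To give a feel for it, here is a minimal sketch of a line-based chat relay in Twisted; it is not how omgele.com works, just an illustration of how little code a Twisted protocol needs:
```
from twisted.internet import reactor
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver

class Chat(LineReceiver):
    def __init__(self, clients):
        self.clients = clients

    def connectionMade(self):
        self.clients.append(self)

    def connectionLost(self, reason):
        self.clients.remove(self)

    def lineReceived(self, line):
        # Relay each received line to every other connected client
        for client in self.clients:
            if client is not self:
                client.sendLine(line)

class ChatFactory(Factory):
    def __init__(self):
        self.clients = []

    def buildProtocol(self, addr):
        return Chat(self.clients)

reactor.listenTCP(8123, ChatFactory())
reactor.run()
```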
|
2,460,407
|
I want to develop an anonymous chat website like <http://omgele.com>.
I know that this website is developed in Python using the `twisted matrix` framework. Using Twisted, it is easy to develop such a website.
But I am very comfortable in Java, have 1 year's experience with it, and don't know Python.
1. What should I do? Should I start learning Python to take advantage of the Twisted framework?
**OR**
2. Should I develop it in Java? If so, which framework would you suggest?
|
2010/03/17
|
[
"https://Stackoverflow.com/questions/2460407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/291241/"
] |
Learn python.
This will add one very powerful tool to your toolbox.
Also, Twisted can do much more than just chat, which will help you in the future.
|
*I would politely ask the people at omgele.com for a copy of their code and study it to*
1. learn Python and twisted matrix
2. decide whether to use it or, if I decide against it, apply what I learned from them to write my own Java site
Unfortunately, the source code is not likely to be available.
Still, I advise learning from others and, if at all possible, joining them to improve the code.
|
2,460,407
|
I want to develop an anonymous chat website like <http://omgele.com>.
I know that this website is developed in Python using the `twisted matrix` framework. Using Twisted, it is easy to develop such a website.
But I am very comfortable in Java, have 1 year's experience with it, and don't know Python.
1. What should I do? Should I start learning Python to take advantage of the Twisted framework?
**OR**
2. Should I develop it in Java? If so, which framework would you suggest?
|
2010/03/17
|
[
"https://Stackoverflow.com/questions/2460407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/291241/"
] |
*I would politely ask the people at omgele.com for a copy of their code and study it to*
1. learn Python and twisted matrix
2. decide whether to use it or, if I decide against it, apply what I learned from them to write my own Java site
Unfortunately, the source code is not likely to be available.
Still, I advise learning from others and, if at all possible, joining them to improve the code.
|
I've worked with about a dozen different languages, and started with Python about two months ago. Java and Python in developing web apps, middleware, and services ROCKS!!
Learn Python.
|
2,460,407
|
I want to develop an anonymous chat website like <http://omgele.com>.
I know that this website is developed in Python using the `twisted matrix` framework. Using Twisted, it is easy to develop such a website.
But I am very comfortable in Java, have 1 year's experience with it, and don't know Python.
1. What should I do? Should I start learning Python to take advantage of the Twisted framework?
**OR**
2. Should I develop it in Java? If so, which framework would you suggest?
|
2010/03/17
|
[
"https://Stackoverflow.com/questions/2460407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/291241/"
] |
Learn python.
This will add one very powerful tool to your toolbox.
Also, Twisted can do much more than just chat, which will help you in the future.
|
Learning Python can be an informative, interesting, and valuable process. When you really get going, you will probably find you can develop more rapidly than in Java. Twisted is a fairly well-executed framework which lets you avoid many of the pitfalls you can run into with asynchronous IO; it has top-notch implementations of quite a few protocols and a passionate, competent support community.
If you're interested in the knowledge and experience you'll gain doing so, go ahead and learn Python and use Twisted. If you feel pretty solid with your knowledge of Java you can probably read the [official tutorial](http://docs.python.org/tutorial/) a couple times then start hacking away. Twisted can take a while to click, but it's really not all that hard.
|
2,460,407
|
I want to develop an anonymous chat website like <http://omgele.com>.
I know that this website is developed in Python using the `twisted matrix` framework. Using Twisted, it is easy to develop such a website.
But I am very comfortable in Java, have 1 year's experience with it, and don't know Python.
1. What should I do? Should I start learning Python to take advantage of the Twisted framework?
**OR**
2. Should I develop it in Java? If so, which framework would you suggest?
|
2010/03/17
|
[
"https://Stackoverflow.com/questions/2460407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/291241/"
] |
To your #2 question, take a look at Jabber (XMPP); it has several Java clients and is widely supported. For example, Gtalk and Facebook use XMPP.
[Here](http://www.igniterealtime.org/projects/openfire/) is an excellent server written in Java.
|
Learning Python can be an informative, interesting, and valuable process. When you really get going, you will probably find you can develop more rapidly than in Java. Twisted is a fairly well-executed framework which lets you avoid many of the pitfalls you can run into with asynchronous IO; it has top-notch implementations of quite a few protocols and a passionate, competent support community.
If you're interested in the knowledge and experience you'll gain doing so, go ahead and learn Python and use Twisted. If you feel pretty solid with your knowledge of Java you can probably read the [official tutorial](http://docs.python.org/tutorial/) a couple times then start hacking away. Twisted can take a while to click, but it's really not all that hard.
|
2,460,407
|
I want to develop an anonymous chat website like <http://omgele.com>.
I know that this website is developed in Python using the `twisted matrix` framework. Using Twisted, it is easy to develop such a website.
But I am very comfortable in Java, have 1 year's experience with it, and don't know Python.
1. What should I do? Should I start learning Python to take advantage of the Twisted framework?
**OR**
2. Should I develop it in Java? If so, which framework would you suggest?
|
2010/03/17
|
[
"https://Stackoverflow.com/questions/2460407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/291241/"
] |
Learn python.
This will add one very powerful tool to your toolbox.
Also, Twisted can do much more than just chat, which will help you in the future.
|
To your #2 question, take a look at Jabber (XMPP); it has several Java clients and is widely supported. For example, Gtalk and Facebook use XMPP.
[Here](http://www.igniterealtime.org/projects/openfire/) is an excellent server written in Java.
|
2,460,407
|
I want to develop an anonymous chat website like <http://omgele.com>.
I know that this website is developed in Python using the `twisted matrix` framework. Using Twisted, it is easy to develop such a website.
But I am very comfortable in Java, have 1 year's experience with it, and don't know Python.
1. What should I do? Should I start learning Python to take advantage of the Twisted framework?
**OR**
2. Should I develop it in Java? If so, which framework would you suggest?
|
2010/03/17
|
[
"https://Stackoverflow.com/questions/2460407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/291241/"
] |
Learn python.
This will add one very powerful tool to your toolbox.
Also, Twisted can do much more than just chat, which will help you in the future.
|
I've worked with about a dozen different languages, and started with Python about two months ago. Java and Python in developing web apps, middleware, and services ROCKS!!
Learn Python.
|
2,460,407
|
I want to develop an anonymous chat website like <http://omgele.com>.
I know that this website is developed in Python using the `twisted matrix` framework. Using Twisted, it is easy to develop such a website.
But I am very comfortable in Java, have 1 year's experience with it, and don't know Python.
1. What should I do? Should I start learning Python to take advantage of the Twisted framework?
**OR**
2. Should I develop it in Java? If so, which framework would you suggest?
|
2010/03/17
|
[
"https://Stackoverflow.com/questions/2460407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/291241/"
] |
To your #2 question, take a look at Jabber (XMPP); it has several Java clients and is widely supported. For example, Gtalk and Facebook use XMPP.
[Here](http://www.igniterealtime.org/projects/openfire/) is an excellent server written in Java.
|
I've worked with about a dozen different languages, and started with Python about two months ago. Java and Python in developing web apps, middleware, and services ROCKS!!
Learn Python.
|
12,142,174
|
I want to call a Python script from C, passing some arguments that are needed in the script.
The script I want to use is mrsync, or [multicast remote sync](http://sourceforge.net/projects/mrsync/). I got this working from command line, by calling:
```
python mrsync.py -m /tmp/targets.list -s /tmp/sourcedata -t /tmp/targetdata
```
>
> -m is the list containing the target ip-addresses.
> -s is the directory that contains the files to be synced.
> -t is the directory on the target machines where the files will be put.
>
>
>
So far I managed to run a Python script without parameters, by using the following C program:
```
Py_Initialize();
FILE* file = fopen("/tmp/myfile.py", "r");
PyRun_SimpleFile(file, "/tmp/myfile.py");
Py_Finalize();
```
This works fine. However, I can't find how I can pass these arguments to the `PyRun_SimpleFile(..)` method.
|
2012/08/27
|
[
"https://Stackoverflow.com/questions/12142174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/960585/"
] |
Seems like you're looking for an answer using the python development APIs from Python.h. Here's an example for you that should work:
```
#My python script called mypy.py
import sys
if len(sys.argv) != 3:   # argv[0] is the script name, so two extra args means a length of 3
    sys.exit("Not enough args")
ca_one = str(sys.argv[1])
ca_two = str(sys.argv[2])
print "My command line args are " + ca_one + " and " + ca_two
```
And then the C code to pass these args:
```
//My code file
#include <stdio.h>
#include <python2.7/Python.h>
void main()
{
FILE* file;
int argc;
char * argv[3];
argc = 3;
argv[0] = "mypy.py";
argv[1] = "-m";
argv[2] = "/tmp/targets.list";
Py_SetProgramName(argv[0]);
Py_Initialize();
PySys_SetArgv(argc, argv);
file = fopen("mypy.py","r");
PyRun_SimpleFile(file, "mypy.py");
Py_Finalize();
return;
}
```
If you can pass the arguments into your C function this task becomes even easier:
```
void main(int argc, char *argv[])
{
FILE* file;
Py_SetProgramName(argv[0]);
Py_Initialize();
PySys_SetArgv(argc, argv);
file = fopen("mypy.py","r");
PyRun_SimpleFile(file, "mypy.py");
Py_Finalize();
return;
}
```
You can just pass those straight through. Now, my solutions only used 2 command line args for the sake of time, but you can use the same concept for all 6 that you need to pass... and of course there are cleaner ways to capture the args on the Python side too, but that's just the basic idea.
Hope it helps!
|
You have two options.
1. Call
```
system("python mrsync.py -m /tmp/targets.list -s /tmp/sourcedata -t /tmp/targetdata")
```
in your C code.
2. Actually use the API that `mrsync` (hopefully) defines. This is more flexible, but much more complicated. The first step would be to work out how you would perform the above operation as a Python function call. If `mrsync` has been written nicely, there will be a function `mrsync.sync` (say) that you call as
```
mrsync.sync("/tmp/targets.list", "/tmp/sourcedata", "/tmp/targetdata")
```
Once you've worked out how to do that, you can call the function directly from the C code using the Python API.
|
20,412,091
|
I am trying to make this two-body simulation work in VPython. It seems to be working, but something is wrong with my center of mass, or something; I don't really know. The two-body system is drifting instead of staying still.
```
from visual import*
mt= 1.99e30 #Kg
G=6.67e-11 #N*(m/kg)^2
#Binary System stars
StarA=sphere(pos= (3.73e11,0,0),radius=5.28e10,opacity=0.5,color=color.green,make_trail=True, interval=10)
StarB=sphere(pos=(-3.73e11,0,0),radius=4.86e10,opacity=0.5, color=color.blue,make_trail=True, interval=10)
#StarC=sphere(pos=(3.44e13,0,0),radius=2.92e11,opacity=0.5, color=color.yellow,make_trail=True, interval=10)
#mass of binary stars
StarA.mass= 1.45e30 #kg
StarB.mass= 1.37e30 #kg
#StarC.mass= 6.16e29 #kg
#initial velocities of binary system
StarA.velocity =vector(0,8181.2,0)
StarB.velocity =vector(0,-8181.2,0)
#StarC.velocity= vector(0,1289.4,0)
#Time step for each binary star
dt=1e5
StarA.pos= StarA.pos + StarA.velocity*dt
StarB.pos= StarB.pos + StarB.velocity*dt
#StarC.pos= StarC.pos + StarC.velocity*dt
#Lists
objects=[]
objects.append(StarA)
objects.append(StarB)
#objects.append(StarC)
#center of mass
Ycm=0
Xcm=0
Zcm=0
Vcmx=0
Vcmy=0
Vcmz=0
TotalMass=0
for each in objects:
    TotalMass=TotalMass+each.mass
    Xcm= Xcm + each.pos.x*each.mass
    Ycm= Ycm + each.pos.y*each.mass
    Zcm= Zcm + each.pos.z*each.mass
    Vcmx= Vcmx + each.velocity.x*each.mass
    Vcmy= Vcmy + each.velocity.y*each.mass
    Vcmz= Vcmz + each.velocity.z*each.mass
for each in objects:
    Xcm=Xcm/TotalMass
    Ycm=Ycm/TotalMass
    Zcm=Zcm/TotalMass
    Vcmx=Vcmx/TotalMass
    Vcmy=Vcmy/TotalMass
    Vcmz=Vcmz/TotalMass
    each.pos=each.pos-vector(Xcm,Ycm,Zcm)
    each.velocity=each.velocity-vector(Vcmx,Vcmy,Vcmz)
#Code for Triple system
firstStep=0
while True:
    rate(200)
    for i in objects:
        i.acceleration = vector(0,0,0)
        for j in objects:
            if i != j:
                dist = j.pos - i.pos
                i.acceleration = i.acceleration + G * j.mass * dist / mag(dist)**3
    for i in objects:
        if firstStep==0:
            i.velocity = i.velocity + i.acceleration*dt
            firstStep=1
        else:
            i.velocity = i.velocity + i.acceleration*dt
        i.pos = i.pos + i.velocity*dt
```
|
2013/12/05
|
[
"https://Stackoverflow.com/questions/20412091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2839580/"
] |
Two points I'd like to make about your problem:
1. You're using Euler's method of integration. That is the "i.velocity = i.velocity + i.acceleration\*dt" part of your code. This method is not very accurate, especially with oscillatory problems like this one. That's part of the reason you're noticing the drift in your system. I'd recommend using the [RK4 method of integration.](http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) It's a bit more complicated but gives very good results, although [Verlet integration](http://en.wikipedia.org/wiki/Verlet_integration) may be more useful for you here.
2. Your system may have a net momentum in some direction. You have the stars starting with the same speed but different masses, so the net momentum is non-zero. To correct for this, transform the system into a zero-momentum reference frame. To do this, add up the momentum (mass times velocity vector) of all the stars, then divide by the total mass of the system. This gives you a velocity vector to subtract from each star's initial velocity, so your center of mass will not move (a short sketch of this correction follows after this list).
Hope this helps!
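Here is a minimal sketch of the correction from point 2, written against the `objects` list and the VPython `vector` from the question; it only assumes each star has `.mass` and `.velocity` set before the main loop starts:
```
# Shift every star into the zero-momentum (centre-of-mass) frame
total_mass = sum(obj.mass for obj in objects)
total_momentum = vector(0, 0, 0)
for obj in objects:
    total_momentum = total_momentum + obj.mass * obj.velocity

v_cm = total_momentum / total_mass      # centre-of-mass velocity
for obj in objects:
    obj.velocity = obj.velocity - v_cm  # net momentum is now zero
```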
|
This is wrong:
```
for each in objects:
Xcm=Xcm/TotalMass
...
```
This division should only happen once. And then the averages should be removed from the objects, as in
```
Xcm=Xcm/TotalMass
...
for each in objects:
each.pos.x -= Xcm
...
```
|
28,550,511
|
I am trying to learn how to use callbacks between C and Python by way of Cython and have been looking at [this demo](https://github.com/cython/cython/tree/master/Demos/callback). I would like to apply a Python function to one std::vector/numpy.array and store the results in another. I can compile and run without errors, but ultimately the vector y is not being changed.
C++ header
```
// callback.hpp
#include<vector>
typedef double (*Callback)( void *apply, double &x );
void function( Callback callback, void *apply, std::vector<double> &x,
               std::vector<double> &y );
```
C++ source
```
// callback.cpp
#include "callback.hpp"
#include <iostream>
using namespace std;
void function( Callback callback, void* apply,
vector<double> &x, vector<double> &y ) {
int n = x.size();
for(int i=0;i<n;++i) {
y[i] = callback(apply,x[i]);
std::cout << y[i] << std::endl;
  }
}
```
Cython header
```
# cy_callback.pxd
import cython
from libcpp.vector cimport vector
cdef extern from "callback.hpp":
    ctypedef double (*Callback)( void *apply, double &x )
    void function( Callback callback, void* apply, vector[double] &x,
                   vector[double] &y )
```
Cython source
```
# cy_callback.pyx
from cy_callback cimport function
from libcpp.vector cimport vector
def pyfun(f,x,y):
    function( cb, <void*> f, <vector[double]&> x, <vector[double]&> y )
cdef double cb(void* f, double &x):
    return (<object>f)(x)
```
I compile with fairly boilerplate setup: python setup.py build\_ext -i
```
# setup.py
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy
import os
os.environ["CC"] = "g++"
os.environ["CXX"] = "g++"
setup( name = 'callback',
ext_modules=[Extension("callback",
sources=["cy_callback.pyx","callback.cpp"],
language="c++",
include_dirs=[numpy.get_include()])],
cmdclass = {'build_ext': build_ext},
)
```
And finally test with the Python script
```
# test.py
import numpy as np
from callback import pyfun
x = np.arange(11)
y = np.zeros(11)
pyfun(lambda x:x**2,x,y)
print(y)
```
When the elements of y are set in callback.cpp, the correct values are being
printed to the screen, which means that pyfun is indeed being evaluated correctly, however, at the Python level, y remains all zeros.
Any idea what I'm doing incorrectly?
|
2015/02/16
|
[
"https://Stackoverflow.com/questions/28550511",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2308288/"
] |
Thanks, Ian. Based on your suggestion, I changed the code to return a vector instead of trying to modify it in place. This works, although it is admittedly not particularly efficient.
callback.hpp
```
typedef double (*Callback)( void *apply, double x );
vector<double> function( Callback callback, void *apply,
const vector<double> &x );
```
callback.cpp
```
vector<double> function( Callback callback, void* apply,
const vector<double> &x ) {
int n = x.size();
vector<double> y(n);
for(int i=0;i<n;++i) {
y[i] = callback(apply,x[i]);
}
return y;
}
```
cy\_callback.pxd
```
cdef extern from "callback.hpp":
    ctypedef double (*Callback)( void *apply, const double &x )
    vector[double] function( Callback callback, void* apply,
                             vector[double] &x )
```
cy\_callback.pyx
```
from cy_callback cimport function
from libcpp.vector cimport vector
def pyfun(f,x):
    return function( cb, <void*> f, <const vector[double]&> x )
cdef double cb(void* f, double x ):
    return (<object>f)(x)
```
test.py
```
import numpy as np
from callback import pyfun
x = np.arange(11)
y = np.zeros(11)
def f(x):
    return x*x
y = pyfun(f,x)
print(y)
```
|
There is an implicit copy of the array made when you cast it to be a vector.
There isn't currently any way to have a vector take ownership of memory that has already been allocated, so the only workaround will be to copy the values manually or by exposing `std::copy` to cython. See [How to cheaply assign C-style array to std::vector?](https://stackoverflow.com/questions/5836324/how-to-cheaply-assign-c-style-array-to-stdvector).
|
67,798,070
|
I am more or less following [this example](http://4/1AY0e-g4pMh6JPfkexh5nvWf9lvug3sHK98_jxAnwhsYlrB3F20Jkp350PKY) to integrate the ray tune hyperparameter library with the huggingface transformers library using my own dataset.
Here is my script:
```
import ray
from ray import tune
from ray.tune import CLIReporter
from ray.tune.examples.pbt_transformers.utils import download_data, \
build_compute_metrics_fn
from ray.tune.schedulers import PopulationBasedTraining
from transformers import glue_tasks_num_labels, AutoConfig, \
AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments
def get_model():
    # tokenizer = AutoTokenizer.from_pretrained(model_name, additional_special_tokens = ['[CHARACTER]'])
    model = ElectraForSequenceClassification.from_pretrained('google/electra-small-discriminator', num_labels=2)
    model.resize_token_embeddings(len(tokenizer))
    return model
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
def compute_metrics(pred):
    labels = pred.label_ids
    preds = pred.predictions.argmax(-1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted')
    acc = accuracy_score(labels, preds)
    return {
        'accuracy': acc,
        'f1': f1,
        'precision': precision,
        'recall': recall
    }
training_args = TrainingArguments(
"electra_hp_tune",
report_to = "wandb",
learning_rate=2e-5, # config
do_train=True,
do_eval=True,
evaluation_strategy="epoch",
load_best_model_at_end=True,
num_train_epochs=2, # config
per_device_train_batch_size=16, # config
per_device_eval_batch_size=16, # config
warmup_steps=0,
weight_decay=0.1, # config
logging_dir="./logs",
)
trainer = Trainer(
model_init=get_model,
args=training_args,
train_dataset=chunked_encoded_dataset['train'],
eval_dataset=chunked_encoded_dataset['validation'],
compute_metrics=compute_metrics
)
tune_config = {
"per_device_train_batch_size": 32,
"per_device_eval_batch_size": 32,
"num_train_epochs": tune.choice([2, 3, 4, 5])
}
scheduler = PopulationBasedTraining(
time_attr="training_iteration",
metric="eval_acc",
mode="max",
perturbation_interval=1,
hyperparam_mutations={
"weight_decay": tune.uniform(0.0, 0.3),
"learning_rate": tune.uniform(1e-5, 2.5e-5),
"per_device_train_batch_size": [16, 32, 64],
})
reporter = CLIReporter(
parameter_columns={
"weight_decay": "w_decay",
"learning_rate": "lr",
"per_device_train_batch_size": "train_bs/gpu",
"num_train_epochs": "num_epochs"
},
metric_columns=[
"eval_f1", "eval_loss", "epoch", "training_iteration"
])
from ray.tune.integration.wandb import WandbLogger
trainer.hyperparameter_search(
hp_space=lambda _: tune_config,
backend="ray",
n_trials=10,
scheduler=scheduler,
keep_checkpoints_num=1,
checkpoint_score_attr="training_iteration",
progress_reporter=reporter,
name="tune_transformer_gr")
```
The last function call (to trainer.hyperparameter\_search) is when the error is raised. The error message is:
>
> AttributeError: module 'pickle' has no attribute 'PickleBuffer'
>
>
>
And here is the full stack trace:
```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
in ()
      8     checkpoint_score_attr="training_iteration",
      9     progress_reporter=reporter,
---> 10     name="tune_transformer_gr")

14 frames

/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in hyperparameter_search(self, hp_space, compute_objective, n_trials, direction, backend, hp_name, **kwargs)
   1666
   1667         run_hp_search = run_hp_search_optuna if backend == HPSearchBackend.OPTUNA else run_hp_search_ray
-> 1668         best_run = run_hp_search(self, n_trials, direction, **kwargs)
   1669
   1670         self.hp_search_backend = None

/usr/local/lib/python3.7/dist-packages/transformers/integrations.py in run_hp_search_ray(trainer, n_trials, direction, **kwargs)
    231
    232     analysis = ray.tune.run(
--> 233         ray.tune.with_parameters(_objective, local_trainer=trainer),
    234         config=trainer.hp_space(None),
    235         num_samples=n_trials,

/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py in with_parameters(trainable, **kwargs)
    294     prefix = f"{str(trainable)}_"
    295     for k, v in kwargs.items():
--> 296         parameter_registry.put(prefix + k, v)
    297
    298     trainable_name = getattr(trainable, "__name__", "tune_with_parameters")

/usr/local/lib/python3.7/dist-packages/ray/tune/registry.py in put(self, k, v)
    160         self.to_flush[k] = v
    161         if ray.is_initialized():
--> 162             self.flush()
    163
    164     def get(self, k):

/usr/local/lib/python3.7/dist-packages/ray/tune/registry.py in flush(self)
    169     def flush(self):
    170         for k, v in self.to_flush.items():
--> 171             self.references[k] = ray.put(v)
    172         self.to_flush.clear()
    173

/usr/local/lib/python3.7/dist-packages/ray/_private/client_mode_hook.py in wrapper(*args, **kwargs)
     45         if client_mode_should_convert():
     46             return getattr(ray, func.__name__)(*args, **kwargs)
---> 47         return func(*args, **kwargs)
     48
     49     return wrapper

/usr/local/lib/python3.7/dist-packages/ray/worker.py in put(value)
   1512     with profiling.profile("ray.put"):
   1513         try:
-> 1514             object_ref = worker.put_object(value)
   1515         except ObjectStoreFullError:
   1516             logger.info(

/usr/local/lib/python3.7/dist-packages/ray/worker.py in put_object(self, value, object_ref)
    259                 "inserting with an ObjectRef")
    260
--> 261         serialized_value = self.get_serialization_context().serialize(value)
    262         # This *must* be the first place that we construct this python
    263         # ObjectRef because an entry with 0 local references is created when

/usr/local/lib/python3.7/dist-packages/ray/serialization.py in serialize(self, value)
    322             return RawSerializedObject(value)
    323         else:
--> 324             return self._serialize_to_msgpack(value)

/usr/local/lib/python3.7/dist-packages/ray/serialization.py in _serialize_to_msgpack(self, value)
    302             metadata = ray_constants.OBJECT_METADATA_TYPE_PYTHON
    303             pickle5_serialized_object = \
--> 304                 self._serialize_to_pickle5(metadata, python_objects)
    305         else:
    306             pickle5_serialized_object = None

/usr/local/lib/python3.7/dist-packages/ray/serialization.py in _serialize_to_pickle5(self, metadata, value)
    262         except Exception as e:
    263             self.get_and_clear_contained_object_refs()
--> 264             raise e
    265         finally:
    266             self.set_out_of_band_serialization()

/usr/local/lib/python3.7/dist-packages/ray/serialization.py in _serialize_to_pickle5(self, metadata, value)
    259             self.set_in_band_serialization()
    260             inband = pickle.dumps(
--> 261                 value, protocol=5, buffer_callback=writer.buffer_callback)
    262         except Exception as e:
    263             self.get_and_clear_contained_object_refs()

/usr/local/lib/python3.7/dist-packages/ray/cloudpickle/cloudpickle_fast.py in dumps(obj, protocol, buffer_callback)
     71             file, protocol=protocol, buffer_callback=buffer_callback
     72         )
---> 73         cp.dump(obj)
     74         return file.getvalue()
     75

/usr/local/lib/python3.7/dist-packages/ray/cloudpickle/cloudpickle_fast.py in dump(self, obj)
    578     def dump(self, obj):
    579         try:
--> 580             return Pickler.dump(self, obj)
    581         except RuntimeError as e:
    582             if "recursion" in e.args[0]:

/usr/local/lib/python3.7/dist-packages/pyarrow/io.pxi in pyarrow.lib.Buffer.__reduce_ex__()

AttributeError: module 'pickle' has no attribute 'PickleBuffer'
```
My environment set-up:
* Am using Google Colab
* Platform: Linux-5.4.109+-x86\_64-with-Ubuntu-18.04-bionic
* Python version: 3.7.10
* Transformers version: 4.6.1
* ray version: 1.3.0
What I have tried:
* Updating pickle
* Installed and imported pickle5 as pickle
* Made sure that I did not have a python file with the name of 'pickle' in my immediate directory
Where is this bug coming from and how can I resolve it?
|
2021/06/02
|
[
"https://Stackoverflow.com/questions/67798070",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7254514/"
] |
I had the same error when trying to use pickle.dump(); for me it worked to downgrade pickle5 from version 0.0.11 to 0.0.10.
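For reference, a minimal sketch of that downgrade (assuming the environment is managed with pip and the package is the `pickle5` backport):
```
pip install pickle5==0.0.10
```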
|
Not a "real" solution but at least a workaround. For me this issue was occurring on Python 3.7. Switching to Python 3.8 solved the issue.
|
67,798,070
|
I am more or less following [this example](http://4/1AY0e-g4pMh6JPfkexh5nvWf9lvug3sHK98_jxAnwhsYlrB3F20Jkp350PKY) to integrate the ray tune hyperparameter library with the huggingface transformers library using my own dataset.
Here is my script:
```
import ray
from ray import tune
from ray.tune import CLIReporter
from ray.tune.examples.pbt_transformers.utils import download_data, \
build_compute_metrics_fn
from ray.tune.schedulers import PopulationBasedTraining
from transformers import glue_tasks_num_labels, AutoConfig, \
AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments
def get_model():
# tokenizer = AutoTokenizer.from_pretrained(model_name, additional_special_tokens = ['[CHARACTER]'])
model = ElectraForSequenceClassification.from_pretrained('google/electra-small-discriminator', num_labels=2)
model.resize_token_embeddings(len(tokenizer))
return model
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted')
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
training_args = TrainingArguments(
"electra_hp_tune",
report_to = "wandb",
learning_rate=2e-5, # config
do_train=True,
do_eval=True,
evaluation_strategy="epoch",
load_best_model_at_end=True,
num_train_epochs=2, # config
per_device_train_batch_size=16, # config
per_device_eval_batch_size=16, # config
warmup_steps=0,
weight_decay=0.1, # config
logging_dir="./logs",
)
trainer = Trainer(
model_init=get_model,
args=training_args,
train_dataset=chunked_encoded_dataset['train'],
eval_dataset=chunked_encoded_dataset['validation'],
compute_metrics=compute_metrics
)
tune_config = {
"per_device_train_batch_size": 32,
"per_device_eval_batch_size": 32,
"num_train_epochs": tune.choice([2, 3, 4, 5])
}
scheduler = PopulationBasedTraining(
time_attr="training_iteration",
metric="eval_acc",
mode="max",
perturbation_interval=1,
hyperparam_mutations={
"weight_decay": tune.uniform(0.0, 0.3),
"learning_rate": tune.uniform(1e-5, 2.5e-5),
"per_device_train_batch_size": [16, 32, 64],
})
reporter = CLIReporter(
parameter_columns={
"weight_decay": "w_decay",
"learning_rate": "lr",
"per_device_train_batch_size": "train_bs/gpu",
"num_train_epochs": "num_epochs"
},
metric_columns=[
"eval_f1", "eval_loss", "epoch", "training_iteration"
])
from ray.tune.integration.wandb import WandbLogger
trainer.hyperparameter_search(
hp_space=lambda _: tune_config,
backend="ray",
n_trials=10,
scheduler=scheduler,
keep_checkpoints_num=1,
checkpoint_score_attr="training_iteration",
progress_reporter=reporter,
name="tune_transformer_gr")
```
The last function call (to trainer.hyperparameter\_search) is when the error is raised. The error message is:
>
> AttributeError: module 'pickle' has no attribute 'PickleBuffer'
>
>
>
And here is the full stack trace:
```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
in ()
      8     checkpoint_score_attr="training_iteration",
      9     progress_reporter=reporter,
---> 10     name="tune_transformer_gr")

14 frames

/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in hyperparameter_search(self, hp_space, compute_objective, n_trials, direction, backend, hp_name, **kwargs)
   1666
   1667         run_hp_search = run_hp_search_optuna if backend == HPSearchBackend.OPTUNA else run_hp_search_ray
-> 1668         best_run = run_hp_search(self, n_trials, direction, **kwargs)
   1669
   1670         self.hp_search_backend = None

/usr/local/lib/python3.7/dist-packages/transformers/integrations.py in run_hp_search_ray(trainer, n_trials, direction, **kwargs)
    231
    232     analysis = ray.tune.run(
--> 233         ray.tune.with_parameters(_objective, local_trainer=trainer),
    234         config=trainer.hp_space(None),
    235         num_samples=n_trials,

/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py in with_parameters(trainable, **kwargs)
    294     prefix = f"{str(trainable)}_"
    295     for k, v in kwargs.items():
--> 296         parameter_registry.put(prefix + k, v)
    297
    298     trainable_name = getattr(trainable, "__name__", "tune_with_parameters")

/usr/local/lib/python3.7/dist-packages/ray/tune/registry.py in put(self, k, v)
    160         self.to_flush[k] = v
    161         if ray.is_initialized():
--> 162             self.flush()
    163
    164     def get(self, k):

/usr/local/lib/python3.7/dist-packages/ray/tune/registry.py in flush(self)
    169     def flush(self):
    170         for k, v in self.to_flush.items():
--> 171             self.references[k] = ray.put(v)
    172         self.to_flush.clear()
    173

/usr/local/lib/python3.7/dist-packages/ray/_private/client_mode_hook.py in wrapper(*args, **kwargs)
     45         if client_mode_should_convert():
     46             return getattr(ray, func.__name__)(*args, **kwargs)
---> 47         return func(*args, **kwargs)
     48
     49     return wrapper

/usr/local/lib/python3.7/dist-packages/ray/worker.py in put(value)
   1512     with profiling.profile("ray.put"):
   1513         try:
-> 1514             object_ref = worker.put_object(value)
   1515         except ObjectStoreFullError:
   1516             logger.info(

/usr/local/lib/python3.7/dist-packages/ray/worker.py in put_object(self, value, object_ref)
    259                 "inserting with an ObjectRef")
    260
--> 261         serialized_value = self.get_serialization_context().serialize(value)
    262         # This *must* be the first place that we construct this python
    263         # ObjectRef because an entry with 0 local references is created when

/usr/local/lib/python3.7/dist-packages/ray/serialization.py in serialize(self, value)
    322             return RawSerializedObject(value)
    323         else:
--> 324             return self._serialize_to_msgpack(value)

/usr/local/lib/python3.7/dist-packages/ray/serialization.py in _serialize_to_msgpack(self, value)
    302             metadata = ray_constants.OBJECT_METADATA_TYPE_PYTHON
    303             pickle5_serialized_object = \
--> 304                 self._serialize_to_pickle5(metadata, python_objects)
    305         else:
    306             pickle5_serialized_object = None

/usr/local/lib/python3.7/dist-packages/ray/serialization.py in _serialize_to_pickle5(self, metadata, value)
    262         except Exception as e:
    263             self.get_and_clear_contained_object_refs()
--> 264             raise e
    265         finally:
    266             self.set_out_of_band_serialization()

/usr/local/lib/python3.7/dist-packages/ray/serialization.py in _serialize_to_pickle5(self, metadata, value)
    259             self.set_in_band_serialization()
    260             inband = pickle.dumps(
--> 261                 value, protocol=5, buffer_callback=writer.buffer_callback)
    262         except Exception as e:
    263             self.get_and_clear_contained_object_refs()

/usr/local/lib/python3.7/dist-packages/ray/cloudpickle/cloudpickle_fast.py in dumps(obj, protocol, buffer_callback)
     71             file, protocol=protocol, buffer_callback=buffer_callback
     72         )
---> 73         cp.dump(obj)
     74         return file.getvalue()
     75

/usr/local/lib/python3.7/dist-packages/ray/cloudpickle/cloudpickle_fast.py in dump(self, obj)
    578     def dump(self, obj):
    579         try:
--> 580             return Pickler.dump(self, obj)
    581         except RuntimeError as e:
    582             if "recursion" in e.args[0]:

/usr/local/lib/python3.7/dist-packages/pyarrow/io.pxi in pyarrow.lib.Buffer.__reduce_ex__()

AttributeError: module 'pickle' has no attribute 'PickleBuffer'
```
My environment set-up:
* Am using Google Colab
* Platform: Linux-5.4.109+-x86\_64-with-Ubuntu-18.04-bionic
* Python version: 3.7.10
* Transformers version: 4.6.1
* ray version: 1.3.0
What I have tried:
* Updating pickle
* Installed and imported pickle5 as pickle
* Made sure that I did not have a python file with the name of 'pickle' in my immediate directory
Where is this bug coming from and how can I resolve it?
|
2021/06/02
|
[
"https://Stackoverflow.com/questions/67798070",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7254514/"
] |
I also encountered this error on Google Colab while trying a ray tune hyperparameter search with the huggingface transformers library.
This helped me:
```
!pip install pickle5
```
Then
```
import pickle5 as pickle
```
After the first run you will get the pickle warning telling you to restart the notebook, plus the same error. After a second "Restart and run all" the ray tune hyperparameter search begins.
|
Not a "real" solution but at least a workaround. For me this issue was occurring on Python 3.7. Switching to Python 3.8 solved the issue.
|
66,559,058
|
I would like to convert a string `temp.filename.txt` to `temp\.filename\.txt` using python
Tried string replace method but the output is not as expected
```
filename = "temp.filename.txt"
filename.replace(".", "\.")
output: 'temp\\.filename\\.txt'
```
|
2021/03/10
|
[
"https://Stackoverflow.com/questions/66559058",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3572886/"
] |
`\` is a special character, which is *represented* as `\\` in the interactive output; this doesn't mean your string actually contains two `\` characters.
(As suggested by @saipy, if you *print* your string, only a single `\` shows up.)
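A small sketch that shows the difference between the repr and the printed value (the variable names are just illustrative):
```
filename = "temp.filename.txt"
escaped = filename.replace(".", "\\.")

print(repr(escaped))                 # 'temp\\.filename\\.txt' -- repr doubles each backslash
print(escaped)                       # temp\.filename\.txt     -- only one real backslash per dot
print(len(escaped) - len(filename))  # 2 -- exactly one extra character per dot
```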
|
```
filename = "temp.filename.txt"
result=filename.replace(".", "\.")
print(result)
```
[I stored the result in a variable (`result`) and it works fine; check this screenshot](https://i.stack.imgur.com/klc8N.png)
|
57,379,888
|
The Source Code
---------------
I have a bit of code requiring that I call a property setter to test whether or not the locking functionality of a class is working (some functions of the class are `async`, requiring that a padlock boolean be set during their execution). The setter has been written to raise a `RuntimeError` if the lock has been set for the instance.
Here is the code:
```
@filename.setter
def filename(self, value):
if not self.__padlock:
self.__filename = value
else:
self.__events.on_error("data_store is locked. you should be awaiting safe_unlock if you wish to "
"change the source.")
```
As you can see here, if `self.__padlock` is `True`, a `RuntimeError` is raised. The issue arises when attempting to assert the setter with python `unittest`.
The Problem
===========
It appears that `unittest` lacks the functionality needed to assert whether or not a property setter raises an exception.
The obvious attempt at using `assertRaises` doesn't work:
```
# Non-working concept
self.assertRaises(RuntimeError, my_object.filename, "testfile.txt")
```
The Question
============
How does one assert that a class's property setter will raise a given exception in a python `unittest.TestCase` method?
|
2019/08/06
|
[
"https://Stackoverflow.com/questions/57379888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7423333/"
] |
You need to actually *invoke* the setter, via an assignment. This is simple to do, as long as you use `assertRaises` as a context manager.
```
with self.assertRaises(RuntimeError):
my_object.filename = "testfile.txt"
```
---
If you couldn't do that, you would have to fall back to an explicit `try` statement (which gets tricky, because you need to handle both "no exception" and "exception other than `RuntimeError`" separately).
```
try:
my_object.filename = "testfile.txt"
except RuntimeError:
pass
except Exception:
raise AssertionError("something other than RuntimeError")
else:
raise AssertionError("no RuntimeError")
```
or (more) explicitly invoke the setter:
```
self.assertRaises(RuntimeError, setattr, myobject, 'filename', 'testfile.txt')
```
or worse, *explicitly* invoke the setter:
```
self.assertRaises(RuntimeError, type(myobject).filename.fset, myobject, 'testfile.txt')
```
In other words, three cheers for context managers!
|
You can use the `setattr` method like so:
```
self.assertRaises(ValueError, setattr, p, "name", None)
```
In the above example, we will try to set `p.name` equal to `None` and check if there is a `ValueError` raised.
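Put together as a complete test it might look like the sketch below; the `Person` class and its validating `name` setter are made up purely for illustration:
```
import unittest

class Person:
    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        if value is None:
            raise ValueError("name cannot be None")
        self._name = value

class PersonTest(unittest.TestCase):
    def test_setter_rejects_none(self):
        p = Person()
        # setattr invokes the property setter, so assertRaises can wrap it
        self.assertRaises(ValueError, setattr, p, "name", None)

if __name__ == "__main__":
    unittest.main()
```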
|
13,212,300
|
I don't know if this question has duplicates, but I haven't found one yet.
When using Python you can create a GUI quickly, but sometimes you cannot find a method to do what you want. For example, I have the following problem:
Let's suppose that there is a canvas called K with a rectangle with ID=1 (the canvas item id, not the memory id) in it.
If I want to redraw the item, I can delete it and then redraw it with new settings.
```
K.delete(1)
K.create_rectangle(x1,y1,x2,y2,options...)
```
Here is the problem: the object id changes. How can I redraw, move, resize, or simply change the rectangle with a method, without changing its id? For example:
```
K.foo(1,options....)
```
If there isn't such a method, then I would have to keep a list of the canvas object ids, but that is neither elegant nor fast. For example:
```
ItemIds=[None,None,etc...]
ItemIds[0]=K.create_rectangle(old options...)
K.delete(ItemIds[0])
ItemIds[0]=K.create_rectangle(new options...)
```
|
2012/11/03
|
[
"https://Stackoverflow.com/questions/13212300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1569222/"
] |
You can use [`Canvas.itemconfig`](https://web.archive.org/web/20201108093851id_/http://effbot.org/tkinterbook/canvas.htm#Tkinter.Canvas.itemconfig-method):
```
item = K.create_rectangle(x1,y1,x2,y2,options...)
K.itemconfig(item,options)
```
To move the item, you can use [`Canvas.move`](https://web.archive.org/web/20201108093851id_/http://effbot.org/tkinterbook/canvas.htm#Tkinter.Canvas.move-method)
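For instance, a one-line sketch (reusing `K` and `item` from above; the 10-pixel offset is just an example):
```
K.move(item, 10, 0)  # shift the rectangle 10 pixels to the right; the item id stays the same
```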
---
```
import Tkinter as tk
root = tk.Tk()
canvas = tk.Canvas(root)
canvas.pack()
item = canvas.create_rectangle(50, 25, 150, 75, fill="blue")
def callback():
canvas.itemconfig(item,fill='red')
button = tk.Button(root,text='Push me!',command=callback)
button.pack()
root.mainloop()
```
|
I searched around and found the perfect Tkinter method for resizing: `canvas.coords()` does the trick. Just feed it your new coordinates and it's good to go (tested on Python 3.4).
PS: don't forget that the first parameter is the item id.
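A minimal sketch of that, assuming `canvas` is an existing `Canvas` and `item` holds the rectangle's id (the coordinates are made up):
```
canvas.coords(item, 10, 20, 110, 70)  # resize/reposition in place; the item id does not change
```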
|
4,156,464
|
I need to parse a series of short strings that are comprised of 3 parts: a question and 2 possible answers. The string will follow a consistent format:
This is the question "answer\_option\_1 is in quotes" "answer\_option\_2 is in quotes"
I need to identify the question part and the two possible answer choices that are in single or double quotes.
Ex.:
What color is the sky today? "blue" or "grey"
Who will win the game 'Michigan' 'Ohio State'
How do I do this in python?
|
2010/11/11
|
[
"https://Stackoverflow.com/questions/4156464",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504732/"
] |
```
>>> import re
>>> s = "Who will win the game 'Michigan' 'Ohio State'"
>>> re.match(r'(.+)\s+([\'"])(.+?)\2\s+([\'"])(.+?)\4', s).groups()
('Who will win the game', "'", 'Michigan', "'", 'Ohio State')
```
|
One possibility is that you can use regex.
```
import re
robj = re.compile(r'^(.*) [\"\'](.*)[\"\'].*[\"\'](.*)[\"\']')
str1 = "Who will win the game 'Michigan' 'Ohio State'"
r1 = robj.match(str1)
print r1.groups()
str2 = 'What color is the sky today? "blue" or "grey"'
r2 = robj.match(str2)
r2.groups()
```
Output:
```
('Who will win the game', 'Michigan', 'Ohio State')
('What color is the sky today?', 'blue', 'grey')
```
|
4,156,464
|
I need to parse a series of short strings that are comprised of 3 parts: a question and 2 possible answers. The string will follow a consistent format:
This is the question "answer\_option\_1 is in quotes" "answer\_option\_2 is in quotes"
I need to identify the question part and the two possible answer choices that are in single or double quotes.
Ex.:
What color is the sky today? "blue" or "grey"
Who will win the game 'Michigan' 'Ohio State'
How do I do this in python?
|
2010/11/11
|
[
"https://Stackoverflow.com/questions/4156464",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504732/"
] |
If your format is as simple as you say (i.e. *not* as in your examples), you don't need regex. Just `split` the line:
```
>>> line = 'What color is the sky today? "blue" "grey"'.strip('"')
>>> questions, answers = line.split('"', 1)
>>> answer1, answer2 = answers.split('" "')
>>> questions
'What color is the sky today? '
>>> answer1
'blue'
>>> answer2
'grey'
```
|
One possibility is that you can use regex.
```
import re
robj = re.compile(r'^(.*) [\"\'](.*)[\"\'].*[\"\'](.*)[\"\']')
str1 = "Who will win the game 'Michigan' 'Ohio State'"
r1 = robj.match(str1)
print r1.groups()
str2 = 'What color is the sky today? "blue" or "grey"'
r2 = robj.match(str2)
r2.groups()
```
Output:
```
('Who will win the game', 'Michigan', 'Ohio State')
('What color is the sky today?', 'blue', 'grey')
```
|
4,156,464
|
I need to parse a series of short strings that are comprised of 3 parts: a question and 2 possible answers. The string will follow a consistent format:
This is the question "answer\_option\_1 is in quotes" "answer\_option\_2 is in quotes"
I need to identify the question part and the two possible answer choices that are in single or double quotes.
Ex.:
What color is the sky today? "blue" or "grey"
Who will win the game 'Michigan' 'Ohio State'
How do I do this in python?
|
2010/11/11
|
[
"https://Stackoverflow.com/questions/4156464",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504732/"
] |
```
>>> import re
>>> s = "Who will win the game 'Michigan' 'Ohio State'"
>>> re.match(r'(.+)\s+([\'"])(.+?)\2\s+([\'"])(.+?)\4', s).groups()
('Who will win the game', "'", 'Michigan', "'", 'Ohio State')
```
|
Pyparsing will give you a solution that will adapt to some variability in the input text:
```
questions = """\
What color is the sky today? "blue" or "grey"
Who will win the game 'Michigan' 'Ohio State'""".splitlines()
from pyparsing import *
quotedString.setParseAction(removeQuotes)
q_and_a = SkipTo(quotedString)("Q") + delimitedList(quotedString, Optional("or"))("A")
for qn in questions:
print qn
qa = q_and_a.parseString(qn)
print "qa.Q", qa.Q
print "qa.A", qa.A
print
```
Will print:
```
What color is the sky today? "blue" or "grey"
qa.Q What color is the sky today?
qa.A ['blue', 'grey']
Who will win the game 'Michigan' 'Ohio State'
qa.Q Who will win the game
qa.A ['Michigan', 'Ohio State']
```
|
4,156,464
|
I need to parse a series of short strings that are comprised of 3 parts: a question and 2 possible answers. The string will follow a consistent format:
This is the question "answer\_option\_1 is in quotes" "answer\_option\_2 is in quotes"
I need to identify the question part and the two possible answer choices that are in single or double quotes.
Ex.:
What color is the sky today? "blue" or "grey"
Who will win the game 'Michigan' 'Ohio State'
How do I do this in python?
|
2010/11/11
|
[
"https://Stackoverflow.com/questions/4156464",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504732/"
] |
If your format is as simple as you say (i.e. *not* as in your examples), you don't need regex. Just `split` the line:
```
>>> line = 'What color is the sky today? "blue" "grey"'.strip('"')
>>> questions, answers = line.split('"', 1)
>>> answer1, answer2 = answers.split('" "')
>>> questions
'What color is the sky today? '
>>> answer1
'blue'
>>> answer2
'grey'
```
|
Pyparsing will give you a solution that will adapt to some variability in the input text:
```
questions = """\
What color is the sky today? "blue" or "grey"
Who will win the game 'Michigan' 'Ohio State'""".splitlines()
from pyparsing import *
quotedString.setParseAction(removeQuotes)
q_and_a = SkipTo(quotedString)("Q") + delimitedList(quotedString, Optional("or"))("A")
for qn in questions:
print qn
qa = q_and_a.parseString(qn)
print "qa.Q", qa.Q
print "qa.A", qa.A
print
```
Will print:
```
What color is the sky today? "blue" or "grey"
qa.Q What color is the sky today?
qa.A ['blue', 'grey']
Who will win the game 'Michigan' 'Ohio State'
qa.Q Who will win the game
qa.A ['Michigan', 'Ohio State']
```
|
41,535,881
|
I'm new to Conda package management and I want to get the latest version of Python to use f-strings in my code. Currently my version is (`python -V`):
```
Python 3.5.2 :: Anaconda 4.2.0 (x86_64)
```
How would I upgrade to Python 3.6?
|
2017/01/08
|
[
"https://Stackoverflow.com/questions/41535881",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4343241/"
] |
I found [this page](https://support.anaconda.com/customer/en/portal/articles/2797011-updating-anaconda-to-python-3-6) with detailed instructions to upgrade Anaconda to a major newer version of Python (from Anaconda 4.0+). First,
```
conda update conda
conda remove argcomplete conda-manager
```
I also had to `conda remove` some packages not on the official list:
* backports\_abc
* beautiful-soup
* blaze-core
Depending on packages installed on your system, you may get additional `UnsatisfiableError` errors - simply add those packages to the remove list. Next, install the version of Python,
```
conda install python==3.6
```
which takes a while; afterwards a message indicated I should run `conda install anaconda-client`, so I did
```
conda install anaconda-client
```
which said it's already there. Finally, following the directions,
```
conda update anaconda
```
I did this in the Windows 10 command prompt, but things should be similar in Mac OS X.
|
The only solution that worked was to create a new conda env with the name you want (you will, unfortunately, have to delete the old one to keep the name). Then create a new env with a new python version and re-run your `install.sh` script with the conda/pip installs (or the yaml file or whatever you use to keep your requirements):
```
conda remove --name original_name --all
conda create --name original_name python=3.8
sh install.sh # or whatever you usually do to install dependencies
```
Doing `conda install python=3.8` doesn't work for me. Also, why do you want 3.6? Move forward with the world ;)
---
Note: the below doesn't work
=========================
If you want to update the Python version of your previous conda env, what you can also do is the following (more complicated than it should be because [you cannot rename envs in conda](https://stackoverflow.com/questions/42231764/how-can-i-rename-a-conda-environment)):
1. create a temporary new location for your current env:
```
conda create --name temporary_env_name --clone original_env_name
```
2. delete the original env (so that the new env can have that name):
```
conda deactivate
conda remove --name original_env_name --all # or its alias: `conda env remove --name original_env_name`
```
3. then create the new empty env with the python version you want and clone the original env:
```
conda create --name original_env_name python=3.8 --clone temporary_env_name
```
|
41,535,881
|
I'm new to Conda package management and I want to get the latest version of Python to use f-strings in my code. Currently my version is (`python -V`):
```
Python 3.5.2 :: Anaconda 4.2.0 (x86_64)
```
How would I upgrade to Python 3.6?
|
2017/01/08
|
[
"https://Stackoverflow.com/questions/41535881",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4343241/"
] |
In the past, I have found it quite difficult to try to upgrade in-place.
Note: my use-case for Anaconda is as an all-in-one Python environment. I don't bother with separate virtual environments. If you're using `conda` to create environments, this may be destructive because `conda` creates environments with hard-links inside your `Anaconda/envs` directory.
So if you use environments, you may first want to [export your environments](https://conda.io/docs/user-guide/tasks/manage-environments.html#exporting-the-environment-file). After activating your environment, do something like:
```
conda env export > environment.yml
```
After backing up your environments (if necessary), you may remove your old Anaconda (it's very simple to uninstall Anaconda):
```
$ rm -rf ~/anaconda3/
```
and replace it by downloading the new Anaconda, e.g. Linux, 64 bit:
```
$ cd ~/Downloads
$ wget https://repo.continuum.io/archive/Anaconda3-4.3.0-Linux-x86_64.sh
```
([see here for a more recent one](https://repo.continuum.io/archive/)),
and then executing it:
```
$ bash Anaconda3-4.3.0-Linux-x86_64.sh
```
|
This is how I managed to get Python 3.9 in Anaconda on Windows 10 (currently there is no direct support; in future there will be for sure).
**Note:** I needed extra packages, so I installed them; install only what you need.
```
conda create --name e39 python=3.9 --channel conda-forge
```
**Update**
Python 3.9 is now available with conda; use the command below (replace `<env_name>` with the name you want):
`conda create --name <env_name> python=3.9`
It will create your Python 3.9 virtual environment.
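As a sketch of the full sequence (the env name `py39` is just an example):
```
conda create --name py39 python=3.9
conda activate py39
python --version   # should report Python 3.9.x
```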
|
41,535,881
|
I'm new to Conda package management and I want to get the latest version of Python to use f-strings in my code. Currently my version is (`python -V`):
```
Python 3.5.2 :: Anaconda 4.2.0 (x86_64)
```
How would I upgrade to Python 3.6?
|
2017/01/08
|
[
"https://Stackoverflow.com/questions/41535881",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4343241/"
] |
The only solution that worked was to create a new conda env with the name you want (you will, unfortunately, have to delete the old one to keep the name). Then create a new env with a new python version and re-run your `install.sh` script with the conda/pip installs (or the yaml file or whatever you use to keep your requirements):
```
conda remove --name original_name --all
conda create --name original_name python=3.8
sh install.sh # or whatever you usually do to install dependencies
```
Doing `conda install python=3.8` doesn't work for me. Also, why do you want 3.6? Move forward with the world ;)
---
Note: the below doesn't work
=========================
If you want to update the Python version of your previous conda env, what you can also do is the following (more complicated than it should be because [you cannot rename envs in conda](https://stackoverflow.com/questions/42231764/how-can-i-rename-a-conda-environment)):
1. create a temporary new location for your current env:
```
conda create --name temporary_env_name --clone original_env_name
```
2. delete the original env (so that the new env can have that name):
```
conda deactivate
conda remove --name original_env_name --all # or its alias: `conda env remove --name original_env_name`
```
3. then create the new empty env with the python version you want and clone the original env:
```
conda create --name original_env_name python=3.8 --clone temporary_env_name
```
|
1. Open Anaconda Powershell Prompt with **administrator user.**
2. Type in `conda update python`.
3. Wait about 10 min, in this process you may need to type in `y` in some time.
4. After completing, check your python version in conda by typing `python --version`
5. If it is the newest version, then you can restart your computer.
|
41,535,881
|
I'm new to Conda package management and I want to get the latest version of Python to use f-strings in my code. Currently my version is (`python -V`):
```
Python 3.5.2 :: Anaconda 4.2.0 (x86_64)
```
How would I upgrade to Python 3.6?
|
2017/01/08
|
[
"https://Stackoverflow.com/questions/41535881",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4343241/"
] |
I found [this page](https://support.anaconda.com/customer/en/portal/articles/2797011-updating-anaconda-to-python-3-6) with detailed instructions to upgrade Anaconda to a major newer version of Python (from Anaconda 4.0+). First,
```
conda update conda
conda remove argcomplete conda-manager
```
I also had to `conda remove` some packages not on the official list:
* backports\_abc
* beautiful-soup
* blaze-core
Depending on packages installed on your system, you may get additional `UnsatisfiableError` errors - simply add those packages to the remove list. Next, install the version of Python,
```
conda install python==3.6
```
which takes a while; afterwards a message indicated I should run `conda install anaconda-client`, so I did
```
conda install anaconda-client
```
which said it's already there. Finally, following the directions,
```
conda update anaconda
```
I did this in the Windows 10 command prompt, but things should be similar in Mac OS X.
|
This is how I managed to get Python 3.9 in Anaconda on Windows 10 (currently there is no direct support; in future there will be for sure).
**Note:** I needed extra packages, so I installed them; install only what you need.
```
conda create --name e39 python=3.9 --channel conda-forge
```
**Update**
Python 3.9 is now available with conda; use the command below (replace `<env_name>` with the name you want):
`conda create --name <env_name> python=3.9`
It will create your Python 3.9 virtual environment.
|
41,535,881
|
I'm new to Conda package management and I want to get the latest version of Python to use f-strings in my code. Currently my version is (`python -V`):
```
Python 3.5.2 :: Anaconda 4.2.0 (x86_64)
```
How would I upgrade to Python 3.6?
|
2017/01/08
|
[
"https://Stackoverflow.com/questions/41535881",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4343241/"
] |
If you want to upgrade the Python version inside your existing environment, activate it first with `conda activate <env_name>` and then do:
```
conda install -c anaconda python=<version>
```
You might also need to update the dependencies with
```
conda update --all
```
|
Best method I found:
```
source activate old_env
conda env export > old_env.yml
```
Then process it with something like this:
```
with open('old_env.yml', 'r') as fin, open('new_env.yml', 'w') as fout:
for line in fin:
if 'py35' in line: # replace by the version you want to supersede
line = line[:line.rfind('=')] + '\n'
fout.write(line)
```
then edit manually the first (`name: ...`) and last line (`prefix: ...`) to reflect your new environment name and run:
```
conda env create -f new_env.yml
```
you might need to remove or manually change the version pin of a few packages for which the pinned version from `old_env` is incompatible or missing for the new python version.
I wish there was a built-in, easier way...
|
41,535,881
|
I'm new to Conda package management and I want to get the latest version of Python to use f-strings in my code. Currently my version is (`python -V`):
```
Python 3.5.2 :: Anaconda 4.2.0 (x86_64)
```
How would I upgrade to Python 3.6?
|
2017/01/08
|
[
"https://Stackoverflow.com/questions/41535881",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4343241/"
] |
Anaconda has not updated python internally to 3.6.
a) Method 1
1. If you want to update, type `conda update python`
2. To update anaconda, type `conda update conda`
3. If you want to upgrade between major python versions, like 3.5 to 3.6, you'll have to do
```
conda install python=$pythonversion$
```
b) Method 2 - Create a new environment (Better Method)
```
conda create --name py36 python=3.6
```
c) To get the absolute latest python (3.6.5 at time of writing)
```
conda create --name py365 python=3.6.5 --channel conda-forge
```
You can see all this from [here](http://conda.pydata.org/docs/using/pkgs.html#package-update)
Also, refer to this for force [upgrading](https://www.scivision.dev/switch-anaconda-latest-python-3/)
EDIT: Anaconda now has a Python 3.6 version [here](https://www.continuum.io/downloads)
|
The only solution that worked was to create a new conda env with the name you want (you will, unfortunately, have to delete the old one to keep the name). Then create a new env with a new python version and re-run your `install.sh` script with the conda/pip installs (or the yaml file or whatever you use to keep your requirements):
```
conda remove --name original_name --all
conda create --name original_name python=3.8
sh install.sh # or whatever you usually do to install dependencies
```
Doing `conda install python=3.8` doesn't work for me. Also, why do you want 3.6? Move forward with the world ;)
---
Note: the below doesn't work
=========================
If you want to update the Python version of your previous conda env, what you can also do is the following (more complicated than it should be because [you cannot rename envs in conda](https://stackoverflow.com/questions/42231764/how-can-i-rename-a-conda-environment)):
1. create a temporary new location for your current env:
```
conda create --name temporary_env_name --clone original_env_name
```
2. delete the original env (so that the new env can have that name):
```
conda deactivate
conda remove --name original_env_name --all # or its alias: `conda env remove --name original_env_name`
```
3. then create the new empty env with the python version you want and clone the original env:
```
conda create --name original_env_name python=3.8 --clone temporary_env_name
```
|
41,535,881
|
I'm new to Conda package management and I want to get the latest version of Python to use f-strings in my code. Currently my version is (`python -V`):
```
Python 3.5.2 :: Anaconda 4.2.0 (x86_64)
```
How would I upgrade to Python 3.6?
|
2017/01/08
|
[
"https://Stackoverflow.com/questions/41535881",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4343241/"
] |
I found [this page](https://support.anaconda.com/customer/en/portal/articles/2797011-updating-anaconda-to-python-3-6) with detailed instructions to upgrade Anaconda to a major newer version of Python (from Anaconda 4.0+). First,
```
conda update conda
conda remove argcomplete conda-manager
```
I also had to `conda remove` some packages not on the official list:
* backports\_abc
* beautiful-soup
* blaze-core
Depending on packages installed on your system, you may get additional `UnsatisfiableError` errors - simply add those packages to the remove list. Next, install the version of Python,
```
conda install python==3.6
```
which takes a while; afterwards a message indicated I should run `conda install anaconda-client`, so I did
```
conda install anaconda-client
```
which said it's already there. Finally, following the directions,
```
conda update anaconda
```
I did this in the Windows 10 command prompt, but things should be similar in Mac OS X.
|
If you want to upgrade the Python version inside your existing environment, activate it first with `conda activate <env_name>` and then do:
```
conda install -c anaconda python=<version>
```
You might also need to update the dependencies with
```
conda update --all
```
|
41,535,881
|
I'm new to Conda package management and I want to get the latest version of Python to use f-strings in my code. Currently my version is (`python -V`):
```
Python 3.5.2 :: Anaconda 4.2.0 (x86_64)
```
How would I upgrade to Python 3.6?
|
2017/01/08
|
[
"https://Stackoverflow.com/questions/41535881",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4343241/"
] |
I found [this page](https://support.anaconda.com/customer/en/portal/articles/2797011-updating-anaconda-to-python-3-6) with detailed instructions to upgrade Anaconda to a major newer version of Python (from Anaconda 4.0+). First,
```
conda update conda
conda remove argcomplete conda-manager
```
I also had to `conda remove` some packages not on the official list:
* backports\_abc
* beautiful-soup
* blaze-core
Depending on packages installed on your system, you may get additional `UnsatisfiableError` errors - simply add those packages to the remove list. Next, install the version of Python,
```
conda install python==3.6
```
which takes a while; afterwards a message indicated I should run `conda install anaconda-client`, so I did
```
conda install anaconda-client
```
which said it's already there. Finally, following the directions,
```
conda update anaconda
```
I did this in the Windows 10 command prompt, but things should be similar in Mac OS X.
|
I'm using a **Mac OS Mojave**
These 4 steps worked for me.
1. `conda update conda`
2. `conda install python=3.6`
3. `conda install anaconda-client`
4. `conda update anaconda`
|
41,535,881
|
I'm new to Conda package management and I want to get the latest version of Python to use f-strings in my code. Currently my version is (`python -V`):
```
Python 3.5.2 :: Anaconda 4.2.0 (x86_64)
```
How would I upgrade to Python 3.6?
|
2017/01/08
|
[
"https://Stackoverflow.com/questions/41535881",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4343241/"
] |
I'm using a **Mac OS Mojave**
These 4 steps worked for me.
1. `conda update conda`
2. `conda install python=3.6`
3. `conda install anaconda-client`
4. `conda update anaconda`
|
If you want to upgrade the Python version inside your existing environment, activate it first with `conda activate <env_name>` and then do:
```
conda install -c anaconda python=<version>
```
You might also need to update the dependencies with
```
conda update --all
```
|
41,535,881
|
I'm new to Conda package management and I want to get the latest version of Python to use f-strings in my code. Currently my version is (`python -V`):
```
Python 3.5.2 :: Anaconda 4.2.0 (x86_64)
```
How would I upgrade to Python 3.6?
|
2017/01/08
|
[
"https://Stackoverflow.com/questions/41535881",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4343241/"
] |
If you want to upgrade the Python version inside your existing environment, activate it first with `conda activate <env_name>` and then do:
```
conda install -c anaconda python=<version>
```
You might also need to update the dependencies with
```
conda update --all
```
|
This is how I managed to get Python 3.9 in Anaconda on Windows 10 (currently there is no direct support; in future there will be for sure).
**Note:** I needed extra packages, so I installed them; install only what you need.
```
conda create --name e39 python=3.9 --channel conda-forge
```
**Update**
Python 3.9 is now available with conda; use the command below (replace `<env_name>` with the name you want):
`conda create --name <env_name> python=3.9`
It will create your Python 3.9 virtual environment.
|
42,549,482
|
Here is the pseudo code:
```
class Foo (list):
def methods...
foo=Foo()
foo.readin()
rule='....'
bar=[for x in foo if x.match(rule)]
```
Here, bar is of a list, however I'd like it to be a instance of Foo, The only way I know is to create a for loop and append items one by one:
```
bar=Foo()
for item in foo:
if item.match(rule):
bar.append(item)
```
So I'd like to know if there is any more concise way or more pythonic way to do this ?
|
2017/03/02
|
[
"https://Stackoverflow.com/questions/42549482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4250879/"
] |
You can pass in a [generator expression](https://docs.python.org/3/tutorial/classes.html#generator-expressions) to the `Foo()` call:
```
bar = Foo(x for x in foo if x.match(rule))
```
(When passing a generator expression to a call, where it is the only argument, you can drop the parentheses you normally would put around a generator expression).
|
It seems another answer is gone because the original answerer deleted it, so I post the other approach here for completeness. If the original answerer restores his answer, I will delete this one myself.
Another way to do this is to use the built-in filter function. The code is:
```
bar = filter(lambda x: x.match(rule), foo)
```
I guess the original answerer deleted his answer because using the filter function is said to be discouraged. I had done some research before asking this question, but I think I learned a lot from that answer: I had tried to use the filter function myself and never figured out how to use it correctly, so that answer taught me how to read the manual properly. So, if the original answerer can see my post: thank you, I appreciate your help, and it surely helped me.
Updated:
As Martijn said in the comments, this is not a valid answer on its own (`filter` returns a list/iterator, not a `Foo`). I'll keep this answer because the discussion is useful, but it is not a valid way to solve my problem.
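For what it's worth, the two ideas can be combined. A sketch (assuming `Foo`, `foo` and `rule` as in the question) that keeps the `filter` call but still produces a `Foo` instance:
```
bar = Foo(filter(lambda x: x.match(rule), foo))
```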
|
51,011,204
|
I have a 1D array X with both +/- elements. I'm isolating their signs as follows:
```
idxN, idxP = X<0, X>=0
```
Now I want to create an array whose value depends on the sign of X. I was trying to compute this but it gives the captioned syntax error.
```
y(idxN) = [math.log(1+np.exp(x)) for x in X(idxN)]
y(idxP) = X(idxP)+[math.log(np.exp(-x)+1) for x in X(idxP)];
```
Is the LHS assignment the culprit?
Thanks.
[Edit] The full code is as follows:
```
y = np.zeros(X.shape)
idxN, idxP = X<0, X>=0
y(idxN) = [math.log(1+np.exp(x)) for x in X(idxN)]
y(idxP) = X(idxP)+[math.log(np.exp(-x)+1) for x in X(idxP)];
return y
```
The traceback is:
```
y(idxN) = [math.log(1+np.exp(x)) for x in X(idxN)]
File "<ipython-input-63-9a4488f04660>", line 1
y(idxN) = [math.log(1+np.exp(x)) for x in X(idxN)]
^
SyntaxError: can't assign to function call
```
|
2018/06/24
|
[
"https://Stackoverflow.com/questions/51011204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9473446/"
] |
In some programming languages, like Matlab, indexing is done with parentheses. In Python, indexes are written with square brackets.
If I have a list, `mylist = [1,2,3,4]`, I reference elements like this:
```
> mylist[1]
2
```
When you say `y(idxN)`, Python thinks you are trying to pass `idxN` as an argument to a function named `y`.
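A tiny sketch of the same idea with a boolean mask, which is what `idxN` is in the question (the array values are made up):
```
import numpy as np

X = np.array([-2.0, 0.5, 3.0])
idxN = X < 0          # boolean mask
print(X[idxN])        # [-2.]  -- square brackets, not parentheses
X[idxN] = 0.0         # assignment through a mask also uses square brackets
print(X)              # [0.  0.5 3. ]
```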
|
I got it to work like this:
```
import math
import numpy as np

y = np.zeros(X.shape)
idxN, idxP = X < 0, X >= 0
xn, xp = X[idxN], X[idxP]
# write the results back into y through the boolean masks
y[idxN] = [math.log(1 + np.exp(x)) for x in xn]
y[idxP] = xp + [math.log(np.exp(-x) + 1) for x in xp]
```
If there is a better way, please let me know. Thanks.
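One possible fully vectorised variant, sketched under the assumption that `X` is a NumPy float array; `np.log1p` keeps the sign-split form of the original numerically stable:
```
import numpy as np

y = np.zeros(X.shape)
neg, pos = X < 0, X >= 0
y[neg] = np.log1p(np.exp(X[neg]))            # log(1 + exp(x)) for negative x
y[pos] = X[pos] + np.log1p(np.exp(-X[pos]))  # same quantity, rewritten for positive x
```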
|
13,283,628
|
I have created a mezzanine project and its name is mezzanine-heroku-test
I created a Procfile with the following content:
**web: python manage.py run\_gunicorn -b "0.0.0.0:$PORT" -w 3**
Next, I accessed the website to test it and I received the error: Internal Server Error.
So, could you please help me deploy Mezzanine on Heroku step by step, or give me some suggestions?
Thank you so much.
|
2012/11/08
|
[
"https://Stackoverflow.com/questions/13283628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/875781/"
] |
Two possibilities:
1. There is a table within another schema ("database" in mysql terminology) which has a FK reference
2. The innodb internal data dictionary is out of sync with the mysql one.
You can see which table it was (one of them, anyway) by doing a "SHOW ENGINE INNODB STATUS" after the drop fails.
If it turns out to be the latter case, I'd dump and restore the whole server if you can.
MySQL 5.1 and above will give you the name of the table with the FK in the error message.
Also try this: please refer to this question on [stackoverflow](https://stackoverflow.com/questions/3334619/cannot-delete-or-update-a-parent-row-a-foreign-key-constraint-fails).
Disable foreign key checking:
```
SET FOREIGN_KEY_CHECKS=0;
```
Do make sure to re-enable it afterwards with `SET FOREIGN_KEY_CHECKS=1;`
|
Instead of using the default InnoDB storage engine, you can easily configure Django to use MyISAM. In the latter case, it wouldn't restrict you from performing various operations because of the relationships. Both have their positive and negative points.
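A hedged sketch of what that might look like in a Django `settings.py` (the database name and credentials are placeholders; on older MySQL versions the variable is `storage_engine` rather than `default_storage_engine`):
```
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "mezzanine_db",      # placeholder
        "USER": "dbuser",            # placeholder
        "PASSWORD": "dbpassword",    # placeholder
        "OPTIONS": {
            # ask MySQL to create new tables with MyISAM instead of InnoDB
            "init_command": "SET default_storage_engine=MYISAM",
        },
    }
}
```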
|
39,961,414
|
I am new to regex in Python and I'm wondering how to use it to store the integers (positive and negative) I want into a list.
For example
This is the data in a list.
```
data =
[u'\x1b[0m[\x1b[1m\x1b[0m\xbb\x1b[0m\x1b[36m]\x1b[0m (A=-5,B=5)',
u'\x1b[0m[\x1b[1m\x1b[0m\xbb\x1b[0m\x1b[36m]\x1b[0m (A=5,Y=5)',
u'\x1b[0m[\x1b[1m\x1b[10m\xbb\x1b[0m\x1b[36m]\x1b[0m : ']
```
How do I extract the integer values of A and B (negative and positive) and store them in a variable so that I can work with the numbers?
I tried something like this, but the list is empty:
```
for line in data[0]:
pattern = re.compile("([A-Z]=(-?\d+?),[A-Z]=(-?\d+?))")
store = pattern.findall(line)
print store
```
Thank you, I appreciate it.
|
2016/10/10
|
[
"https://Stackoverflow.com/questions/39961414",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6949864/"
] |
When you use `$(".form-control")`, jQuery selects all `.form-control` elements. But you need to select the target element using the `this` variable in the event function and use [`.prev()`](https://api.jquery.com/prev/) to select the previous element.
```js
$(".show").mousedown(function(){
$(this).prev().attr('type','text');
}).mouseup(function(){
$(this).prev().attr('type','password');
}).mouseout(function(){
$(this).prev().attr('type','password');
});
```
|
Just target the previous input instead of all inputs with the given class
```js
$(".form-control").on("keyup", function() {
if ($(this).val())
$(this).next(".show").show();
else
$(this).next(".show").hide();
}).trigger('keyup');
$(".show").mousedown(function() {
$(this).prev(".form-control").prop('type', 'text');
}).mouseup(function() {
$(this).prev(".form-control").prop('type', 'password');
}).mouseout(function() {
$(this).prev(".form-control").prop('type', 'password');
});
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<form name="resetting_form" method="post" action="">
<div class="form-group has-feedback">
<input type="password" id="password_first" required="required" placeholder="New Password" class="form-control">
<span class="show">show</span>
</div>
<div class="form-group has-feedback">
<input type="password" id="password_second" required="required" placeholder="Repeat Password" class="form-control">
<span class="show">show</span>
</div>
<input type="submit" class="btn btn-primary" value="Submit">
</form>
```
|