| date (string, length 10) | nb_tokens (int64, 60 to 629k) | text_size (int64, 234 to 1.02M) | content (string, 234 to 1.02M chars) |
|---|---|---|---|
| 2018/03/16 | 607 | 1,831 |
<issue_start>username_0: So, I need to change, throughout a codebase, all the places that look like:
```
obj && obj.property && obj.subproperty
```
with
```
_.get(obj, 'property.subproperty')
```
I believe I can find all such occurrences with grep and a regexp. Can somebody help me with the regexp?
Example of an occurrence:
```
if (thing.reported &&
thing.reported.payload &&
thing.reported.payload.metadata &&
thing.reported.payload.metadata.position) {
```
I ended up with a regexp like this:
```
/(.+?)[^a-zA-Z\d].?\1.(.+?)[^a-zA-Z\d].?\1.\2.(.+?)[^a-zA-Z\d]/gim
```<issue_comment>username_1: You could try:
```
search: obj && obj\.(\w+) && obj\.(\w+)
replace: _.get(obj, '$1.$2')
```
Does obj need to be different too in each case?
If the depth is variable, I don't think there's a general solution with regexes.
Upvotes: 0 <issue_comment>username_2: You could use a regex like this:
```
(\S+)\s*&&\s*\1\.\S+
```
This matches any object followed by `&&`, followed by the same object, a dot, and something else.
If you want to use it with grep, you need to remove the newlines first, because grep matches line by line.
```
cat test.js | tr -d '\n' | grep -P '(\S+)\s*&&\s*\1\.\S+'
```
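With GNU grep there is also a hedged, GNU-specific alternative: the `-z` flag reads NUL-separated records, so the pattern can match across newlines without the `tr` step:
```
grep -Pzo '(\S+)\s*&&\s*\1\.\S+' test.js
```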
Upvotes: 1 <issue_comment>username_3: In vim, this
```
:%s/\s*if\s*(\_.\{-} &&\_\s*\(.\{-}\)\.\(.*\)\s*)\s*{/_.get(\1, \2)/
```
will transform the example occurrence into:
```
_.get(thing, reported.payload.metadata.position)
```
To explain a bit:
* %s/ : to try the match across the whole file (important with multi-line matches)
* \s\* : means any number of white spaces (or tabs)
* \\_. : means any character, including newline
* \{-} : means that what precedes will be matched the least number of occurrences possible
* \. : means a literal dot
* \(\) : that's how to make capturing parentheses in vim
Upvotes: 1
| 2018/03/16 | 2,540 | 9,260 |
<issue_start>username_0: I have the following function, which is supposed to check through an array, sum up some values and then return a string with the sum (for Dialogflow/Google Assistant):
```
function howMuch(app) {
return bank.getTurnovers(accessToken)
.then((result) => {
const { turnovers } = JSON.parse(result);
let sum = turnovers
.filter((i) => {
if (i.tags.includes(tag)) {
console.log(i); // correctly prints relevant elements
return i;
}
})
.reduce((prev, j) => {
console.log(prev, j.amount) // correctly prints 0 7, 7 23
return prev + j.amount;
}, 0);
console.log(sum); // prints sum correctly: 30
return app.ask('You have spend ' + sum); // returns 0 for sum
})
.catch((error) => {
// handle error
});
};
```
The problem is however, the function only returns `You have spend 0`, where 0 is the initial value set in the reduce function, but the sum is actually 30. It just does not seem to wait for the reduce to finish.
What is the problem here?<issue_comment>username_1: `app.ask()` sends the response back to the user (and, since you're using `ask()` and not `tell()`, also says to keep the microphone open). It returns exactly what Express' `Response.send()` method returns.
It does **not** return the string to the calling function.
If you want the value, you shouldn't call `app.ask()` at this point. Just return the value (or a Promise that eventually resolves to that value) and have that call `app.ask()`.
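For illustration, a sketch of that restructuring (hedged: `bank`, `accessToken`, and `tag` are the names from the question; splitting the sum into a helper is an assumption):
```
// Compute the sum first, without touching the response object.
function sumSpending(accessToken, tag) {
  return bank.getTurnovers(accessToken)
    .then((result) => {
      const { turnovers } = JSON.parse(result);
      return turnovers
        .filter((i) => i.tags.includes(tag))
        .reduce((prev, j) => prev + j.amount, 0);
    });
}

// The handler consumes the resolved value and is the only place
// that calls app.ask().
function howMuch(app) {
  return sumSpending(accessToken, tag)
    .then((sum) => app.ask('You have spent ' + sum));
}
```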
Upvotes: 1 [selected_answer]<issue_comment>username_2: Congratulations, you've uncovered the single greatest misconception people have when using `Promises`: that a Promise can return a predictable value to its calling context.
As we know, the callback we pass to the `new Promise(callback)` constructor receives two arguments, each itself a callback, which are expected to be invoked with
1. resolve(result) - the result of the asynchronous operation
2. reject(error) - an error condition should that operation fail
Let's assume that we define a simple function that returns a string message wrapped in an object:
```
/**
* @typedef {Object} EchoResult
* @property {String} message
*
* @example
* {message: 'this is a message'}
*/
/**
* Wraps a message in a EchoResult.
* @param {String} msg - the message to wrap
* @return {EchoResult}
*/
const echo = function (msg) {
return {
message: msg
}
}
```
By calling
```
const result = echo('hello from echo!')
console.log(result)
```
we can expect
```
{ message: 'hello from echo!'}
```
to be printed to the console.
So far so good.
Now let's do the same thing wrapped in a Promise:
```
/**
* Wraps a msg in an object.
* @param {String} msg - the message to wrap
* @return {Promise.<EchoResult>}
*/
const promised_echo = function (msg) {
  return new Promise((resolve, reject) => {
    resolve({message: msg})
  })
}
```
Invoking this function will return a Promise which we expect will *ultimately* resolve into an `EchoResult`, so that by calling
```
promised_echo('hello from promised echo!')
.then(res => {
console.log(res)
})
```
we can expect
```
{ message: 'hello from promised echo!'}
```
to be printed to the console.
Woo hoo! \*triumphal fanfare\* We've used a Promise!
Now comes the part that confuses **a lot** of people.
What would you expect our `promised_echo` function to return? Let’s find out.
```
const result =
promised_echo('returns a Promise!')
console.log({result})
```
Yup! It returns a `Promise`!
```
{ result: Promise { <pending> } }
```
This is because we defined the `promised_echo` function above to do just that. Easy peasy.
But what would you expect `promised_echo` that has been chained with a `.then()` function to return?
```
const result =
promised_echo('hello from a Promise!')
.then(msg => {
return msg
})
console.log({result})
```
Oh! Surprise! It also returns a Promise!
```
{ result: Promise { <pending> } }
```
Fear not! If you expected it to return the string `hello from a Promise!`, you are not alone.
This makes perfect sense: because both the `then()` and `catch()` functions (known collectively as "thenables") are designed to be chained to Promises, and you can chain multiple thenables onto a Promise, a thenable cannot, by definition, return any useable, predictable value to its calling context!
In the OP’s example code, you have
```
.then((result) => {
// ...
console.log(sum); // prints sum correctly: 30
return app.ask('You have spend ' + sum); // returns 0 for sum
}
```
where the final thenable in your Promise chain attempts to return the result of calling your `app.ask()` function which, as we have seen, cannot be done.
So to be able to use a value passed to a thenable, it seems that you can only use it **inside** the thenable callback itself.
This is the *single greatest misunderstanding* in all of **Promise-dom**!
"But **WAIT**!”, as the early morning TV huckster says, “*There's MORE*!"
What about the other kind of thenable, the `.catch()` function?
Here we find another surprise!
It works **exactly** as we've discovered the `.then()` function does, in that it cannot return a value and moreover, *cannot throw an error we can use to locate the problem in our code*!
Let's redefine `promised_echo` to fail rather ungracefully:
```
const bad_promised_echo = function (msg) {
return new Promise((resolve, reject) => {
throw new Error(`FIRE! FAMINE! FLEE!! IT'S TEOTWAWKI!`)
})
}
// TEOTWAWKI = The End Of The World As We Know It!
```
Such that, running
```
bad_promised_echo('whoops!')
```
will fail with:
```
(node:40564) UnhandledPromiseRejectionWarning:
Unhandled promise rejection (rejection id: 1):
Error: FIRE! FAMINE! FLEE!! IT'S TEOTWAWKI!
```
Wait, what?
Where has my stacktrace gone?
I need my stacktrace so I can determine where in my code the error occurred! ("I need my pain! It's what makes me, *me*!" - paraphrased from <NAME>.)
We lose our stacktrace because, from the perspective of the Promise's callback function, we have no runtime context from which we can derive it. (More on this below.)
Oh, I know! I'll just trap the error using the other kind of thenable, the `.catch()` function.
You'd think so, but *no dice*! because we still lack any runtime context. I'll save you the example code this time. Trust me, or better yet, try it yourself! :)
Promises run outside of the usual flow of program control, in a dim twilit world of things that run asynchronously, things that we expect will complete sometime in...
(this is where I strike my masterful scientist pose, staring off into the distance with arm upraised and finger pointing, and ominously intone...)
***THE FUTURE!***
\*cue swoopy scifi theremin music\*.
And that's the really interesting, but *confusing*, thing about asynchronous programming: we run functions to accomplish some goal, but we have no way to know *when* they will finish, which is the very nature of asynchronicity.
Consider this code:
```
request.get('http://www.example.com').then(result => {
// do something with the result here
})
```
When might you expect it to complete?
Depending on several unknowable factors, such as network conditions, server load, etc., you ***cannot*** know when the get request will complete, unless you have some spooky precognitive ability.
To illustrate this, let's go back to our echo example and inject a little of that "unknowability" into our function by delaying the resolution of our Promise by a smidge:
```
let result
const promised_echo_in_the_future = function (msg) {
return new Promise(function(resolve, reject) {
setTimeout(() => { // make sure our function completes in the future
result = msg
resolve({
message: msg
})
}, 1) // just a smidge
})
}
```
And run it (I think you know where I'm going with this by now.)
```
promised_echo_in_the_future('hello from the future!')
console.log({result})
```
Since the resolution of our Promise occurs just a smidge after our console log function attempts to reference it, we can expect
```
{ result: undefined }
```
to be printed on the console.
OK, that's a lot to digest, and I commend you for your patience.
We have only one more thing to consider.
How about `async/await`? Can we use them to expose the resolved value of an async function so we can use it outside of that function’s context?
Unfortunately, no.
Because async/await is really just syntactic sugar used to hide the complexity of Promises by making them *appear* to operate synchronously,
```
const async_echo = async function (msg) {
let res = await promised_echo(msg)
return res
}
let result = async_echo('hello from async echo!');
console.log({result})
```
will print the expected result to the console
```
{ result: Promise { <pending> } }
```
I hope this clears things up a little.
The best advice I can give when using Promises is:
Always use any resolved result of invoking a Promise chain within the callbacks of the chain itself and handle any errors created in Promise chains the same way.
Upvotes: 1
| 2018/03/16 | 628 | 1,860 |
<issue_start>username_0: Who knows how to obtain the id\_token with `Keycloak`?
I have been working with `Keycloak` in `Java` (Spring, JEE) and postman.
The basics work fine, but I need the `id_token`, since some claims are not present in the `access_token` but are present in the `id_token`.
Using the `keycloak-core` library I could obtain the Keycloak context, but the id\_token attribute is always null.
Any ideas?<issue_comment>username_1: If you are using keycloak version 3.2.1,
then the mail chain below will help you.
Hi All
I am using the curl command below:
```
curl -k https://IP-ADDRESS:8443/auth/realms/Test123/protocol/openid-connect/token -d "grant_type=client_credentials" -d "client_id=SURE_APP" -d "client_secret=ca3c4212-f3e8-43a4-aa14-1011c7601c67"
```
In the above command's response the id\_token is missing, which is required for Kong to tell who I am.
In my keycloak `realm->client-> Full Scope Allowed ->True`
OK, I found it: we have to add
```
scope=openid
```
only then will it work.
Upvotes: 6 [selected_answer]<issue_comment>username_2: In keycloak 2.x the id\_token was inside the returned token object.
They removed it in keycloak 3.x.
Just add the following to your request:
```
scope: "openid"
```
as listed below to retain the id\_token
<http://lists.jboss.org/pipermail/keycloak-user/2018-February/013170.html>
Upvotes: 2 <issue_comment>username_3: I had the same thing with Keycloak 3.4.3 version.
I added `scope=openid` to my request as username_2 mentioned in his answer and it works.
Here is my request:
> curl -X POST -H "Content-Type:application/x-www-form-urlencoded" -d **"scope=openid"** -d "grant\_type=password" -d "client\_id=test" -d "username=<EMAIL>" -d "password=<PASSWORD>" '<https://YOUR-DOMAIN/realms/test123/protocol/openid-connect/token>'
Upvotes: 4
| 2018/03/16 | 609 | 1,702 |
<issue_start>username_0: Two tables:
Price list table PRICE\_LIST:
```
ITEM PRICE
MANGO 5
BANANA 2
APPLE 2.5
ORANGE 1.5
```
Records of sale REC\_SALE (list of transactions)
```
ITEM SELLLING_PRICE
MANGO 4
MANGO 3
BANANA 2
BANANA 1
ORANGE 0.5
ORANGE 4
```
Selecting records from REC\_SALE where items were sold for less than the PRICE listed in the PRICE\_LIST table:
```
SELECT A.*
FROM
(
select RS.ITEM,RS.SELLING_PRICE, PL.PRICE AS ACTUAL_PRICE
from REC_SALE RS,
PRICE_LIST PL
where RS.ITEM = PL.ITEM
) A
WHERE A.SELLING_PRICE < A.ACTUAL_PRICE ;
```
Result:
```
ITEM SELLING_PRICE PRICE
MANGO 4 5
MANGO 3 5
BANANA 1 2
ORANGE 0.5 1.5
```
I have these same two tables as dataframes in a Jupyter notebook.
What would be an equivalent Python statement for the SQL statement above, using pandas?<issue_comment>username_1: `merge` with `.loc`
```
df1.merge(df2).loc[lambda x : x.PRICE>x.SELLLING_PRICE]
Out[198]:
ITEM PRICE SELLLING_PRICE
0 MANGO 5.0 4.0
1 MANGO 5.0 3.0
3 BANANA 2.0 1.0
4 ORANGE 1.5 0.5
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Use [`merge`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html) with [`query`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html):
```
df = pd.merge(df1, df2, on='ITEM').query('PRICE > SELLLING_PRICE')
print (df)
ITEM PRICE SELLLING_PRICE
0 MANGO 5.0 4.0
1 MANGO 5.0 3.0
3 BANANA 2.0 1.0
4 ORANGE 1.5 0.5
```
Upvotes: 0
| 2018/03/16 | 576 | 1,539 |
<issue_start>username_0: I've a dynamic HTML page which has a table with multiple 'tbody' elements.
Now, I'm stuck with CSS as I need to show a vertical bar inside each of the 'tbody' as shown in the image attached.
How could I get this done? I tried using 'tr::after' and creating a bar, but it didn't help.
Could you please help me achieve this? Here's my HTML:
```html
<table>
  <tbody>
    <tr><td>Row 1 Column 1</td><td>Row 1 Column 2</td></tr>
    <tr><td>Row 2 Column 1</td><td>Row 2 Column 2</td></tr>
    <tr><td colspan="2">Row 3</td></tr>
  </tbody>
  <tbody>
    <tr><td>Row 1 Column 1</td><td>Row 1 Column 2</td></tr>
    <tr><td>Row 2 Column 1</td><td>Row 2 Column 2</td></tr>
    <tr><td colspan="2">Row 3</td></tr>
  </tbody>
  <tbody>
    <tr><td>Row 1 Column 1</td><td>Row 1 Column 2</td></tr>
    <tr><td>Row 2 Column 1</td><td>Row 2 Column 2</td></tr>
    <tr><td colspan="2">Row 3</td></tr>
  </tbody>
</table>
```
[](https://i.stack.imgur.com/cYd9d.png)
| 2018/03/16 | 1,424 | 4,063 |
<issue_start>username_0: I am a newbie in Java. I am trying to sort the array arr[] according to the values of the array val[], and it should maintain the insertion order.
```
int arr[] = {2,3,2,4,5,12,2,3,3,3,12};
int val[] = {3,4,3,1,1,2,3,4,4,4,2};
```
I am using this:
```
ArrayList<Integer> al = new ArrayList<>();
for (int i = 0; i < arr.length; i++)
    al.add(arr[i]);
Collections.sort(al, (left, right) -> val[al.indexOf(left)] -
    val[al.indexOf(right)]);
```
My output should be
```
4 5 12 12 2 2 2 3 3 3 3
```<issue_comment>username_1: The `val` array seems to be one element longer.
```
int[] arr = {2, 3, 2, 4, 5, 12, 2, 3, 3, 3, 12};
int[] val = {3, 4, 3, 1, 1, 2, 2, 3, 4, 4, /*4,*/ 2};
int[] sortedArr = IntStream.range(0, arr.length)
.mapToObj(i -> new int[] {arr[i], val[i]})
.sorted((lhs, rhs) -> Integer.compare(lhs[1], rhs[1]))
.mapToInt(pair -> pair[0])
.toArray();
System.out.println(Arrays.toString(sortedArr));
```
which results in
```
[4, 5, 12, 2, 12, 2, 2, 3, 3, 3, 3]
```
As sorting is stable, one could either do two sorts or combine them:
```
int[] sortedArr = IntStream.range(0, arr.length)
.mapToObj(i -> new int[] {arr[i], val[i]})
.sorted((lhs, rhs) -> Integer.compare(lhs[0], rhs[0]))
.sorted((lhs, rhs) -> Integer.compare(lhs[1], rhs[1]))
.mapToInt(pair -> pair[0])
.toArray();
```
and then, voila
```
[4, 5, 2, 12, 12, 2, 2, 3, 3, 3, 3]
```
Upvotes: 1 <issue_comment>username_2: Just write your sorting algorithm of choice, compare values from `val` and sort both `arr` and `val` accordingly.
Solely for the sake of brevity, here's an example using bubble-sort:
```
static void sortByVal(int[] arr, int[] val) {
if (arr.length != val.length) { return; }
for (int i=0; i < val.length; i++) {
for (int j=1; j < (val.length-i); j++) {
if (val[j-1] > val[j]) {
int temp = val[j-1];
val[j-1] = val[j];
val[j] = temp;
temp = arr[j-1];
arr[j-1] = arr[j];
arr[j] = temp;
}
}
}
}
```
Note however that you usually shouldn't resort to reimplementing a sorting algorithm, but rather switch to appropriate data structures.
For instance, instead of using a key-array and a values-array, use a single list containing key-value pairs. Such a list can then be sorted easily:
```
Collections.sort(list, (p0, p1) -> Integer.compare(p0.val, p1.val));
```
Upvotes: 1 <issue_comment>username_3: Your two arrays are different lengths, so this fails, but if you fix that problem this should work:
```
static <T extends Comparable<T>> List<Integer> getSortOrder(List<T> list) {
    // Ints in increasing order from 0. One for each entry in the list.
    List<Integer> order = IntStream.rangeClosed(0, list.size() - 1).boxed().collect(Collectors.toList());
    Collections.sort(order, (o1, o2) -> {
        // Comparing the contents of the list at the position of the integer.
        return list.get(o1).compareTo(list.get(o2));
    });
    return order;
}
// Array form.
static <T extends Comparable<T>> List<Integer> getSortOrder(T[] list) {
    return getSortOrder(Arrays.asList(list));
}
static <T> List<T> reorder(List<T> list, List<Integer> order) {
    return order.stream().map(i -> list.get(i)).collect(Collectors.toList());
}
// Array form.
static <T> T[] reorder(T[] list, List<Integer> order) {
    return reorder(Arrays.asList(list), order).toArray(list);
}
public void test(String[] args) {
    Integer arr[] = {2,3,2,4,5,12,2,3,3,3,12};
    Integer val[] = {3,4,3,1,1,2,2,3,4,4,4,2};
    List<Integer> sortOrder = getSortOrder(val);
    Integer[] reordered = reorder(arr, sortOrder);
    System.out.println(Arrays.toString(reordered));
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_4: You should look at your logic:
* output[0] = arr[3] = 4
* output[1] = arr[4] = 5
* output[2] = arr[3] again? Following your current pattern the value would be 4.
To get 12, the desired value, you would have to remove the previously used 4 and 5, but that contradicts what was done at output[1], because the 4 was not removed.
Upvotes: 0
| 2018/03/16 | 613 | 2,357 |
<issue_start>username_0: I am new to ionic application development, while searching through the net I saw a link which directed me to the **ionic creator**. Reading through it I was able to create an app.
After creating it, I downloaded the source files, but now I am trying to run the app on my computer. On the cmd screen, I typed `ionic serve` but I receive this error message: `[ERROR] Sorry! ionic serve can only be run in an Ionic project directory`. Could someone please take me through the process of running the app on my system?<issue_comment>username_1: You need to run that command from the project directory.
For example, if your application is in *C:\Users\UserName\Desktop\MyApp* you need to go to MyApp directory and there you can run `ionic serve` command.
Upvotes: 1 <issue_comment>username_2: 1. **Ionic v1**
Install the latest version of the local Ionic CLI installed by typing the command:
```
npm install -g ionic@latest
```
To start a new Ionic project, type the command:
```
ionic start myapp --type ionic1
```
Now go into the newly created myapp directory, and you will see a directory called www inside.
Delete everything inside of the www folder, and move the unzipped files and folders from STEP 1 into the www folder. The directory structure should look like:
[](https://i.stack.imgur.com/6pEQs.png)
Next, move the directory called SCSS-MOVEME up one directory, and rename it to scss. This directory should now sit side-by-side with the www directory.
Now, run the command npm install from directly inside the myapp folder. This will install gulp.js and a few handy tasks, such as gulp-sass and gulp-minify-css.
Finally, in the ionic.config.json file, add the JavaScript property "gulpStartupTasks": ["sass", "watch"].
2. **Ionic v3.x**
This step requires you to have the latest version of the local Ionic CLI installed.
To start a new Ionic project, type the command:
```
ionic start myapp
```
Now go into the newly created myapp directory, and you will see a directory called src inside.
Copy and paste the contents from your zip export into the src directory. You will want to overwrite the app directory, pages directory, and index.html
Source: [ZIP Export an Ionic Project](https://docs.usecreator.com/docs/zip-export-an-ionic-project)
Upvotes: 0
| 2018/03/16 | 1,156 | 4,030 |
<issue_start>username_0: I'm trying to match and group objects, based on a property on each object, and put them in their own array that I can use to sort later for some selection criteria. The sort method isn't an option for me, because I need to sort for 4 different values of the property.
**How can I dynamically create separate arrays for the objects who have a matching property?**
For example, I can do this if I know that the form.RatingNumber will be 1, 2, 3, or 4:
```
var ratingNumOne = [],
    ratingNumTwo = [],
    ratingNumThree = [],
    ratingNumFour = [];
forms.forEach(function(form) {
if (form.RatingNumber === 1){
ratingNumOne.push(form);
} else if (form.RatingNumber === 2){
ratingNumTwo.push(form)
} //and so on...
});
```
The problem is that the form.RatingNumber property could be any number, so hard-coding 1,2,3,4 will not work.
How can I group the forms dynamically, by each RatingNumber?<issue_comment>username_1: You could use an object and take `form.RatingNumber` as key.
If you have zero based values without gaps, you could use an array instead of an object.
```
var ratingNumOne = [],
ratingNumTwo = [],
ratingNumThree = [],
ratingNumFour = [],
ratings = { 1: ratingNumOne, 2: ratingNumTwo, 3: ratingNumThree, 4: ratingNumFour };
// usage
ratings[form.RatingNumber].push(form);
```
Upvotes: 0 <issue_comment>username_2: Try to create an object with the "RatingNumber" as property:
```
rating = {};
forms.forEach(function(form) {
if( !rating[form.RatingNumber] ){
rating[form.RatingNumber] = []
}
rating[form.RatingNumber].push( form )
})
```
Upvotes: 0 <issue_comment>username_3: Try this, it's a workaround:
```
forms.forEach(form => {
if (!window['ratingNumber' + form.RatingNumber]) window['ratingNumber' + form.RatingNumber] = [];
window['ratingNumber' + form.RatingNumber].push(form);
});
```
This will create the variables automatically. In the end it will look like this:
```
ratingNumber1 = [form, form, form];
ratingNumber2 = [form, form];
ratingNumber100 = [form];
```
Note, though, that `ratingNumber3` (for example) could also be undefined.
Just to have it said: your solution makes little sense, but this version at least works.
Upvotes: 0 <issue_comment>username_4: Try the [reduce](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce) function, something like this:
```
forms.reduce((result, form) => {
  result[form.RatingNumber] = result[form.RatingNumber] || [];
  result[form.RatingNumber].push(form);
  return result; // the accumulator must be returned for the next iteration
}, {})
```
The result would be an object whose keys are the rating numbers and whose values are the forms with that rating number.
That works dynamically for any count of rating numbers.
Upvotes: 1 <issue_comment>username_5: It does not matter what numbers you are getting in `RatingNumber`; just use it as an index. The result will be an object with the `RatingNumber` values as keys and, for each key, an array of the objects that have that `RatingNumber`.
```js
//example input
var forms = [{RatingNumber:5 }, {RatingNumber:6}, {RatingNumber:78}, {RatingNumber:6}];
var results = {};
$.each(forms, function(i, form){
if(!results[form.RatingNumber])
results[form.RatingNumber]=[];
results[form.RatingNumber].push(form);
});
console.log(results);
```
HIH
Upvotes: 0 <issue_comment>username_6: ```js
// Example input data
let forms = [{RatingNumber: 1}, {RatingNumber: 4}, {RatingNumber: 2}, {RatingNumber: 1}],
result = [];
forms.forEach(form => {
result[form.RatingNumber]
? result[form.RatingNumber].push(form)
: result[form.RatingNumber] = [form];
});
// Now `result` have all information. Next can do something else..
let getResult = index => {
let res = result[index] || [];
// Write your code here. For example VVVVV
console.log(`Rating ${index}: ${res.length} count`)
console.log(res)
}
getResult(1)
getResult(2)
getResult(3)
getResult(4)
```
Upvotes: 0
| 2018/03/16 | 1,209 | 4,199 |
<issue_start>username_0: I'm trying to use `ThreadPoolExecutor` in Python 3.6 on Windows 7 and it seems that the exceptions are silently ignored or stop program execution. Example code:
```
#!/usr/bin/env python3
from time import sleep
from concurrent.futures import ThreadPoolExecutor
EXECUTOR = ThreadPoolExecutor(2)
def run_jobs():
EXECUTOR.submit(some_long_task1)
EXECUTOR.submit(some_long_task2, 'hello', 123)
return 'Two jobs was launched in background!'
def some_long_task1():
print("Task #1 started!")
for i in range(10000000):
j = i + 1
1/0
print("Task #1 is done!")
def some_long_task2(arg1, arg2):
print("Task #2 started with args: %s %s!" % (arg1, arg2))
for i in range(10000000):
j = i + 1
print("Task #2 is done!")
if __name__ == '__main__':
run_jobs()
while True:
sleep(1)
```
The output:
```
Task #1 started!
Task #2 started with args: hello 123!
Task #2 is done!
```
It's hanging there until I kill it with `Ctrl`+`C`.
However, when I remove `1/0` from `some_long_task1`, Task #1 completes without problem:
```
Task #1 started!
Task #2 started with args: hello 123!
Task #1 is done!
Task #2 is done!
```
I need to capture the exceptions raised in functions running in `ThreadPoolExecutor` *somehow*.
Python 3.6 (Minconda), Windows 7 x64.<issue_comment>username_1: You can handle the exceptions with a [`try`](https://docs.python.org/2/tutorial/errors.html#handling-exceptions) statement. This is how your `some_long_task1` method could look like:
```py
def some_long_task1():
print("Task #1 started!")
try:
for i in range(10000000):
j = i + 1
1/0
except Exception as exc:
print('some_long_task1 generated an exception: {}'.format(exc))
print("Task #1 is done!")
```
Output when the method is used within your script:
```
Task #1 started!
Task #2 started with args: hello 123!
some_long_task1 generated an exception: integer division or modulo by zero
Task #1 is done!
Task #2 is done!
(the last while loop running...)
```
Upvotes: 2 <issue_comment>username_2: [`ThreadPoolExecutor.submit`](https://docs.python.org/3.6/library/concurrent.futures.html#concurrent.futures.Executor.submit) returns a [*future* object](https://docs.python.org/3.6/library/concurrent.futures.html#concurrent.futures.Future) that represents the result of the computation, once it's available. In order to not ignore the exceptions raised by the job, you need to actually access this result. First, you can change `run_job` to return the created futures:
```
def run_jobs():
fut1 = EXECUTOR.submit(some_long_task1)
fut2 = EXECUTOR.submit(some_long_task2, 'hello', 123)
return fut1, fut2
```
Then, have the top-level code [wait](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.wait) for the futures to complete, and access their results:
```
import concurrent.futures
if __name__ == '__main__':
futures = run_jobs()
concurrent.futures.wait(futures)
for fut in futures:
print(fut.result())
```
Calling `result()` on a future whose execution raised an exception will propagate the exception to the caller. In this case the `ZeroDivisionError` will get raised at top-level.
Upvotes: 6 [selected_answer]<issue_comment>username_3: As previous answers indicated, there are two ways to catch exceptions from a `ThreadPoolExecutor`: check the exception on the `future`, or log them. It all depends on how one wants to deal with the potential exceptions. In general, my rule of thumb is:
1. If I *only* want to record the error, including the stack trace, [`log.exception`](https://docs.python.org/3/library/logging.html#logging.Logger.exception) is the better choice. It takes extra logic to record the same amount of information via `future.exception()`.
2. If the code needs to handle different exceptions differently in the main thread, checking the status of the `future` is the way to go. Also, you might find the function [as\_completed](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.as_completed) useful in this circumstance; see the sketch below.
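A minimal sketch combining both ideas (hedged: the task functions are the ones from the question, and the logger setup is an assumption):
```
import logging
from concurrent.futures import ThreadPoolExecutor, as_completed

log = logging.getLogger(__name__)

def run_jobs():
    with ThreadPoolExecutor(2) as executor:
        futures = [executor.submit(some_long_task1),
                   executor.submit(some_long_task2, 'hello', 123)]
        for fut in as_completed(futures):
            try:
                fut.result()  # re-raises any exception the task raised
            except Exception:
                log.exception("A submitted task failed")
```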
Upvotes: 1
| 2018/03/16 | 972 | 3,465 |
<issue_start>username_0: I have a data frame in R that looks somewhat like this:
```
A | B
0 0
1 0
0 0
0 0
0 1
0 1
1 0
1 0
1 0
```
I now want to replace all sequences of more than one "1" in the columns so that only the first "1" is kept and the others are replaced by "0", so that the result looks like this
```
A | B
0 0
1 0
0 0
0 0
0 1
0 0
1 0
0 0
0 0
```
I hope you understood what I meant (English is not my mother tongue and especially the R "vocabulary" is a bit hard for me, which is probably why I couldn't find a solution through googling). Thank you in advance!
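For illustration, a hedged base-R sketch of this transformation (assuming the data frame is named `df` and its columns contain only 0/1 values): keep a 1 only when the original value directly above it is 0.
```
keep_first <- function(x) {
  prev <- c(0, head(x, -1))    # original value one row above; 0 for the first row
  x[x == 1 & prev == 1] <- 0   # zero out every 1 that directly follows a 1
  x
}
df[] <- lapply(df, keep_first)
```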
| 2018/03/16 | 1,441 | 5,165 |
<issue_start>username_0: I'm trying to use a codegen tool for Go to automatically generate some code based on the contents of other go files. The codegen tool will get standard arguments which can be deduced from the name of the file its generating and the name of the file that it's parsing. If I were doing it all manually, it would look like:
```
foo-tool -name FooInterface -file foo/api.go
foo-tool -name BarInterface -file foo/api.go
foo-tool -name BingInterface -file foo/bing.go
foo-tool -name BazInterface -file foo/baz.go
```
But I don't want to do it manually, I want to use Make! So I tried to accomplish the same thing with a Makefile and a pattern rule.
```
foo_FooInterface.go : foo/api.go
foo_BarInterface.go : foo/api.go
foo_BingInterface.go : foo/bing.go
foo_BazInterface.go : foo/baz.go
foo_%.go : %.go
$(eval foo_name=$(subst mock_,,$(subst .go,,$(@F))))
build-foo -name $(foo_name) -file $<
```
In my mind, the first 3 rules would set up the dependency graph, and the pattern rule would tell Make what to do with the dependencies. But when I try running `make foo_BarInterface.go`, I get `make: Nothing to be done for foo_BarInterface.go`. I understand why this happens, because Make is expecting to match foo\_FooInterface.go with FooInterface.go, but I don't want to restructure my project's files like that.
Is this possible, or do I need to do something like:
```
foo_FooInterface.go : foo/api.go
build-foo -name FooInterface -file foo/api.go
foo_BarInterface.go : foo/api.go
build-foo -name BarInterface -file foo/api.go
foo_BingInterface.go : foo/bing.go
build-foo -name BingInterface -file foo/bing.go
foo_BazInterface.go : foo/baz.go
build-foo -name BingInterface -file foo/baz.go
```
Which I really don't want to do, because new `Interface`s will be added as the codebase grows, and I don't want to require people to type all of that every time.
Edit: I wouldn't mind specifying the rule manually every time, but I need a rule that collects all the generated files together, and I don't want to specify every foo\_\*.go in that one. Is there a way to say "This rule depends on all rules (not files) matching a pattern?" I was able to do
```
foo_files := $(shell grep 'foo_\w\+.go' Makefile | cut -d : -f1)
```
But this seems bad to me.
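One hedged alternative sketch (assuming contributors update a single list when adding an `Interface`): keep the generated filenames in one variable, attach each file's source on a prerequisite-only line, and let a recipe-only pattern rule build them all (`$*` is the pattern stem, `$^` the prerequisites).
```
GENERATED := foo_FooInterface.go foo_BarInterface.go foo_BingInterface.go foo_BazInterface.go

.PHONY: generate
generate: $(GENERATED)

foo_FooInterface.go foo_BarInterface.go : foo/api.go
foo_BingInterface.go : foo/bing.go
foo_BazInterface.go : foo/baz.go

foo_%.go :
	build-foo -name $* -file $^
```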
| 2018/03/16 | 546 | 1,882 |
<issue_start>username_0: I am wondering if react-select could show HTML in rendered options. For example, if a fetched ajax option has `**text**`, I would like to see the text as bold in the dropdown menu instead of seeing `**text**`.
I read the documentation and I didn't find any option for this.
Thank you<issue_comment>username_1: You can rely on the `optionComponent` prop of react-select and the [dangerouslySetInnerHTML feature](https://reactjs.org/docs/dom-elements.html#dangerouslysetinnerhtml) of React
And give to `optionComponent` a component like this
```
const MyOptionComponent = props => <div dangerouslySetInnerHTML={{ __html: props.option.label }} />;
```
Upvotes: 1 <issue_comment>username_2: You can create either a custom `optionRenderer` or `optionComponent` for `react-select`.
I'd recommend you `optionRenderer` as it is more simple, if you only want to put convert to html. You can see an example here:
<https://github.com/jquintozamora/react-taxonomypicker/blob/master/app/src/components/TaxonomyPicker/TaxonomyPicker.tsx#L135-L148>
There is another example for `optionComponent` here (just in case you want extra functionality):
<https://github.com/JedWatson/react-select/blob/master/examples/src/components/CustomComponents.js#L15>
Upvotes: 1 <issue_comment>username_3: As simple as:
```
{ value: 'foo', label: <strong>foo</strong> }
```
No option component, no option renderer, just simple jsx.
Upvotes: 2 <issue_comment>username_4: I implemented the above solution and it broke the searchable feature; the label expects a string, not an element. There is actually a prop/function to do this:
```
formatOptionLabel={function(data) {
  return (
    <span dangerouslySetInnerHTML={{ __html: data.label }} />
  );
}}
```
Check out this post: [React-Select: How to maintain search-ability while passing HTML to the label value in options](https://stackoverflow.com/questions/63121386/react-select-how-to-maintain-search-ability-while-passing-html-to-the-label-val/63122700#63122700)
Upvotes: 3
| 2018/03/16 | 626 | 2,057 |
<issue_start>username_0: I built an app, and I am trying to call a method from another one, but it shows me this error:
>
> (int) in MainActivity cannot be applied
>
>
>
How can I fix this?
```
public void Method1 () {
alet(); //here it show the error
}
public void alet (int position) {
rutaGE = getemployeeName(position);
jornadaGE = getmailid(position);
}
```
and I can't delete `int position`, because I need I get `rutaGE` and `JornadaGE` in another method.<issue_comment>username_1: You can rely on the `optionComponent` prop of react-select and the [dangerouslySetInnerHTML feature](https://reactjs.org/docs/dom-elements.html#dangerouslysetinnerhtml) of React
And give to `optionComponent` a component like this
```
const MyOptionComponent = props => ;
```
Upvotes: 1 <issue_comment>username_2: You can create either a custom `optionRenderer` or `optionComponent` for `react-select`.
I'd recommend you `optionRenderer` as it is more simple, if you only want to put convert to html. You can see an example here:
<https://github.com/jquintozamora/react-taxonomypicker/blob/master/app/src/components/TaxonomyPicker/TaxonomyPicker.tsx#L135-L148>
There is another example for `optionComponent` here (just in case you want extra functionality):
<https://github.com/JedWatson/react-select/blob/master/examples/src/components/CustomComponents.js#L15>
Upvotes: 1 <issue_comment>username_3: As simple as:
```
{ value: 'foo', label: }
```
No option component, no option renderer, just simple jsx.
Upvotes: 2 <issue_comment>username_4: I implemented the above solution and it broke the searchable feature. The label expects a string, not an element. -- There is actually a prop/function to do this
```
formatOptionLabel={function(data) {
return (
);
}}
```
Check out this post: [React-Select: How to maintain search-ability while passing HTML to the label value in options](https://stackoverflow.com/questions/63121386/react-select-how-to-maintain-search-ability-while-passing-html-to-the-label-val/63122700#63122700)
Upvotes: 3
| 2018/03/16 | 578 | 2,011 |
<issue_start>username_0: I'm new to using IoT Hub from Azure and I am writing a connector which listens to an enterprise MQTT broker and forwards the messages to an IoT Hub server. The problem I'm facing is that I need to create a connection per device... Is there a way to avoid that?
Either by using the IOT Hub client SDK or any MQTT library (like paho)
It's not an option to program all the devices to connect directly to the IoT Hub.
| 2018/03/16 | 2,239 | 7,168 |
<issue_start>username_0: I want to retain CST time always with offset -6; at present I am getting `2018-03-15T05:08:53-05:00`.
But I want it with offset -6, like `2018-03-15T05:08:53-06:00`, throughout the year.
```
TimeZone tz = TimeZone.getDefault();
if (tz.inDaylightTime(new Date()))
{
    getCSTDate(cal);
    // I would like to change the logic here.
}
public XMLGregorianCalendar getCSTDate(Calendar cal)
{
    return xmlGregorianCalendar;
}
```
my input type : calendar
output : XMLGregorianCalendar<issue_comment>username_1: Then don't use a timezone that tracks Daylight Saving Time changes (which is probably the case of yours `TimeZone.getDefault()`).
If you want a fixed offset, you can do:
```
TimeZone tz = TimeZone.getTimeZone("GMT-06:00");
```
Not sure why you want that, because if you're dealing with timezones, you must consider DST effects. And `2018-03-15T05:08:53-06:00` is not the same instant as `2018-03-15T05:08:53-05:00`, so changing the offset while keeping all the other fields is usually wrong - as it's not clear why you want that and what you want to achieve, I can't give you more advice on that.
Upvotes: 2 <issue_comment>username_2: tl;dr
=====
If you want the current moment as seen through a fixed [offset-from-UTC](https://en.wikipedia.org/wiki/UTC_offset), use [`OffsetDateTime`](https://docs.oracle.com/javase/9/docs/api/java/time/OffsetDateTime.html) with [`ZoneOffset`](https://docs.oracle.com/javase/9/docs/api/java/time/ZoneOffset.html).
```
OffsetDateTime.now(
ZoneOffset.ofHours( -6 )
)
```
Details
=======
>
> always with offset -6
>
>
>
The [Answer by username_1](https://stackoverflow.com/a/49322992/642706) is correct: If you don’t want the effects of Daylight Saving Time (DST), don’t use a time zone that respects DST.
If you always want an offset-from-UTC fixed at six hours behind UTC, use an [`OffsetDateTime`](https://docs.oracle.com/javase/9/docs/api/java/time/OffsetDateTime.html).
```
ZoneOffset offset = ZoneOffset.ofHours( -6 ) ;
OffsetDateTime odt = OffsetDateTime.now( offset ) ; // Ignores DST, offset is fixed and unchanging.
```
Be clear that an offset is simply a number of hours, minutes, and seconds of displacement from UTC. In contrast, a time zone is a history of past, present, and future changes in offset used by the people of a particular region. So generally, you should be using a time zone rather than a mere offset. Your insistence on a fixed offset is likely unwise.
The 3-4 letter abbreviations such as `CST` are *not* time zones. They are used by mainstream media to give a rough idea about time zone and indicate if DST is in effect. But they are *not* standardized. [They are not even unique](https://www.timeanddate.com/time/zones/)! For example, CST means *Central Standard Time* as well as *China Standard Time* or *Cuba Standard Time*.
Use real time zones with names in the format of `continent/region`.
Avoid all the legacy date-time classes such as `TimeZone` now supplanted by the *java.time* classes. Specifically, `ZoneId`.
```
ZoneId z = ZoneId.of( "America/Chicago" ) ;
ZonedDateTime zdt = ZonedDateTime.now( z ) ; // Respects DST changes in offset.
```
If your real issue is wanting to detect DST to alter your logic, I suggest you rethink the problem. I suspect you are attacking the wrong issue. But if you insist, you can ask for the offset currently in effect on your `ZonedDateTime`, and you can ask a `ZoneId` if DST is in effect for any particular moment via the `ZoneRules` class.
```
ZoneOffset offsetInEffect = zdt.getOffset() ;
```
And…
```
Boolean isDstInEffect = zdt.getZone().getRules().isDaylightSavings( zdt.toInstant() ) ;
```
On that last line, note the incorrect use of plural with `s` on `isDaylightSavings`.
The [`XMLGregorianCalendar`](https://docs.oracle.com/javase/9/docs/api/javax/xml/datatype/XMLGregorianCalendar.html) class is part of the troublesome old legacy date-time classes, now supplanted by the *java.time* classes, specifically `ZonedDateTime`. To inter-operate with old code not yet updated to *java.time*, convert to the modern class via the legacy class `GregorianCalendar`.
```
ZonedDateTime zdt = myXmlCal.toGregorianCalendar().toZonedDateTime() ;
```
---
About *java.time*
=================
The [*java.time*](http://docs.oracle.com/javase/9/docs/api/java/time/package-summary.html) framework is built into Java 8 and later. These classes supplant the troublesome old [legacy](https://en.wikipedia.org/wiki/Legacy_system) date-time classes such as [`java.util.Date`](https://docs.oracle.com/javase/9/docs/api/java/util/Date.html), [`Calendar`](https://docs.oracle.com/javase/9/docs/api/java/util/Calendar.html), & [`SimpleDateFormat`](http://docs.oracle.com/javase/9/docs/api/java/text/SimpleDateFormat.html).
The [*Joda-Time*](http://www.joda.org/joda-time/) project, now in [maintenance mode](https://en.wikipedia.org/wiki/Maintenance_mode), advises migration to the [java.time](http://docs.oracle.com/javase/9/docs/api/java/time/package-summary.html) classes.
To learn more, see the [*Oracle Tutorial*](http://docs.oracle.com/javase/tutorial/datetime/TOC.html). And search Stack Overflow for many examples and explanations. Specification is [JSR 310](https://jcp.org/en/jsr/detail?id=310).
You may exchange *java.time* objects directly with your database. Use a [JDBC driver](https://en.wikipedia.org/wiki/JDBC_driver) compliant with [JDBC 4.2](http://openjdk.java.net/jeps/170) or later. No need for strings, no need for `java.sql.*` classes.
Where to obtain the java.time classes?
* [**Java SE 8**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_8), [**Java SE 9**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_9), and later
+ Built-in.
+ Part of the standard Java API with a bundled implementation.
+ Java 9 adds some minor features and fixes.
* [**Java SE 6**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_6) and [**Java SE 7**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_7)
+ Much of the java.time functionality is back-ported to Java 6 & 7 in [***ThreeTen-Backport***](http://www.threeten.org/threetenbp/).
* [**Android**](https://en.wikipedia.org/wiki/Android_(operating_system))
+ Later versions of Android bundle implementations of the java.time classes.
+ For earlier Android (<26), the [***ThreeTenABP***](https://github.com/JakeWharton/ThreeTenABP) project adapts *ThreeTen-Backport* (mentioned above). See [*How to use ThreeTenABP…*](http://stackoverflow.com/q/38922754/642706).
The [**ThreeTen-Extra**](http://www.threeten.org/threeten-extra/) project extends java.time with additional classes. This project is a proving ground for possible future additions to java.time. You may find some useful classes here such as [`Interval`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/Interval.html), [`YearWeek`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearWeek.html), [`YearQuarter`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearQuarter.html), and [more](http://www.threeten.org/threeten-extra/apidocs/index.html).
Upvotes: 1
| 2018/03/16 | 483 | 1,583 |
<issue_start>username_0: I see code that goes like this:
```
try {
// code
// ...
} catch (e) {
throw(e);
}
```
That's it. There's nothing else in that `catch` block.
Why would I want to do that? Isn't that exactly the same as letting the exception pass through?<issue_comment>username_1: There is no use in throwing from the catch block like that. The code below will print the error:
```
try {
if(x == "") throw "empty";
if(isNaN(x)) throw "not a number";
x = Number(x);
if(x < 5) throw "too low";
if(x > 10) throw "too high";
}
catch(err) {
console.log("Input is " + err);
}
```
If you put a throw at the top of the catch block, it will not print the error, as below:
```
try {
if(x == "") throw "empty";
if(isNaN(x)) throw "not a number";
x = Number(x);
if(x < 5) throw "too low";
if(x > 10) throw "too high";
}
catch(err) {
throw err;
console.log("Input is " + err);
}
```
Upvotes: -1 <issue_comment>username_2: The only thing it does is alter the stack trace.
```
function hurl() {
throw 'chunder';
}
```
This exception will appear to originate from within `hurl`:
```
hurl();
```
This exception will appear to originate from `(anonymous)` (or whatever scope it'll be):
```
try {
hurl();
} catch (e) {
throw e;
}
```
So it might be useful if you want to obscure the origins of an exception for whatever reason (can't think of why you might want to, but there you are). Other than that, there's no point to it.
Upvotes: 2 [selected_answer]
| 2018/03/16 | 448 | 1,901 |
<issue_start>username_0: Can anyone tell me: When I press Cancel in the Test Explorer window during a unit test run, is execution simply terminated there and then, or will the currently running test actually run to completion? (Or does it depend which test adapter is being used?)
I'm using VS2012. Some solutions use the Microsoft test adapter, some use NUnit (if it makes a difference).<issue_comment>username_1: >
> is execution simply terminated there and then, or will the currently running test actually run to completion?
>
>
>
Found documentation that states:
>
> This stops the test run at its current location. **The test being run is completed, but no additional tests are run**. Results from any already completed tests in the current test run are retained. The status of the test that was running when you clicked Stop changes to Aborted. The status of tests that have not yet completed changes from Pending to Not Executed.
>
>
>
*Emphasis mine*
So it appears what ever test is currently running when you cancel will go to completion if possible, but no further tests will be run.
Note that the above is quote for MSTest. I am uncertain about other test runners.
Upvotes: 0 <issue_comment>username_2: Two part answer...
1. It **does** depend on the particular adapter. The Test Explorer calls the adapter's `ITestExecutor.Cancel()` method and what happens after that point depends on the adapter.
2. With respect to NUnit...
If you are using the NUnit 3 VS adapter and running tests using the NUnit 3 framework, the test is cancelled immediately by aborting the thread(s) that is(are) executing the current test(tests). [Parens because you may be running multiple tests in parallel with NUnit 3.] No teardowns are run.
If you are using the NUnit [V2] VS adapter, the thread is killed immediately and all execution stops. No teardowns are run.
Upvotes: 3 [selected_answer]
| 2018/03/16 | 904 | 3,328 |
<issue_start>username_0: I've made the following code to run multiple threads through a non-thread-safe object (here an `ArrayList`):
```
import java.time.LocalDateTime;
import java.util.List;
import java.util.ArrayList;
public class A implements Runnable {
String name;
static List<Integer> list = new ArrayList<>();
private static Object lock = new Object();
A(String name) {
this.name = name;
}
@Override
public void run() {
for(int i = 1; i <= 1000; i++) {
list.add(i);
}
System.out.println(list.size());
}
}
```
I was expecting this code just to produce wrong answers, since `ArrayList` is not thread-safe. But instead I get this error:
```
Exception in thread "Thread-1" 1003
2401
2799
3799
java.lang.ArrayIndexOutOfBoundsException: 109
at java.util.ArrayList.add(Unknown Source)
at threads.A.run(A.java:16)5123
at java.lang.Thread.run(Unknown Source)
Exception in thread "Thread-5" java.lang.ArrayIndexOutOfBoundsException: 4164
at java.util.ArrayList.add(Unknown Source)
at threads.A.run(A.java:16)
at java.lang.Thread.run(Unknown Source)
6123
```
Can anyone explain to me what is leading to this specific error?<issue_comment>username_1: Well, you are using a non-thread-safe collection in a multi-threaded environment without any synchronization.
Let's examine the `add` method, where you get the exception:
```
/**
* Appends the specified element to the end of this list.
*
* @param e element to be appended to this list
* @return true (as specified by {@link Collection#add})
*/
public boolean add(E e) {
ensureCapacityInternal(size + 1); // Increments modCount!!
elementData[size++] = e;
return true;
}
```
When multiple threads call this method at the same time, it is quite possible that `ensureCapacityInternal(size + 1)` verifies there is enough space for 1 new element, but then multiple threads try to add an element at the same time, so `elementData[size++]` throws `ArrayIndexOutOfBoundsException` for some of them.
Upvotes: 3 <issue_comment>username_2: `ArrayList` is not a thread-safe class.
The underlying *storage* for elements is an `Object[]` array. Any array requires an allocation of memory whose size is fixed when the array is created. However, when an `ArrayList` "wants" to add a new element beyond the underlying array's bounds, several things have to be done (without your knowledge): the underlying array gets a new (increased) length, every element of the old array is copied to the new array, **and** then the new element is added. So you **can** expect an `ArrayIndexOutOfBoundsException` when an `ArrayList` is used in a multi-threaded environment.
You are adding elements too fast so `ArrayList#add() -> grow() -> newCapacity()` can't calculate the correct capacity to allocate the memory for all of the elements coming in.
```
private void add(E e, Object[] elementData, int s) {
if (s == elementData.length)
elementData = grow();
elementData[s] = e;
size = s + 1;
}
```
At some point of time, the condition `s == elementData.length` inside `ArrayList#add` says that there is a space for a new element `A`. Immediately after that other threads put their elements into the list. Now there is no space for `A` and `elementData[s] = e;` throws an exception.
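For completeness, here is a minimal sketch of one possible fix; `Collections.synchronizedList` is my choice for illustration and is not from the question:

```
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class A implements Runnable {
    // Each add() now happens under the wrapper's internal lock,
    // so concurrent writers can no longer race inside grow().
    static List<Integer> list = Collections.synchronizedList(new ArrayList<>());

    @Override
    public void run() {
        for (int i = 1; i <= 1000; i++) {
            list.add(i);
        }
        // The printed size still depends on how many threads have
        // finished, but no exception is thrown and no element is lost.
        System.out.println(list.size());
    }
}
```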
Upvotes: 4 [selected_answer]
|
2018/03/16
| 973
| 2,592
|
<issue_start>username_0: ```
#include <iostream>
#include <string>
#include <vector>
#include <cstdlib>
using namespace std;
void print(vector<string> v) {
for (unsigned int i = 0; i < v.size(); i++) {
cout << "[" << i << "] " << v[i] << "\n";
}
}
int main(){
vector<string> v(5);
v[0] = "Egg";
v[1] = "Milk";
v[2] = "Sugar";
v[3] = "Chocolate";
v[4] = "Flour";
print(v);
system("pause");
}
```
How do I make a loop that searches for the item, "sugar" and replace it with "honey."? Sry, im new to vectors<issue_comment>username_1: If you want to replace the first instance of the string (if it exists) you can use `std::find` then assign to the iterator that is returned.
```
std::vector<std::string> v {"Egg", "Milk", "Sugar", "Chocolate", "Flour"};
auto itMatch = std::find(v.begin(), v.end(), "Sugar");
if (itMatch != v.end())
    *itMatch = "Honey";
```
If you'd like to replace all instances
```
std::replace(v.begin(), v.end(), "Sugar", "Honey");
```
Upvotes: 3 <issue_comment>username_2: You can use the standard algorithm `std::find`. For example
```
#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
int main()
{
std::vector<std::string> v =
{
"Egg", "Milk", "Sugar", "Chocolate", "Flour"
};
const char *src = "Sugar";
const char *dsn = "Honey";
auto it = std::find( v.begin(), v.end(), src );
if ( it != v.end() ) *it = dsn;
for ( const auto &s : v ) std::cout << s << ' ';
std::cout << std::endl;
return 0;
}
```
The program output is
```
Egg Milk Honey Chocolate Flour
```
If you want to replace all occurrences of "Sugar" then you can use the standard algorithm `std::replace`.
For example
```
#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
int main()
{
std::vector<std::string> v =
{
"Egg", "Milk", "Sugar", "Chocolate", "Flour", "Sugar"
};
const char *src = "Sugar";
const char *dsn = "Honey";
std::replace( v.begin(), v.end(), src, dsn );
for ( const auto &s : v ) std::cout << s << ' ';
std::cout << std::endl;
return 0;
}
```
The program output is
```
Egg Milk Honey Chocolate Flour Honey
```
If you mean the substitution only in the function `print` within the loop then the function can look the following way
```
#include <iostream>
#include <string>
#include <vector>
void print( const std::vector<std::string> &v,
const std::string &src = "Sugar",
const std::string &dsn = "Honey" )
{
for ( std::vector<std::string>::size_type i = 0; i < v.size(); i++ )
{
std::cout << "[" << i << "] " << ( v[i] == src ? dsn : v[i] ) << "\n";
}
}
int main()
{
std::vector<std::string> v =
{
"Egg", "Milk", "Sugar", "Chocolate", "Flour"
};
print( v );
return 0;
}
```
Its output is
```
[0] Egg
[1] Milk
[2] Honey
[3] Chocolate
[4] Flour
```
Upvotes: 0
|
2018/03/16
| 1,119
| 4,206
|
<issue_start>username_0: I subscribed to `ids` and `search` in the UI, but I wasn't getting any results, so I stepped through with the debugger and found out that the transformation is not getting triggered after the first time. So when I call `setIdsInput` the first time, `ids` gets updated, but for every call after the first one the transformation won't trigger. The same goes for `search`.
Any ideas what might possible go wrong?
```
class MyViewModel : ViewModel() {
private val repository = Repository.sharedInstance
var recentRadius: LiveData>?
var recentRoute: LiveData>?
init {
recentRadius = repository.recentRadius()
recentRoute = repository.recentRoute()
}
private val idsInput = MutableLiveData<String>()
fun setIdsInput(textId: String) {
idsInput.value = textId
}
val ids: LiveData> = Transformations.switchMap(idsInput) { id ->
repository.ids(id)
}
private val searchInput = MutableLiveData<Search>()
fun setSearchInput(search: Search) {
searchInput.value = search
}
val search: LiveData = Transformations.switchMap(searchInput) { search ->
when (search.type) {
SearchType.ID -> repository.id(search)
SearchType.RADIUS -> repository.radius(search)
SearchType.ROUTE -> repository.route(search)
}
}
}
```<issue_comment>username_1: The most common reason transformations don't get triggered is that there is no `Observer` observing them, or the input `LiveData` is not actually changing.
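For illustration, a minimal sketch of the missing piece; the activity/fragment side and the `viewModel` name are assumed, not taken from the question:

```
// Until something observes `ids`, the switchMap lambda above never
// runs, no matter how often setIdsInput() changes the source LiveData.
viewModel.ids.observe(this, Observer { results ->
    // react to each emission here
})
viewModel.setIdsInput("42") // now the transformation fires
```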
Upvotes: 7 [selected_answer]<issue_comment>username_2: Below example illustrates use of map when observer is attached in the activity.
**Activity**
```
class MainActivity : AppCompatActivity() {
lateinit var mBinding : ActivityMainBinding
private val mViewModel : MainViewModel by lazy {
getViewModel { MainViewModel(this.application) }
}
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
mBinding = DataBindingUtil.setContentView(this, R.layout.activity_main)
mBinding.vm = mViewModel
// adding observer
mViewModel.videoName.observe(this, Observer { value ->
value?.let {
//Toast.makeText(this, it, Toast.LENGTH_LONG).show()
}
})
}
}
```
**ViewModel** with map
```
class MainViewModel(val appContext : Application) : AndroidViewModel(appContext) {
private val TAG = "MainViewModel"
var videoData = MutableLiveData<VideoDownload>()
var videoName : LiveData<String>
init {
// Update the data
videoName = Transformations.map(videoData) { "updated : "+it.webUrl }
}
fun onActionClick(v : View) {
// change data
videoData.value = VideoDownload(System.currentTimeMillis().toString())
}
fun onReActionClick(v : View) {
// check data
Toast.makeText(appContext, videoName.value, Toast.LENGTH_LONG).show()
}
}
```
**ViewModel** with switchMap
```
class MainViewModel(val appContext : Application) : AndroidViewModel(appContext) {
private val TAG = "MainViewModel"
var videoData = MutableLiveData<VideoDownload>()
var videoName : LiveData<String>
init {
// Update the data
videoName = Transformations.switchMap(videoData) { modData(it.webUrl) }
}
private fun modData(str: String): LiveData<String> {
val liveData = MutableLiveData<String>()
liveData.value = "switchmap : "+str
return liveData
}
fun onActionClick(v : View) {
// change data
videoData.value = VideoDownload(System.currentTimeMillis().toString())
}
fun onReActionClick(v : View) {
// check data
Toast.makeText(appContext, videoName.value, Toast.LENGTH_LONG).show()
}
}
```
Upvotes: 0 <issue_comment>username_3: for me, it was because the observer owner was a fragment. It stopped triggering when navigating to different fragments. I changed the observer owner to the activity and it triggered as expected.
```
itemsViewModel.items.observe(requireActivity(), Observer {
```
The view model was defined as a class property:
```
private val itemsViewModel: ItemsViewModel by lazy {
ViewModelProvider(requireActivity()).get(ItemsViewModel::class.java)
}
```
Upvotes: 0 <issue_comment>username_4: If you really want it to be triggered:
```
fun <X, Y> LiveData<X>.forceMap(
mapFunction: (X) -> Y
): LiveData<Y> {
val result = MutableLiveData<Y>()
this.observeForever {x->
if (x != null) {
result.value = mapFunction.invoke(x)
}
}
return result
}
```
Upvotes: 0
|
2018/03/16
| 449
| 1,612
|
<issue_start>username_0: There is a variable in my PowerShell script called `$IncludeSubfolders` (either `0` or `1`)
Depending on its value, I would like to call the `Get-ChildItem` cmdlet either with or without the `-Recurse` option.
Currently, my code looks like this:
```
if($IncludeSubfolders) {
$Files = Get-ChildItem $RootPath -Name $FileMask -Recurse
} else {
$Files = Get-ChildItem $RootPath -Name $FileMask
}
```
I would like to avoid this branching and just have a single line of code calling `Get-ChildItem` with `-Recurse` flag if `$IncludeSubfolders` is `1` or without it if it is `0`.
Is this achievable? How?
Powershell version 5.0.10586.117<issue_comment>username_1: This should do it:
```
$Files = Get-ChildItem $RootPath -Name $FileMask -Recurse:$IncludeSubfolders
```
Upvotes: 1 <issue_comment>username_2: `$IncludeSubfolders` should ideally be a boolean value, not an integer, but it can be converted easily.
Even if you must keep it an integer, the code is the same.
```
$Files = Get-ChildItem $RootPath -Name $FileMask -Recurse:$IncludeSubfolders
```
Why?
----
Because `-Recurse` is a Switch parameter, which is a special type of boolean in which an explicit value is not needed. Present makes it true, absent makes it false. But all switch parameters can take an explicit value as well, using the colon directly after it.
If you try to use a non-boolean value in an expression where a boolean is expected, PowerShell will do its best to coalesce that value to boolean, and in this case `0` will become `$false` while `1` will become `$true`.
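A quick demonstration of both points (`C:\Temp` is just a placeholder path):

```
[bool]0                                  # False
[bool]1                                  # True
Get-ChildItem C:\Temp -Recurse:$false    # behaves as if -Recurse were omitted
Get-ChildItem C:\Temp -Recurse:1         # 1 is coerced to $true, so it recurses
```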
Upvotes: 5 [selected_answer]
|
2018/03/16
| 788
| 1,950
|
<issue_start>username_0: I have a zigzag border on the bottom of my element. How can I move it to the left side instead?
```css
.zigzag {
height: 150px;
width: 400px;
background: linear-gradient(-135deg, #e8117f 5px, transparent 0) 0 5px, linear-gradient(135deg, #e8117f 5px, #fff 0) 0 5px;
background-color: #e8117f;
background-position: left bottom;
background-repeat: repeat-x;
background-size: 10px 10px;
}
```
[](https://i.stack.imgur.com/ClWa2.png)<issue_comment>username_1: You should be able to change the linear-gradient degrees to achieve this, **and** set `background-repeat` to `repeat-y`.
```css
.zigzag {
height: 150px;
width: 400px;
background: linear-gradient(-137deg, #e8117f 6px, transparent 0) 0 5px, linear-gradient(320deg, #e8117f 5px, #fff 0) 0 5px;
background-color: #e8117f;
background-position: left bottom;
background-repeat: repeat-y;
background-size: 10px 10px;
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: I have made [an online generator](https://css-generators.com/custom-borders/) where you can easily get all the directions. It uses mask so it can work with any element and background coloration.
```css
.box {
display: inline-block;
width: 150px;
aspect-ratio: 1;
margin: 5px;
background: linear-gradient(45deg,red,blue);
-webkit-mask: var(--mask);
mask: var(--mask);
}
.top {
--mask: conic-gradient(from 135deg at top,#0000,#000 1deg 89deg,#0000 90deg) 50%/30px 100%;
}
.bottom {
--mask: conic-gradient(from -45deg at bottom,#0000,#000 1deg 89deg,#0000 90deg) 50%/30px 100%;
}
.left {
--mask: conic-gradient(from 45deg at left,#0000,#000 1deg 89deg,#0000 90deg) 50%/100% 30px;
}
.right {
--mask: conic-gradient(from -135deg at right,#0000,#000 1deg 89deg,#0000 90deg) 50%/100% 30px;
}
```
```html
<div class="box top"></div>
<div class="box bottom"></div>
<div class="box left"></div>
<div class="box right"></div>
```
Upvotes: 0
|
2018/03/16
| 871
| 3,006
|
<issue_start>username_0: I'm creating a Loopback application and have created a custom user model, based on built-in User model.
```
{
"name": "user",
"base": "User",
"idInjection": true,
"properties": {
"test": {
"type": "string",
"required": false
}
},
"validations": [],
"acls": [],
"methods": []
}
```
Then in a boot script I'm creating (if they don't exist) a new user, a new role, and a roleMapping.
```
User.create(
{ username: 'admin', email: '<EMAIL>', password: '<PASSWORD>' }
, function (err, users) {
if (err) throw err;
console.log('Created user:', users);
//create the admin role
Role.create({
name: 'admin'
}, function (err, role) {
if (err) throw err;
//make user an admin
role.principals.create({
principalType: RoleMapping.USER,
principalId: users.id
}, function (err, principal) {
if (err) throw err;
console.log(principal);
});
});
});
```
Then in a custom remote method I'm trying to get all roles for a user, using the user's id. [LoopBack's documentation on this topic](https://loopback.io/doc/en/lb3/HasMany-relations.html) says that
>
> Once you define a “hasMany” relation, LoopBack adds a method with the relation name to the declaring model class’s prototype automatically. For example: Customer.prototype.orders(...).
>
>
>
And gives this example:
>
> customer.orders([filter],
> function(err, orders) {
> ...
> });
>
>
>
But when I try to use the `User.roles()` method (`const User = app.models.user;`), I get the following error:
>
> TypeError: User.roles is not a function
>
>
>
But when I'm making a remote request `http://localhost:9000/api/users/5aab95a03e96b62718940bc4/roles`, I get the desired roleMappings array.
So, I would appreciate it if someone could help me get this data using JS. I know I can probably just query the RoleMappings model, but I wanted to do it the documented way.<issue_comment>username_1: The LoopBack documentation suggests [extending the built-in user model](http://loopback.io/doc/en/lb3/Extending-built-in-models.html)
to add more properties and functionalities.
A good practice is creating a model `Member` that extends the built-in model `User`. In the new model declare the following relationship:
```
"relations": {
  "roles": {
    "type": "hasMany",
    "model": "RoleMapping",
    "foreignKey": "principalId"
  }
}
```
Now, you can get all the user roles:
```js
user.roles(function (err, roles) {
// roles is an array of RoleMapping objects
})
```
where `user` is an instance of `Member`.
Upvotes: 2 <issue_comment>username_2: This is an old question, but I faced the same issue and was able to solve it by adding the relation username_1 suggested and accessing the roles like this:
```
const userInstance = await User.findById(userId);
const roles = await userInstance.roles.find();
```
Roles is not a function, it is an object. By the way this is using loopback 3.
Upvotes: 0
|
2018/03/16
| 707
| 2,800
|
<issue_start>username_0: I'm trying to build a simple Discord.js bot, but I'm having trouble adding user input to an array stored within a json file.
I'd like to have a single json file that will store information the bot can draw on, including an array of quotes that is also able to be added to by users. Currently, the settings.json file looks like this:
```
{ "token" : , //connection token here
"prefix" : "|", //the prefix within messages that identifies a command
"quotes" : [] //array storing a range of quotes
}
```
I can then draw information from the array, choosing a random quote from those currently stored, as shown below:
```
const config = require("./settings.json");
var quotes = config.quotes;
function randomQuote() {
return quotes[Math.floor(Math.random() * quotes.length)];
};
if(message.content.toLowerCase() === prefix + "quote") {
message.channel.send(randomQuote());
}
```
This all works as intended, but I can't for the life of me work out how to allow users to add a quote to the array (it'd use a command something like |addquote). I know that, for writing data to a json file, I'd use something like this:
```
var fs = require('fs');
let test = JSON.parse(fs.readFileSync("./test.json", "utf8"));
if(message.content.toLowerCase() === 'test') {
test++;
fs.writeFile("./test.json", JSON.stringify(test), (err) => {
if (err) console.error(err)
});
}
```
But for what I'm trying to do now - targeting a specific array within an existing JSON file that contains other data separate from the array, and adding a new entry rather than overwriting what's there - I'm pretty much stumped. I've looked around a lot, but either I haven't found what I'm looking for, or I couldn't understand it when I found it. Could anyone help me out here?<issue_comment>username_1: Push the new item into the array:
```
config.quotes.push(newQuote);
```
**Edit**: I should point out that using `require` to read a JSON file this way would likely cache it, so changes you make to the file might not show up the next time you `require` it.
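For reference, a small sketch of the full round-trip that sidesteps that cache by re-reading the file on every change (file name taken from the question):

```
var fs = require('fs');

function addQuote(newQuote) {
    // Re-read the file each time so we never work from a stale cached copy
    var config = JSON.parse(fs.readFileSync('./settings.json', 'utf8'));
    config.quotes.push(newQuote);
    // Rewrite the whole file; the other keys (token, prefix) are preserved
    fs.writeFileSync('./settings.json', JSON.stringify(config, null, 2));
}
```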
Upvotes: 3 [selected_answer]<issue_comment>username_2: A file can't be "edited" aside from appending at the end. Modifying it any other way requires doing so in memory and then overwriting the existing file with the in-memory value. That said, there are many concerns with using a file like this for storing user input, some of which include: limiting the size of the file; multiple processes possibly reading from the same file that can be overwritten by any one of them; a process crash or an error during a write that would render the file useless; etc.
Have you looked into a real data store? Any simple DB would work and this is actually what they are made for.
Upvotes: 0
|
2018/03/16
| 1,135
| 3,016
|
<issue_start>username_0: I have metrics at the rep and client level:
```
select r.rep_month, c.client_month,
count(distinct r.id) reps, count(distinct c.id) clients
from clients c
left join reps r on c.rep_id=r.id
```
This of course doesn't work because it gives all combinations of rep_month/client_month, and from a time series standpoint they should be calculated based on two different dates.
What I need is for reps to be calculated based on rep_month and clients to be calculated based on client_month, so there should be just one date in the output.
A generalized example is like so:
```
rep_date client_date reps clients
3/1/18 0:00 8/1/17 0:00 14 24
3/1/18 0:00 2/1/17 0:00 4 6
3/1/18 0:00 12/1/17 0:00 9 12
3/1/18 0:00 1/1/18 0:00 14 16
3/1/18 0:00 10/1/17 0:00 11 11
3/1/18 0:00 12/1/16 0:00 4 7
3/1/18 0:00 1/1/17 0:00 1 1
3/1/18 0:00 4/1/17 0:00 4 4
3/1/18 0:00 3/1/17 0:00 12 14
3/1/18 0:00 11/1/17 0:00 5 7
3/1/18 0:00 5/1/17 0:00 4 5
3/1/18 0:00 11/1/16 0:00 1 1
3/1/18 0:00 2/1/18 0:00 5 5
3/1/18 0:00 8/1/16 0:00 2 2
3/1/18 0:00 9/1/17 0:00 16 20
3/1/18 0:00 (null) 49 0
```
This would be the expected output:
```
date reps clients
3/1/18 49 135
```
But please note that **there can be cases where rep_date and client_date are not null**, so combining the two into `coalesce(client_date, rep_date)` won't work.
Thank you!<issue_comment>username_1: You can try:
```
Select r.rep_month, c.client, r.reps from
(select rep_month, count(distinct id) reps
from reps
group by rep_month) r
left join
(select client_month, count(distinct id) clients
from clients
group by client_month) c
on r.rep_month = c.client_month
```
It doesn't make sense that you can join on id, just join on month. Or, if there is a main table with the id in it, start with that table first. Plus, the clients table has to have all the months in it. If there are always reps every month, put that table first then left join.
Upvotes: 0 <issue_comment>username_2: I think you want something like this:
```
select mon, sum(reps) as reps, sum(clients) as clients
from ((select c.client_month as mon, count(*) as clients, 0 as reps
from clients c
group by c.client_month
) union all
(select r.rep_month, 0 as clients, count(*) as reps
from reps r
group by r.rep_month
)
) rc
group by mon
order by mon;
```
Notes:
* You can also do this with a `join`, but you have to deal with time periods that are missing from either table (i.e., you need `full outer join` and lots of `coalesce()`; see the sketch after this list).
* I am assuming that the `id`s are unique in each table, so `count(*)` and `count(distinct id)` do the same thing. The former is more efficient, because it does not incur overhead to remove duplicates.
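For comparison, the join variant from the first note might look like this sketch:

```
SELECT COALESCE(c.client_month, r.rep_month) AS mon,
       COALESCE(r.reps, 0) AS reps,
       COALESCE(c.clients, 0) AS clients
FROM (SELECT client_month, COUNT(*) AS clients
      FROM clients GROUP BY client_month) c
FULL OUTER JOIN
     (SELECT rep_month, COUNT(*) AS reps
      FROM reps GROUP BY rep_month) r
  ON c.client_month = r.rep_month
ORDER BY mon;
```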
Upvotes: 2 [selected_answer]
|
2018/03/16
| 1,159
| 3,224
|
<issue_start>username_0: Ok so I have two mysql tables, the first one is "`tbl_forum`":
```
|MSG_ID|MSG_QUERYID|MSG_ISPARENT| MSG_PARENTID
1 | 59 | 1 | 1
2 | 59 | 0 | 1
```
The second one is "`tbl_frs`":
```
|FRS_ID|FRS_QUERYID|FRS_PARENTID | FRS_CONTENT
1 | 59 | 1 | xxxx
2 | 59 | 1 | yyyy
```
I want a query which would yield all rows of both `tbl_forum` and `tbl_frs` (join) where `msg_queryid = 59` and `msg_isparent = 1`. So in the example above I would like to get a single row which would look like this:
```
|MSG_ID|MSG_QUERYID|MSG_ISPARENT| MSG_PARENTID | FRS_ID|FRS_QUERYID|FRS_PARENTID | FRS_CONTENT
|1 |59 | 1 | 1 | 1 | 59 | 1 | xxxx
```
I tried this:
```
SELECT * from tbl_frs
JOIN tbl_forum
ON msg_queryid=frs_queryid
WHERE msg_isparent=1
```
but it yielded two rows... (both msg_id 1 and 2...). How can I fix this?
edit
and the winner is:
```
SELECT /column names.../ FROM tbl_frs JOIN tbl_forum ON tbl_frs.FRS_PARENTID=tbl_forum.MSG_ID WHERE tbl_frs.FRS_QUERYID=59 AND tbl_forum.MSG_ISPARENT=1
```<issue_comment>username_1: If I understand, you want to select **ALL** from your two tables where `msg_queryid=frs_queryid AND msg_isparent=1`, right?
Btw, using `*` is not good practice; you should name your fields!
Try this maybe:
```
SELECT
FRS.FRS_ID, FRS.FRS_QUERYID, FRS.FRS_PARENTID, FRS.FRS_CONTENT,
F.MSG_ID, F.MSG_QUERYID, F.MSG_ISPARENT, F.MSG_PARENTID
FROM tbl_frs as FRS
INNER JOIN tbl_forum as F ON F.MSG_QUERYID=FRS.FRS_QUERYID AND F.MSG_ISPARENT=1
```
Upvotes: 1 <issue_comment>username_2: Please try below query:
```
SELECT * from tbl_forum JOIN tbl_frs ON msg_queryid = frs_queryid WHERE msg_isparent = 1
```
Thanks!
Upvotes: 0 <issue_comment>username_3: You should set the condition with `FRS_ID=1 AND MSG_ISPARENT =1` then you will get your expect.
You can try this.
```
SELECT
T1.MSG_ID,
T1.MSG_QUERYID,
T1.MSG_ISPARENT,
T1.MSG_PARENTID,
T2.FRS_ID,
T2.FRS_QUERYID,
T2.FRS_PARENTID,
T2.FRS_CONTENT
FROM tbl_frs AS T2
JOIN tbl_forum AS T1 ON T1.msg_queryid=T2.frs_queryid
WHERE T2.FRS_ID=1 AND T1.MSG_ISPARENT =1
```
[SQLFiddle](http://sqlfiddle.com/#!9/e1b6d0/6)
Upvotes: 1 <issue_comment>username_4: The query you have mentioned in your question has provided the correct answer to your problem.
```
SELECT * FROM tbl_frs JOIN tbl_forum ON msg_queryid = frs_queryid WHERE msg_isparent = 1
```
In the above query, you have told MySQL to fetch all rows where `MSG_ISPARENT` equals to `1`. So SQL takes the value from the column `MSG_QUERYID` (which is `59`) and then it matches the value with `tbl_frs.FRS_QUERYID`.
Now there are two rows in `FRS_QUERYID` with value `59`. So MySQL will return two rows.
>
> **If you add the condition `FRS_ID = 1` it won't meet your requirement. Be careful!!!**
>
>
>
Because it won't return ***"all rows of both `tbl_forum` and `tbl_frs` (join) where `msg_queryid = 59` and `msg_isparent = 1`"***.
The above query gives you the correct results. Don't change it unless you completely understand what's going on.
Upvotes: 0
|
2018/03/16
| 803
| 2,375
|
<issue_start>username_0: I have updated Notes from version 8.5.3 to 9.0.1, and the xPages properties include the field "minimum supported release". During the update I did not change the checked option. Does anyone know the impact of this field on the application?
[configuration location](https://i.stack.imgur.com/Z0Uvz.png)
|
2018/03/16
| 1,099
| 4,315
|
<issue_start>username_0: I have a Docker image on my local machine. When I try to run the image using the Jenkinsfile below:
```
agent {
docker {
image 'gcc/sample:latest'
args "-v /etc/passwd:/etc/passwd:ro"
}
}
```
Then I am getting the following error.
```
+ docker pull gcc/sample:latest
Pulling repository docker.io/gcc/sample
Error: image gcc/sample:latest not found
script returned exit code 1
```
Is there any way to tell the Jenkinsfile to look for the Docker image on my local machine instead of pulling it from docker.io?<issue_comment>username_1: Turns out that this is already a known issue: <https://issues.jenkins-ci.org/browse/JENKINS-47106>
The docker image is always pulled inside the agent definition. The only solution in the meantime is to push the image to Dockerhub or a private registry so that it will be successfully pulled.
Upvotes: 0 <issue_comment>username_2: For that, you need a Docker registry on your local machine or on the server where Jenkins is running.
Just run the docker registry container
```
docker run -d -p 5000:5000 --restart always --name registry registry:2
```
First tag your image
```
docker tag alpine localhost:5000/gcc/sample:latest
```
push that image to your local docker registry
```
docker push localhost:5000/gcc/sample:latest
```
Now if you want to pull that Docker image in Jenkins, give the full pull path including the registry host: anything pulled from a registry is addressed by its registry-qualified name.
```
docker pull localhost:5000/gcc/sample:latest
```
**A registry is a storage and content delivery system, holding named Docker images, available in different tagged versions.**
>
> docker pull ubuntu instructs docker to pull an image named ubuntu from the official Docker Hub. This is simply a shortcut for the longer docker pull docker.io/library/ubuntu command
>
>
> docker pull myregistrydomain:port/foo/bar instructs docker to contact
> the registry located at myregistrydomain:port to find the image
> foo/bar
>
>
> Running your own Registry is a great solution to integrate with and
> complement your CI/CD system.
>
>
>
<https://docs.docker.com/registry/introduction/>
Upvotes: 3 <issue_comment>username_3: In our environment using parallel stages prevents the pull action for some reason.
```
#!/usr/bin/env groovy
pipeline {
agent none
options {
timestamps()
}
stages {
stage('Parallel') {
parallel {
stage('EL6') {
agent {
docker {
image 'centos6'
label 'Docker&&EL'
}
}
steps {
...
}
}
}
}
}
}
```
Background: We typically use Multibranch Pipeline jobs with parallel steps. While testing something new I wrote up the simplest Jenkinsfile that utilizes docker. I used a simple Pipeline job with no parallel section and noticed that it always attempted a pull (and failed).
Relevant plugins:
```
Docker Commons Plugin
Provides the common shared functionality for various Docker-related plugins.
1.11
Downgrade to 1.11
Docker Pipeline
Build and use Docker containers from pipelines.
1.15
Pipeline: API
Plugin that defines Pipeline API.
2.28
Downgrade to 2.25
Pipeline: Basic Steps
Commonly used steps for Pipelines.
2.6
Pipeline: Declarative
An opinionated, declarative Pipeline.
1.2.7
Pipeline: Declarative Agent API
Replaced by Pipeline: Declarative Extension Points API plugin.
1.1.1
Pipeline: Declarative Extension Points API
APIs for extension points used in Declarative Pipelines.
1.2.7
Pipeline: Job
Defines a new job type for pipelines and provides their generic user interface.
2.17
Pipeline: Model API
Model API for Declarative Pipeline.
1.2.7
Pipeline: Stage Step
Adds the Pipeline step stage to delineate portions of a build.
2.3
Pipeline: Stage Tags Metadata
Library plugin for Pipeline stage tag metadata.
1.2.7
Pipeline: Stage View Plugin
Pipeline Stage View Plugin.
2.9
Pipeline: Step API
API for asynchronous build step primitive.
2.16
Downgrade to 2.14
Pipeline: Supporting APIs
Common utility implementations to build Pipeline Plugin
2.17
```
Upvotes: 0
|
2018/03/16
| 638
| 2,106
|
<issue_start>username_0: I have shaders kept in assets folder.
name of the shader (file name) : "vertex.vs"
path : assets/shaders/vertex.vs
I want to access this file from a C++ file in the NDK without calling Java or JNI.
From reading various resources I managed to understand that I have to use the header
```
#include <android/asset_manager.h>
```
After that I create pointers and open it.
```
const char* mPath = "shaders/vertex.vs";
AAssetManager* mAssetManager;
AAsset* mAsset;
mAsset = AAssetManager_open(mAssetManager, mPath,AASSET_MODE_UNKNOWN);
int foo = AAsset_getLength(mAsset);
LOGD( "This is a number: %d", foo );
AAsset_close(mAsset);
```
But it doesn't do anything.
And what about this read function?
```
AAsset_read(mAsset,pBuffer,bytesToRead);
```
Where is the data read to? How do I define the pBuffer?
Can someone share a simple example on how to read the data from a raw file and how to access it(Like showing it in logcat)?<issue_comment>username_1: You must initialize **mAssetManager** to begin with, we usually get it from Java via a JNI call, see e.g. [this answer](https://stackoverflow.com/a/11617834/192373). You can obtain this Java object in your C++ code [like this](https://stackoverflow.com/a/22436260/192373), but this still needs **JNIEnv**.
If you really really want to extract an asset from your APK with no JNI interaction, it not impossible. The trick is to [find your APK file](https://stackoverflow.com/q/7701801/192373) and trust that it is a ZIP file under the hood.
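Assuming you have obtained a valid `AAssetManager` (via JNI as described above), here is a sketch of the read step the question asks about; `AAsset_read` copies the asset's bytes into a buffer you provide and returns the number of bytes actually read:

```
#include <android/asset_manager.h>
#include <string>

// mgr must be a valid AAssetManager obtained from Java/JNI beforehand.
std::string readAsset(AAssetManager* mgr, const char* path) {
    AAsset* asset = AAssetManager_open(mgr, path, AASSET_MODE_BUFFER);
    if (!asset) return {};
    size_t len = AAsset_getLength(asset);
    std::string buf(len, '\0');
    int bytesRead = AAsset_read(asset, &buf[0], len); // fills our buffer
    AAsset_close(asset);
    return bytesRead >= 0 ? buf : std::string{};
}
```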
Upvotes: 4 [selected_answer]<issue_comment>username_2: >
> Can someone share a simple example on how to read the data from a raw file and how to access it(Like showing it in logcat)?
>
>
>
[Here](https://github.com/nkh-lab/ndk-config-provider) is an example project that unpacks a resource file from res/raw and puts it in the FilesDir location (e.g. /data/user/0/com.example.myapp/files), which is accessible from native code.
[There](https://github.com/nkh-lab/ndk-config-provider/blob/master/doc/activity-diagram.png) is also an activity diagram that describes the algorithm for this approach.
Upvotes: 1
|
2018/03/16
| 730
| 1,601
|
<issue_start>username_0: The elements of the array changed and became numbers that were never input.
```
#include <stdio.h>
#include <stdlib.h>
#define MAX_SIZE 1000
int cmp(int a, int b)
{
return a>b;
}
void sort(int *data, int n, int (*cmp)(int, int))
{
for (;n>1;n--)
{
int i_max = 0;
for (int i = 1; i < n; i++)
    if (cmp(data[i], data[i_max]))
        i_max = i;
int tmp = data[i_max];
data[i_max] = data[n - 1];
data[n - 1] = tmp;
}
}
```
input:
```
5
5
12
346
5676434535
765654543596
3543456
6
5783945
5293
237894
273894
73
237482
4
27895
719287349723947
1
34
7
3472893
74897598347
757
178
579875498234
129
84
5
420938
23
837485
279
29871
```
output:
```
*****************
after sorting:
12 346 3543456 1150364908 1381467239
(the last two numbers were never input before, and the former number disappeared)
*****************
*****************
after sorting:
73 5293 237482 237894 273894 5783945
*****************
*****************
after sorting:
1 34 27895 586728235
*****************
*****************
after sorting:
84 129 178 757 3472893 54913274 1883154315
*****************
```<issue_comment>username_1: The input you provide does not fit into an `int`. For example `765654543596` (hex `B244912CEC`) exceeds 32 bits, which is probably your `int` width. If you truncate it to 32 bits you will see exactly the mysterious `1150364908` (hex `44912CEC`) from the output.
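You can reproduce the truncation with a tiny standalone check:

```
#include <stdio.h>

int main(void)
{
    long long big = 765654543596LL; /* hex B244912CEC, needs more than 32 bits */
    int truncated = (int)big;       /* keeps only the low 32 bits: 44912CEC    */
    printf("%d\n", truncated);      /* prints 1150364908                       */
    return 0;
}
```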
Upvotes: 2 <issue_comment>username_2: Your program is actually correct, except you are feeding in more than an `int` can contain. Try the program with values not greater than **2147483647**, unless you want to change the datatype from `int` to possibly `unsigned int`. But still, there are limits to both types' maximum values.
Upvotes: 0
|
2018/03/16
| 588
| 2,088
|
<issue_start>username_0: If I use something like `$('button').click(function() { alert('hi'); }` and I add the following to my `<head>`:
```
<meta name="viewport" content="width=device-width">
```
..and I run it as a normal mobile site on iOS 11, there is no delay (=> expected behaviour).
**However**, if I run the **exact same** code in a compiled **Cordova** hybrid app on **iOS**, the delay is back! (=> not good)
Do I have to start using fastclick.js or some other workaround again like it's 2013? What about other Cordova developers on here: **do you experience the same problem?**
PS: It works fine on Android.
PPS: Adding `touch-action: manipulation;` in CSS doesn't help unfortunately<issue_comment>username_1: We experience the exact same problem.
Using FastClick as a workaround works for most iOS versions, except for the latest iOS release: 11.3.
After a fresh start of a Cordova app, FastClick works as expected but after a while (especially when calling native iOS calls like taking a photo), it shows weird behaviour where your input fields (nested in other divs) do not get selected at all and you have to tap multiple times to select an input field.
I have no clue why the 350ms delay is still active for (Cordova) apps and not for a standard mobile website opened in Safari.
Thanks to the tip from [username_2](https://stackoverflow.com/users/6160676/romain-le-qllc), I replaced fastclick with the fork that he mentioned (<https://github.com/lasselaakkonen/fastclick>).
This resolves the issues in both iOS 11.3 and 11.4.
Also big thanks to [lasselaakkonen](https://github.com/lasselaakkonen).
Upvotes: 2 <issue_comment>username_2: About the fastclick issue, apparently, it's a new bug introduced with iOs 11.3.
Here is the [full explanation](https://stackoverflow.com/questions/49500339/cant-prevent-touchmove-from-scrolling-window-on-ios)
And here is a fork which resolves the [fastclick issue with iOs 11.3](https://github.com/lasselaakkonen/fastclick/tree/fix-ios-11-3-event-timestamps)
I'm also looking for a workaround, since fastclick doesn't look to be maintained anymore...
Upvotes: 3 [selected_answer]
|
2018/03/16
| 676
| 2,235
|
<issue_start>username_0: I have the following message in a login-failed response. The response shows the date in UTC format. I want to get the date and convert it from UTC to local time.
I tried the following, but I'm still getting the same date format. Can anyone tell me what I'm doing wrong here?
```
var loginRes = 'Too many incorrect attempts. Account is locked until: 2018-03-16T05:13:58+00:00'
var dateRegx = /\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}-\d{2}:\d{2}/;
var ErrorMessage = (loginRes).replace(dateRegx,
function(match){
return moment(match).format("MMMM Do YYYY, h:mm:ss a");
});
console.log(ErrorMessage);
```
In my console output, I get the same string as loginRes. I was expecting something like:
`Too many incorrect attempts. Account is locked until: March 16th 2018, 8:13:14 pm`
|
2018/03/16
| 608
| 2,104
|
<issue_start>username_0: I'm trying to export HTML table to excel using angularJS. I went through so many sites and few blogs also, but I didn't get an appropriate answer. Any help/advice greatly appreciated.
This is what I was able to achieve so far:
```
```
Angularjs:
```
app.controller('Myctrl', function($scope){
$scope.xlms = function(){
var xl = '';
xl = xl + '';
xl = xl + 'Test Sheet';
xl = xl + '';
xl = xl + '';
xl = xl + document.getElementById('export').html(); --> 'export is id of html'
tab_text = tab_text + '';
}
}
```
Further, I do not know how to implement it.<issue_comment>username_1: Use the alasql CDN to export the data to XLS.
```
$scope.exportData = function () {
alasql('SELECT * INTO XLS("alexa.xls",?) FROM ?',[mystyle,$scope.items]);
}; //$scope.items array of objects //mystyle -format table type.
```
Please check out below plunker link for example reference for the same.
```
`https://plnkr.co/edit/Hc4nq1EQMNEbJJHb6MbU?p=preview`
```
Upvotes: 1 <issue_comment>username_2: Pass the tableid using the following code:
```
```
Then set the logic to export the html using the following code:
```
myApp.factory('Excel',function($window){
var uri='data:application/vnd.ms-excel;base64,',
template='<html xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:x="urn:schemas-microsoft-com:office:excel" xmlns="http://www.w3.org/TR/REC-html40"><head><!--[if gte mso 9]><xml><x:ExcelWorkbook><x:ExcelWorksheets><x:ExcelWorksheet><x:Name>{worksheet}</x:Name><x:WorksheetOptions><x:DisplayGridlines/></x:WorksheetOptions></x:ExcelWorksheet></x:ExcelWorksheets></x:ExcelWorkbook></xml><![endif]--></head><body><table>{table}</table></body></html>',
base64=function(s){return $window.btoa(unescape(encodeURIComponent(s)));},
format=function(s,c){return s.replace(/{(\w+)}/g,function(m,p){return c[p];})};
return {
tableToExcel:function(tableId,worksheetName){
var table=$(tableId),
ctx={worksheet:worksheetName,table:table.html()},
href=uri+base64(format(template,ctx));
return href;
}
};
})
.controller('MyCtrl',function($scope,Excel,$timeout){
$scope.xlms=function(tableId){ // ex: '#my-table'
$scope.exportHref=Excel.tableToExcel(tableId,'sheet name');
$timeout(function(){location.href=$scope.exportHref;},100); // trigger download
}
});
```
Upvotes: 0
|
2018/03/16
| 575
| 2,242
|
<issue_start>username_0: The default bootstrap modal adds a full page element that when clicking closes it. With `data-backdrop="static" data-keyboard="false"` you can disable this functionality and so that clicks outside the modal and the `esc` key do nothing.
I'd like to use a bootstrap modal that allows me to click, select and perform all the normal functions on the page without dismissing it. The only way to dismiss it would be to click buttons on the modal (`x` or cancel for example).
How can I use a bootstrap modal like that?
```
×
#### Settings
Settings
Cancel
Save
```<issue_comment>username_1: There were three steps I took to achieve the desired effect. My modal is prepended to the body, the steps may differ if yours is not.
1. Bootstrap appends a `div` element to the body (`body > .modal-backdrop`). It has the styles that cause the white "overlay" effect. This whole element can be deleted or the styles overridden.
2. A class is added to the body, `modal-open`. This has the css `overflow: hidden;`. Either remove that class, or override the css.
3. The actual modal will also need some css added. `width` or `max-width` worked for me. (makes scrolling of the modal work)
* `#myModal {max-width: 300px;}`
Don't make those changes for every modal, use a trigger to restrict them to a specific one.
```
$("#myModal").on("show.bs.modal shown.bs.modal", function(e) {
// Remove overlay and enable scrolling of body
$("body").removeClass("modal-open").find(".modal-backdrop").remove();
});
```
The above causes some "flashing" with the removal of `.modal-backdrop`. If that is unwanted, overriding the styles with css or preventing the default bootstrap action may be best.
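For instance, the CSS-only variant of steps 1 and 2 could be as small as this sketch (scope the selectors to your specific modal if other modals should keep the default behavior):

```css
/* No dimmed overlay, and the page behind stays scrollable */
.modal-backdrop { display: none; }
body.modal-open { overflow: visible; }
```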
Upvotes: 2 [selected_answer]<issue_comment>username_2: I saw no solution for what you ask and I needed to have this feature, I did a function to do that.
It's working with jquery ui draggable element but you can move out any code related to that and it should work as well.
<https://jsfiddle.net/wyf3zxek/>
```
modalDraggableClickOutside('#modal-example', 'iframeLoaded', 'Modal shown');
```
Note that it does not work very well in an iframe (such as JSFiddle): in an iframe you need to move the modal once.
Upvotes: 0
|
2018/03/16
| 717
| 2,739
|
<issue_start>username_0: Hello, I'm new to C#. I want to change the color of a button, then play a sound, and once the sound is over, change the text. But when I start the program and press the button, the program freezes; I get the sound, and only after the sound does the color change to green... Sorry for my bad English.
```
private void button1_Click(object sender, EventArgs e)
{
if (Frage.Text.Contains("Was ist Klein, Grün und Rund?"))
{
button1.BackColor = System.Drawing.Color.GreenYellow;
if(button1.BackColor == System.Drawing.Color.GreenYellow)
{
System.Media.SoundPlayer playerwin = new System.Media.SoundPlayer();
playerwin.SoundLocation = @"C:\Wer wird Behindert\winsound.wav";
playerwin.Load();
playerwin.Play();
if (playerwin.IsLoadCompleted)
{
playerwin.PlaySync();
Frage.Text = "Was ist besser?";
}
}
}else if(Frage.Text.Contains("Was ist besser?"))
{
button1.BackColor = System.Drawing.Color.Red;
}
}
```<issue_comment>username_1: There were three steps I took to achieve the desired effect. My modal is prepended to the body, the steps may differ if yours is not.
1. Bootstrap appends a `div` element to the body (`body > .modal-backdrop`). It has the styles that cause the white "overlay" effect. This whole element can be deleted or the styles overridden.
2. A class is added to the body, `modal-open`. This has the css `overflow: hidden;`. Either remove that class, or override the css.
3. The actual modal will also need some css added. `width` or `max-width` worked for me. (makes scrolling of the modal work)
* `#myModal {max-width: 300px;}`
Don't make those changes for every modal, use a trigger to restrict them to a specific one.
```
$("#myModal").on("show.bs.modal shown.bs.modal", function(e) {
// Remove overlay and enable scrolling of body
$("body").removeClass("modal-open").find(".modal-backdrop").remove();
});
```
The above causes some "flashing" with the removal of `.modal-backdrop`. If that is unwanted, overriding the styles with css or preventing the default bootstrap action may be best.
Upvotes: 2 [selected_answer]<issue_comment>username_2: I saw no solution for what you ask and I needed to have this feature, I did a function to do that.
It's working with jquery ui draggable element but you can move out any code related to that and it should work as well.
<https://jsfiddle.net/wyf3zxek/>
```
modalDraggableClickOutside('#modal-example', 'iframeLoaded', 'Modal shown');
```
Note that it's not working very well in an iframe (such as js fiddle) : in an iframe you need to move the modale once
Upvotes: 0
|
2018/03/16
| 550
| 1,667
|
<issue_start>username_0: I'm trying to make a script (bash or Python ideally, so I learn and don't just use it blindly) that will parse an XML file that looks like this:
```
<?xml version="1.0" encoding="UTF-8"?>
```
I'm trying to make a script that can return the different attributes. For example:
```
./script Raisin -c
Orange
./script Kiwi -v
No variety defined
./script "I am a fruit" -i
6
```
And so on. I've read a lot on XML parsing, but haven't found anything yet for that kind of XML file. Any help would be greatly appreciated.<issue_comment>username_1: Check out the xmlstarlet tool.
<http://xmlstar.sourceforge.net/doc/UG/xmlstarlet-ug.html>
Using xmlstarlet you can execute XPath queries from the command line.
Upvotes: 0 <issue_comment>username_2: Complete **`bash`** + **`xmlstarlet`** solution:
`get_attr.sh` script:
```
#!/bin/bash
name=$1
declare -A attr_map
attr_map=(["-c"]=color ["-i"]=id ["-v"]=variety)
if [[ -z "$2" ]]; then
echo "Additional attribute missing!"
exit 1
fi
if [[ -z "${attr_map[$2]}" ]]; then
echo "Unsupported attribute prefix. Allowed are: ${!attr_map[@]}"
exit 1
fi
attr="${attr_map[$2]}"
result=$(xmlstarlet sel -t -m "//fruit[@name='$name' and @$attr]" -v "./@$attr" input.xml)
if [[ -n "$result" ]]; then
echo "$result"
else
echo "No $attr attribute defined"
fi
```
---
Test cases:
```
$ bash get_attr.sh "Orange" -c
orange
$ bash get_attr.sh "Raisin" -v
4
$ bash get_attr.sh "Raisin" -d
Unsupported attribute prefix. Allowed are: -v -c -i
$ bash get_attr.sh "I am a fruit" -i
6
$ bash get_attr.sh "I am a fruit" -c
No color attribute defined
```
Upvotes: 2 [selected_answer]
|
2018/03/16
| 1,299
| 3,689
|
<issue_start>username_0: I have several order-detail tables in the source database: `Order Header -> Order Line -> Shipped Line -> Received Line`
I create a BQ table with two levels of nested repeated fields. Here is how some sample data looks like:
```
WITH stol as (
SELECT 1 AS stol_id, "stol-1.1" AS stol_number, 1 AS stol_transfer_order_line_id, 3 AS stol_quantity
UNION ALL
SELECT 2 AS stol_id, "stol-2.1" AS stol_number, 2 AS stol_transfer_order_line_id, 2 AS stol_quantity
UNION ALL
SELECT 3 AS stol_id, "stol-2.2" AS stol_number, 2 AS stol_transfer_order_line_id, 2 AS stol_quantity
UNION ALL
SELECT 4 AS stol_id, "stol-2.3" AS stol_number, 2 AS stol_transfer_order_line_id, 1 AS stol_quantity
),
rtol as (
SELECT 1 AS stol_id, "rtol-1.1" as rtol_number, 2 as rtol_quantity
UNION ALL
SELECT 1 as stol_id, "rtol-1.2" as rtol_number, 1 AS rtol_quantity
UNION ALL
SELECT 2 as stol_id, "rtol-2.1" as rtol_number, 2 AS rtol_quantity
UNION ALL
SELECT 3 as stol_id, "rtol-2.2" as rtol_number, 1 AS rtol_quantity
),
tol as (
SELECT 1 as tol_id, "tol-1" as tol_number, 3 as tol_transfer_quantity
UNION ALL
SELECT 2 as tol_id, "tol-2" AS tol_number, 5 AS tol_transfer_quantity
),
nest AS (
SELECT s.stol_id,
s.stol_number,
s.stol_quantity,
s.stol_transfer_order_line_id,
ARRAY_AGG(STRUCT(r.rtol_number, r.rtol_quantity)) as received
FROM stol s
LEFT JOIN rtol r ON s.stol_id = r.stol_id
GROUP BY 1, 2, 3, 4
),
final as (
SELECT t.tol_id
,t.tol_number
,t.tol_transfer_quantity
,ARRAY_AGG(STRUCT(n.stol_number, n.stol_quantity, n.received)) as shipped
FROM tol t
LEFT JOIN nest n ON t.tol_id = n.stol_transfer_order_line_id
GROUP BY 1, 2, 3
)
```
I want to `sum` the shipped and received quantities for each order line. I can get the correct result like so:
```
shipped as (
SELECT tol_number
,SUM(stol_quantity) as shipped_q
FROM final t, t.shipped
GROUP BY 1
),
received as (
SELECT tol_number
,SUM(rtol_quantity) as received_q
FROM final t, t.shipped s, s.received
GROUP BY 1
)
SELECT t.tol_number
,t.tol_transfer_quantity
,s.shipped_q
,r.received_q
FROM final t
LEFT JOIN shipped s on t.tol_number = s.tol_number
LEFT JOIN received r ON t.tol_number = r.tol_number
```
Correct results:
```
Row tol_number tol_transfer_quantity shipped_q received_q
1 tol-1 3 3 3
2 tol-2 5 5 3
```
What I am wondering is whether there is a better way to do this. Trying something like the following over-counts the first level of nesting, even though it feels and looks a lot cleaner:
```
SELECT tol_number
,tol_transfer_quantity
,SUM(stol_quantity) as shipped_q
,SUM(rtol_quantity) as shipped_r
FROM final t, t.shipped s, s.received
GROUP BY 1, 2
```
Wrong result for `shipped_q`:
```
Row tol_number tol_transfer_quantity shipped_q shipped_r
1 tol-2 5 5 3
2 tol-1 3 6 3
```
Many thanks for any ideas.<issue_comment>username_1: I'd suggest you use sub-selects in which you treat your arrays like tables:
```sql
SELECT
tol_id,
SUM(tol_transfer_quantity),
SUM( (SELECT SUM(stol_quantity) FROM final.shipped) ) shipped_q,
SUM( (SELECT SUM(rtol_quantity) FROM final.shipped s, s.received) ) shipped_r
FROM
final
GROUP BY
1
```
hth!
Upvotes: 2 <issue_comment>username_2: ```sql
#standardSQL
SELECT
tol_id,
tol_transfer_quantity,
(SELECT SUM(stol_quantity) FROM final.shipped) shipped_q,
(SELECT SUM(rtol_quantity) FROM final.shipped s, s.received) shipped_r
FROM final
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 1,213
| 4,711
|
<issue_start>username_0: I don't know how to solve this problem.
I can't find the error in my code.
Please help me solve it :( Thanks!
```
private void loadListFood() {
cart = new Database(this).getCarts();
adapter = new CartAdapter(cart,this);
recyclerView.setAdapter(adapter);
int total = 0;
for(Order order:cart)
total+=(Integer.parseInt(order.getPrice()))*(Integer.parseInt(order.getQuantity()));
Locale locale = new Locale("en", "US");
NumberFormat fmt = NumberFormat.getCurrencyInstance(locale);
txtTotalPrice.setText(fmt.format(total));
}
```
I am being redirected to
**total+=(Integer.parseInt(order.getPrice()))\*(Integer.parseInt(order.getQuantity()));**
Here is my adapter code:
```
public class CartAdapter extends RecyclerView.Adapter<CartViewHolder> {
private List<Order> listData = new ArrayList<>();
private Context context;
public CartAdapter(List<Order> cart, Cart cart1)
{
}
@Override
public CartViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
LayoutInflater inflater = LayoutInflater.from(context);
View itemView = inflater.inflate(R.layout.cartlayout,parent,false);
return new CartViewHolder(itemView);
}
@Override
public void onBindViewHolder(CartViewHolder holder, int position) {
TextDrawable drawable = TextDrawable.builder()
.buildRound(""+listData.get(position).getQuantity(), Color.RED);
holder.img_cart_count.setImageDrawable(drawable);
int price = (Integer.parseInt(listData.get(position).getPrice()))*(Integer.parseInt(listData.get(position).getQuantity()));
holder.txt_price.setText(price);
holder.txt_cart_name.setText(listData.get(position).getProductName());
}
@Override
public int getItemCount() {
return listData.size();
}
}
```<issue_comment>username_1: From JavaDoc: The method `Integer.parseInt(String s)` throws a `NumberFormatException`
>
> if the string does not contain a parsable integer.
>
>
>
That means, method `order.getPrice()` or `order.getQuantity()` returns `"130 PHP"` which is not a valid `Integer`.
Your real problem might be why the method returns a `String` and not an `Integer`, because you have to parse the `String` now. That is pretty error-prone and bad practice.
If your GUI element (or whatever) does not fit with `Integer`, at least remove the "PHP" from the input field, and you might be able to parse your `String` without manipulating it with String helper methods; otherwise, strip the non-numeric part before parsing, as sketched below.
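A defensive sketch of that idea (note it also strips signs and decimals, so it only suits plain non-negative integers):

```
// "130 PHP" -> "130" -> 130; still throws if no digit remains
String raw = listData.get(position).getPrice();
int price = Integer.parseInt(raw.replaceAll("[^0-9]", ""));
```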
Upvotes: 2 [selected_answer]<issue_comment>username_2: ```
class CartViewHolder extends RecyclerView.ViewHolder implements View.OnClickListener
, View.OnCreateContextMenuListener {
public TextView txt_cart_name,txt_price;
public ImageView img_cart_count;
private ItemClickListener itemClickListener;
public void setTxt_cart_name(TextView txt_cart_name) {
this.txt_cart_name = txt_cart_name;
}
public CartViewHolder(View itemView) {
super(itemView);
txt_cart_name = (TextView)itemView.findViewById(R.id.cart_item_name);
txt_price = (TextView)itemView.findViewById(R.id.cart_item_Price);
img_cart_count = (ImageView)itemView.findViewById(R.id.cart_item_count);
itemView.setOnCreateContextMenuListener(this);
}
@Override
public void onClick(View view) {
}
@Override
public void onCreateContextMenu(ContextMenu contextMenu, View view, ContextMenu.ContextMenuInfo contextMenuInfo) {
contextMenu.setHeaderTitle("Selecione uma Ação");
contextMenu.add(0,0,getAdapterPosition(),Common.DELETE);
}
}
public class CartAdapter extends RecyclerView.Adapter<CartViewHolder> {
private List<Order> listData = new ArrayList<>();
private Context context;
public CartAdapter(List<Order> listData, Context context) {
this.listData = listData;
this.context = context;
}
@Override
public CartViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
LayoutInflater inflater = LayoutInflater.from(context);
View itemView = inflater.inflate(R.layout.cart_layout,parent,false);
return new CartViewHolder(itemView);
}
@Override
public void onBindViewHolder(CartViewHolder holder, int position) {
TextDrawable drawable = TextDrawable.builder()
.buildRound(""+listData.get(position).getQuantity(), Color.BLUE);
holder.img_cart_count.setImageDrawable(drawable);
Locale locale = new Locale("pt","BR");
NumberFormat fmt = NumberFormat.getCurrencyInstance(locale);
int price = (Integer.parseInt(listData.get(position).getPrice()))*(Integer.parseInt(listData.get(position).getQuantity()));
holder.txt_price.setText(fmt.format(price));
holder.txt_cart_name.setText(listData.get(position).getProductName());
}
@Override
public int getItemCount() {
return listData.size();
}
}
```
Upvotes: 0
|
2018/03/16
| 440
| 1,544
|
<issue_start>username_0: Angular 4 application:
I have a list of models that I look through to create some table rows using \*ngFor. Each model has a list that contains some strings:
```
{
...,
changes:
[
'Test1',
'Test2'
]
}
```
The sample table and \*ngFor is as follows:
```
| ... |
...
| --- |
|
{{item?.test1}} |
{{item?.test2}} |
```
What this should do is, for every table definition there needs to be a check on the list of changes on each item to see if it exists. If it does, I want the style to apply otherwise nothing should happen. Several examples I have seen while researching simply state what I have above and yet this does not work.
I have tested directly in javascript to see if the values in the list are what I am expecting and I get correct results. I have also simply accessed the class using the class attribute
```
... |
```
and this works meaning it sees my css class.
I have also tried it this way:
```
ng-class="item.changes.indexOf('Test1') !== -1 ? 'cell-changed' : ''"
```
While this did not throw an error it also did not work. What am I missing?<issue_comment>username_1: You're mixing AngularJS and Angular approach. If you're using Angular 2+ you should use `[ngClass]` instead like:
```
<td [ngClass]="{'cell-changed': item.changes.indexOf('Test1') !== -1}">{{item?.test2}}</td>
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: `ng-class` does not exist in Angular. It's an AngularJS directive.
The cleanest way is to do something like that:
```
[class.cell-changed]="item.changes.indexOf('Test1') !== -1"
```
Upvotes: 0
|
2018/03/16
| 619
| 2,303
|
<issue_start>username_0: I am unable to write an array of dicts into a plist file. Writing a Dictionary to a plist file works fine.
My code is as follows; it always prints "not success":
```
if var plistArray = NSMutableArray(contentsOfFile: path)
{
plistArray.add(dict)
let success = plistArray.write(toFile: path, atomically: true)
if success
{
print("success")
}
else
{
print("not success")
}
}
```
What might be wrong?
BR,
Erdem<issue_comment>username_1: I think you are missing the serialization part, you need to convert your plist file to `data` first and then write to file.
```
let pathForThePlistFile = "your plist path"
// Extract the content of the file as NSData
let data = FileManager.default.contents(atPath: pathForThePlistFile)!
// Convert the NSData to mutable array
do{
let array = try PropertyListSerialization.propertyList(from: data, options: PropertyListSerialization.MutabilityOptions.mutableContainersAndLeaves, format: nil) as! NSMutableArray
array.add("Some Data")
// Save to plist
array.write(toFile: pathForThePlistFile, atomically: true)
}catch{
print("An error occurred while writing to plist")
}
```
Upvotes: 0 <issue_comment>username_2: First of all **do not use `NSMutableArray` in Swift at all**, use native Swift `Array`.
Second of all don't use the Foundation methods to read and write Property List in Swift, use `PropertyListSerialization`.
Finally Apple highly recommends to use the `URL` related API rather than `String` paths.
---
Assuming the array contains
```
[["name" : "foo", "id" : 1], ["name" : "bar", "id" : 2]]
```
use this code to append a new item
```
let url = URL(fileURLWithPath: path)
do {
let data = try Data(contentsOf: url)
var array = try PropertyListSerialization.propertyList(from: data, format: nil) as! [[String:Any]]
array.append(["name" : "baz", "id" : 3])
let writeData = try PropertyListSerialization.data(fromPropertyList: array, format: .xml, options:0)
try writeData.write(to: url)
} catch {
print(error)
}
```
Consider using the `Codable` protocol to be able to save property-list-compliant classes and structs to disk, as sketched below.
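A small sketch of that route; the `Item` struct is made up for illustration:

```
struct Item: Codable {
    let name: String
    let id: Int
}

let url = URL(fileURLWithPath: path)
do {
    // Decode the existing plist array, append, and encode it back
    let data = try Data(contentsOf: url)
    var items = try PropertyListDecoder().decode([Item].self, from: data)
    items.append(Item(name: "baz", id: 3))
    let encoder = PropertyListEncoder()
    encoder.outputFormat = .xml
    try encoder.encode(items).write(to: url)
} catch {
    print(error)
}
```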
Upvotes: 2
|
2018/03/16
| 2,267
| 5,457
|
<issue_start>username_0: I have a dataframe consisting of counts within 10 minute time intervals. How would I set count = 0 where a time interval doesn't exist?
**DF1**
```
import pandas as pd
import numpy as np
df = pd.DataFrame({ 'City' : np.random.choice(['PHOENIX','ATLANTA','CHICAGO', 'MIAMI', 'DENVER'], 10000),
'Day': np.random.choice(['Monday','Tuesday','Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'], 10000),
'Time': np.random.randint(1, 86400, size=10000),
'COUNT': np.random.randint(1, 700, size=10000)})
df['Time'] = pd.to_datetime(df['Time'], unit='s').dt.round('10min').dt.strftime('%H:%M:%S')
print(df)
COUNT City Day Time
0 441 PHOENIX Thursday 10:20:00
1 641 ATLANTA Monday 14:30:00
2 661 PHOENIX Saturday 03:50:00
3 570 MIAMI Tuesday 21:00:00
4 222 CHICAGO Friday 15:00:00
```
**DF2** - My approach is to create all the 10 minute time slots in a day (6*24 = 144 entries) and then use "not in"
```
df2 = pd.DataFrame({'TIME_BIN': np.arange(0, 86401, 600), })
df2['TIME_BIN'] = pd.to_datetime(df2['TIME_BIN'], unit='s').dt.round('10min').dt.strftime('%H:%M:%S')
TIME_BIN
0 00:00:00
1 00:10:00
2 00:20:00
3 00:30:00
4 00:40:00
5 00:50:00
6 01:00:00
7 01:10:00
8 01:20:00
```
How do I check if the timeslots in DF2 do not exist in DF1 for each city and day and if so, set count = 0? I basically just need to fill in all the missing time slots in DF1.
**Attempt:**
```
for each_city in df.City.unique():
for each_day in df.Day.unique():
df['Time'] = df.apply(lambda row: df2['TIME_BIN'] if row['Time'] not in (df2['TIME_BIN'].tolist()) else None)
```<issue_comment>username_1: I think need [`reindex`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html) by `MultiIndex` [`from_product`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.from_product.html):
```
np.random.seed(123)
df = pd.DataFrame({ 'City' : np.random.choice(['PHOENIX','ATLANTA','CHICAGO', 'MIAMI', 'DENVER'], 10000),
'Day': np.random.choice(['Monday','Tuesday','Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'], 10000),
'Time': np.random.randint(1, 86400, size=10000),
'COUNT': np.random.randint(1, 700, size=10000)})
df['Time'] = pd.to_datetime(df['Time'], unit='s').dt.round('10min').dt.strftime('%H:%M:%S')
df = df.drop_duplicates(['City','Day','Time'])
#print(df)
```
---
```
times = (pd.to_datetime(pd.Series(np.arange(0, 86401, 600)), unit='s')
.dt.round('10min')
.dt.strftime('%H:%M:%S'))
mux = pd.MultiIndex.from_product([df['City'].unique(),
df['Day'].unique(),
times],names=['City','Day','Time'])
df = (df.set_index(['City','Day','Time'])
.reindex(mux, fill_value=0)
.reset_index())
print (df.head(20))
City Day Time COUNT
0 CHICAGO Wednesday 00:00:00 66
1 CHICAGO Wednesday 00:10:00 205
2 CHICAGO Wednesday 00:20:00 260
3 CHICAGO Wednesday 00:30:00 127
4 CHICAGO Wednesday 00:40:00 594
5 CHICAGO Wednesday 00:50:00 683
6 CHICAGO Wednesday 01:00:00 203
7 CHICAGO Wednesday 01:10:00 0
8 CHICAGO Wednesday 01:20:00 372
9 CHICAGO Wednesday 01:30:00 109
10 CHICAGO Wednesday 01:40:00 32
11 CHICAGO Wednesday 01:50:00 184
12 CHICAGO Wednesday 02:00:00 630
13 CHICAGO Wednesday 02:10:00 108
14 CHICAGO Wednesday 02:20:00 35
15 CHICAGO Wednesday 02:30:00 604
16 CHICAGO Wednesday 02:40:00 500
17 CHICAGO Wednesday 02:50:00 367
18 CHICAGO Wednesday 03:00:00 118
19 CHICAGO Wednesday 03:10:00 546
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: One way is to convert to categories and use `groupby` to calculate Cartesian product.
In fact, given your data is largely categorical, this is a good idea and would yield memory benefits for large number of Time-City-Day combinations.
```
for col in ['Time', 'City', 'Day']:
df[col] = df[col].astype('category')
bin_cats = sorted(set(pd.Series(pd.to_datetime(np.arange(0, 86401, 600), unit='s'))\
.dt.round('10min').dt.strftime('%H:%M:%S')))
df['Time'] = df['Time'].cat.set_categories(bin_cats, ordered=True)
res = df.groupby(['Time', 'City', 'Day'], as_index=False)['COUNT'].sum()
res['COUNT'] = res['COUNT'].fillna(0).astype(int)
# Time City Day COUNT
# 0 00:00:00 ATLANTA Friday 521
# 1 00:00:00 ATLANTA Monday 767
# 2 00:00:00 ATLANTA Saturday 474
# 3 00:00:00 ATLANTA Sunday 1126
# 4 00:00:00 ATLANTA Thursday 157
# 5 00:00:00 ATLANTA Tuesday 720
# 6 00:00:00 ATLANTA Wednesday 0
# 7 00:00:00 CHICAGO Friday 1114
# 8 00:00:00 CHICAGO Monday 813
# 9 00:00:00 CHICAGO Saturday 137
# 10 00:00:00 CHICAGO Sunday 134
# 11 00:00:00 CHICAGO Thursday 0
# 12 00:00:00 CHICAGO Tuesday 168
# ..........
```
Upvotes: 1 <issue_comment>username_3: Then you can try the following:
```
df.groupby(['City','Day']).apply(lambda x : x.set_index('Time').reindex(df2.TIME_BIN.unique()).fillna({'COUNT':0}).ffill())
```
Upvotes: 0
|
2018/03/16
| 207
| 740
|
<issue_start>username_0: Can I fetch the details of the user who created the instance in AWS using
* instance-id
* ami id
* tag details
* or anything?
I want to contact the person who created a particular instance under a particular role. How can I achieve this?<issue_comment>username_1: You can query CloudTrail logs to find the user who started the instances.
Here is the Python Boto3 script I have created to list all the instances and owner.
<https://gist.github.com/sudharsans/39d5eaf8a82b7ccdf8b3230d13ba7d81>
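For reference, a minimal sketch of the same idea (the instance ID is a placeholder, and `lookup_events` only covers the last 90 days of management events):
```
import boto3

cloudtrail = boto3.client('cloudtrail')
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {'AttributeKey': 'ResourceName', 'AttributeValue': 'i-0123456789abcdef0'}
    ]
)
for event in response['Events']:
    if event['EventName'] == 'RunInstances':
        # 'Username' holds the IAM identity that launched the instance
        print(event.get('Username'), event['EventTime'])
```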
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can query the CloudTrail events, and if you need more detailed info, you can make use of AWS Config, which will give you even more granular details.
Upvotes: 0
|
2018/03/16
| 1,077
| 3,916
|
<issue_start>username_0: I was trying to do an exercise creating a `PhoneBook` using `HashMap`.
However, I see that my `addPhone` method doesn't add a new phone to my `PhoneBook` `pb`, i.e. the `data.put(name, num);` call inside `addPhone` doesn't seem to put the data into the `HashMap` `data`.
Can somebody explain to me what is wrong here?
**UPD**
Now I understand it was a mistake: I used the `containsValue` method instead of `containsKey`. So simple!
But this question is not similar at all to the suggested already existing question. I was not asking `Is checking for key existence in HashMap always necessary?` I know about the ways to search the `HashMap` according to the key or to the value. This question was actually caused by a mistake. However, I received very wide-ranging and useful answers here. I believe these answers, especially davidxxx's, are excellent and may be useful for many people.
```
import java.util.HashMap;
public class PhoneBook {
private HashMap<String, String> data;
public PhoneBook()
{
data = new HashMap<>();
}
public void addPhone(String name, String num)
{
data.put(name, num);
}
//a
public String getPhone(String name){
if(data.containsValue(name)){
return data.get(name);
}
else
return null;
}
//b
public void ToString(){
data.toString();
}
public static void main(String[] args) {
PhoneBook pb = new PhoneBook();
pb.addPhone("shlomi", "12312413yuioyuio24");
pb.addPhone("shlomi1", "1231345345241324");
pb.addPhone("shlomi2", "12312445735671324");
System.out.println(pb.getPhone("shlomi"));
System.out.println(pb.getPhone("blat"));
pb.ToString();
}
}
```<issue_comment>username_1: `data.containsValue(name)` checks whether the `HashMap` contains the **value** `name`. Since your `HashMap` contains name **keys** and number **values**, you should be calling `data.containsKey(name)`.
```
public String getPhone(String name){
if(data.containsKey(name)) {
return data.get(name);
} else
return null;
}
```
or simply
```
public String getPhone(String name) {
return data.get(name);
}
```
Upvotes: 1 <issue_comment>username_2: You provide the name that is the key to `data.containsValue(name)` instead of the value.
What you need is `Map.containskey()` if you want to return the value according to the key from the client side of your class.
Note that handling the existence in the map is not required as `null` is returned as no mapping exists for a key :
```
public String getPhone(String name){
return data.get(name);
}
```
---
**Side note**
Not the issue in the question but whatever an issue to handle.
`ToString()` is really not a good name for a method :
```
public void ToString(){
data.toString();
}
```
Method names are case sensitive, yes, but it is not a fair reason to play with that to define a slightly different naming (here is the `T` uppercase) to the `Object.toString()` method. It makes the code reading misleading.
Besides, your method returns nothing. So this is helpless : `pb.ToString();`
What you should declare is :
```
@Override
public String toString(){
return data.toString();
}
```
Adding `@Override` adds a compilation constraint that checks that the method is defined in the hierarchy.
Now you can for example write in the standard output the `toString()` representation of your `PhoneBook` object in this way :
```
System.out.println(pb);
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: HashMap maps keys to values so it contains key-value pairs.
`containsValue()` it returns true if map maps one or more keys to the specified **value**
`containsKey()` It returns true if map contains a mapping for the specified **key**
Your case has `name` as the key and `num` as the value. In the method `getPhone()` you have a parameter which corresponds to `name`, and hence you should use `containsKey()` instead of `containsValue()`.
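A quick illustration of the difference (assuming the usual `java.util` imports):
```
Map<String, String> data = new HashMap<>();
data.put("shlomi", "12312413");
data.containsKey("shlomi");     // true  - "shlomi" is a key
data.containsValue("shlomi");   // false - "shlomi" is not a value
data.containsValue("12312413"); // true
```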
Upvotes: 2
|
2018/03/16
| 280
| 1,129
|
<issue_start>username_0: I can only think of doing this with two streams. Is there a better way?
```
LocalDate lastLoginOrMigrationDate = Stream.of(lastLogin, migrationdate)
.filter(Objects::nonNull)
.max(Comparator.comparing(LocalDate::toEpochDay)).orElse(yesterday);
return Stream.of(lastLoginOrMigrationDate, yesterday)
.min(Comparator.comparing(LocalDate::toEpochDay)).orElse(yesterday);
```<issue_comment>username_1: In my opinion, you could take the resulting Optional from `.max` and filter it with `.isBefore(yesterday)`. If the resulting Optional is empty, then yesterday was before the max of lastLogin and migrationDate, so yesterday is returned; otherwise, the result will be their max.
Upvotes: 2 <issue_comment>username_2: I will do something like this
```
LocalDate result = Stream.of(lastLogin, migration)
.filter(Objects::nonNull)
.max(Comparator.comparing(LocalDate::toEpochDay))
.filter(max -> max.isBefore(yesterday))
.orElse(yesterday);
```
Question -> Why are you using `toEpochDay` for comparison rather than using the compareTo?
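For what it's worth, since `LocalDate` implements `Comparable`, a sketch of the same pipeline without `toEpochDay` would be:
```
LocalDate result = Stream.of(lastLogin, migration)
        .filter(Objects::nonNull)
        .max(Comparator.naturalOrder())
        .filter(max -> max.isBefore(yesterday))
        .orElse(yesterday);
```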
Upvotes: 1 [selected_answer]
|
2018/03/16
| 364
| 1,543
|
<issue_start>username_0: In the article for creating a dataaset for the TF object detection API [[link](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/instance_segmentation.md#materializing-data-for-instance-segmentation-materializing-instance-seg)], users are asked to store an object mask as:
>
> a repeated list of single-channel encoded PNG strings, or a single dense 3D binary tensor where masks corresponding to each object are stacked along the first dimension
>
>
>
Since the article strongly suggests using a `repeated list of single-channel encoded PNG strings`, I would particularly be interested in knowing how to encode this. My annotations are typically from `csv` files, from which I have no problem generating the TFRecords file. Are there any instructions somewhere on how to make this conversion?
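For anyone looking for the same thing, a minimal sketch of producing the repeated PNG strings with Pillow; the helper name and the `instance_masks` variable are illustrative, not from the linked article:
```
import io

import numpy as np
from PIL import Image

def mask_to_png_bytes(mask):
    # mask: 2D binary array of shape (height, width)
    img = Image.fromarray(mask.astype(np.uint8), mode='L')
    buf = io.BytesIO()
    img.save(buf, format='PNG')
    return buf.getvalue()

# one encoded PNG string per object instance, ready for a repeated bytes feature
encoded_masks = [mask_to_png_bytes(m) for m in instance_masks]
```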
|
2018/03/16
| 356
| 1,338
|
<issue_start>username_0: I'm trying to build an app and want to show error if someone didn't fill-up the form correctly. Can I change the style of each component in Android Studio using java to highlight the component.
What I've done so far is this:
**1.** In the styles.xml file, I declared a new style named errorstyle as follows:
```
<style name="errorstyle">
    <item name="android:textColor">#f45c42</item>
</style>
```
**2.** Secondly, in the Java file I tried to use the following code.
```
selectedDate.setTextAppearance(this, android.R.style.errorstyle);
```
And it gives me an error message in java file:
>
> cannot resolve symbol 'errorstyle'
>
>
>
Any suggestions?
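For anyone hitting the same error: a custom style defined in your app's `res/values/styles.xml` lives in the app's generated `R` class, not in the framework's `android.R`, so a hedged sketch of the fix would be:
```
// use the app's own R class for app-defined styles
selectedDate.setTextAppearance(this, R.style.errorstyle);
```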
|
2018/03/16
| 790
| 2,529
|
<issue_start>username_0: Since the great comment and link to the great post [Never parse markup with regex](https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454) by @kjhughes in my previous question [Regex repeat expression](https://stackoverflow.com/questions/49320902/regex-repeat-expression/49322209#49322209)
I have been changing as many unneeded regular expressions in my application which I used to remove content over writing a complete XPath.
But for the following I am wondering if there is also a way to solve it with XPath:
Data `Name: Herr FirstName LastName`
XPath so far: `//body//div/div/table/tr/td/div/table/tr[3]/td/div/table/tr/td/p[1]/span/text()`
Here I use the following regex: `(?<=Herr |Frau ).*`
This is because I only want the data `FirstName LastName`. The reason I am asking about a name again is that these are two different mails I am scraping, with different templates, and I want the application to be modular.
At the moment I still quite often just remove all unwanted text with a regex in the application; for this reason I want to know if it is also possible with XPath. This way I learn more about XPath scraping and do not harm unholy childs :)<issue_comment>username_1: Assuming that the `text()` value of the XPath that you provided was "Name: Herr FirstName LastName"
Here is an example of how you can use regex in an XPath 2.0 statement to select the `text()` node if it contains "Herr" or "Frau" using [`matches()`](https://www.w3.org/TR/xpath-functions-31/#func-matches) (positive lookahead and negative lookbehind are not currently supported), and then use [`replace()`](https://www.w3.org/TR/xpath-functions-31/#func-replace) with a regex on that `text()` node value with a capture group to select the value "First Last"
```
//body//div/div/
table/tr/td/div/
table/tr[3]/td/div/
table/tr/td/p[1]/
span/text()[matches(., "(Herr|Frau) ")]/replace(., '.*(Herr|Frau) (.*)', '$2')
```
Upvotes: 2 <issue_comment>username_2: As [<NAME> comments](https://stackoverflow.com/questions/49322994/xpath-to-solve-regex-violation#comment85647503_49322994), you needn't avoid using regex on *plain text* from XML – it's *markup* which shouldn't be parsed via regex.
[username_1 shows](https://stackoverflow.com/a/49324390/290085) how to use regex in XPath 2.0.
Here's a way to extract your targeted text if you only have XPath 1.0:
`substring(normalize-space(` ***your XPath here*** `), 12)`
Upvotes: 0
|
2018/03/16
| 769
| 3,005
|
<issue_start>username_0: I was learning hibernate, and had encountered the exception - javax.persistence.PersistenceException
But I did not understand the exact reason for this.
In which scenarios is this exception thrown?<issue_comment>username_1: PersistenceException is a runtime exception within JPA which may be thrown when calling the EntityManager's DB operations, e.g. find, persist, flush, lock, refresh, etc. This exception is the parent of the following exceptions:
EntityExistsException, EntityNotFoundException, NonUniqueResultException, NoResultException, OptimisticLockException, RollbackException, TransactionRequiredException.
You may use PersistenceException in order to catch any of the above exceptions in
your DAO class.
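For example, a sketch (assuming the usual `javax.persistence` imports; `em` and `MyEntity` are illustrative):
```
try {
    MyEntity e = em.createQuery("select e from MyEntity e where e.id = :id", MyEntity.class)
                   .setParameter("id", 1L)
                   .getSingleResult();
} catch (NoResultException | NonUniqueResultException ex) {
    // handle the specific subtypes first...
} catch (PersistenceException ex) {
    // ...or fall back to the common parent
}
```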
Upvotes: 0 <issue_comment>username_2: PersistenceException occurs with DB operations using the EntityManager.
**Scenarios:**
* *EntityNotFoundException* => Entity does not exist. eg. you are trying
to find UserData but there is no Table with such name
* *NonUniqueResultException* => Thrown by the persistence provider when
getSingleResult() is executed on a query and there is more than one
result from the query.
eg: em.getSingleResult(). but query more
than 1 rows
* *NoResultException* => Thrown by the persistence provider when
Query.getSingleResult() or TypedQuery.getSingleResult()is executed on
a query and there is no result to return.
and Many more...
<https://docs.oracle.com/cd/E17802_01/products/products/persistence/javadoc-1_0-fr/javax/persistence/PersistenceException.html>
Upvotes: 2 <issue_comment>username_3: I got this error when there was a datatype mismatch between the database table and JPA.
Example: in the table, the date column (trandate) was defined as varchar2, and I was assigning it to a Date object and trying to convert the date to a String.
```
@Transactional
public List getAgentList(String mode,String distributor) {
// TODO Auto-generated method stub
List query =null;
if (mode.equalsIgnoreCase("I") || mode.equalsIgnoreCase("O"))
{
query = (List)sessionFactory.getCurrentSession().createQuery("select a.agent,a.agentmnemonic,a.status,a.agentGroup from "
+ "BulkTransferVwbulktransferAgent a where a.status IS NULL and a.distributor =:p\_distributor")
.setParameter("p\_distributor", distributor).list();
}
List list= new ArrayList();
for (Iterator it = query.iterator(); it.hasNext(); ) {
BulkTransferVwbulktransferAgent vwbulktransferAgent = new BulkTransferVwbulktransferAgent();
Object[] myResult = (Object[]) it.next();
String agent = (String) myResult[0];
String agentmnemonic = (String) myResult[1];
String status = (String) myResult[2];
String agentGroup = (String) myResult[3];
vwbulktransferAgent.setAgent(agent);
vwbulktransferAgent.setAgentmnemonic(agentmnemonic);
vwbulktransferAgent.setStatus(status);
vwbulktransferAgent.setAgentGroup(agentGroup);
list.add(vwbulktransferAgent);
}
return list;
}
```
Sorry for the alignment; I am posting an answer for the first time. Please guide me if it's wrong.
Upvotes: 0
|
2018/03/16
| 1,475
| 5,055
|
<issue_start>username_0: I need to parse a field which is sometimes given as a date and sometimes as a date/time. Is it possible to use single datatype for this using Java 8 time API?
Currently, I attempted to use a LocalDateTime for it, but for following invocation `LocalDateTime.parse("1986-04-08", DateTimeFormatter.ofPattern("yyyy-MM-dd"))`
I get a
```
java.time.DateTimeException: Unable to obtain LocalDateTime from TemporalAccessor: {},ISO resolved to 1986-04-08 of type java.time.format.Parsed
```
This is part of some generic parser accepting a date/datetime parse pattern as configuration option. So e.g. following solution with hardcoded parsing pattern
```
if ("yyyy-MM-dd".equals(pattern)) {
LocalDate.parse(value, DateTimeFormatter.ofPattern("yyyy-MM-dd"))).atStartOfDay()
}
```
is not an option for me.
Any other suggestions how to code it in a clean way are welcome.<issue_comment>username_1: Just create custom formatter with the builder [DateTimeFormatterBuilder](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatterBuilder.html)
```
DateTimeFormatter formatter = new DateTimeFormatterBuilder()
.appendPattern("yyyy-MM-dd[ HH:mm:ss]")
.parseDefaulting(ChronoField.HOUR_OF_DAY, 0)
.parseDefaulting(ChronoField.MINUTE_OF_HOUR, 0)
.parseDefaulting(ChronoField.SECOND_OF_MINUTE, 0)
.toFormatter();
```
This formatter uses the `[]` brackets to allow optional parts in the format, and adds the default values for hour `HOUR_OF_DAY`, minute `MINUTE_OF_HOUR` and second `SECOND_OF_MINUTE`.
Note: you can omit minutes and seconds; just providing the hour is enough.
And use it as usual.
```
LocalDateTime localDateTime1 = LocalDateTime.parse("1994-05-13", formatter);
LocalDateTime localDateTime2 = LocalDateTime.parse("1994-05-13 23:00:00", formatter);
```
This outputs the correct date time with default hours of 0 (starting of the day).
```
System.out.println(localDateTime1); // 1994-05-13T00:00
System.out.println(localDateTime2); // 1994-05-13T23:00
```
Upvotes: 6 [selected_answer]<issue_comment>username_2: Jose's answer using `parseDefaulting` is nice. There's also another alternative, if you don't want to use a `DateTimeFormatterBuilder`.
First you create your formatter with an optional section - in this case, the time-of-day part, delimited by `[]`:
```
DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd[ HH:mm:ss]");
```
Then you call `parseBest`, providing the `String` to be parsed and a list of method references:
```
TemporalAccessor parsed = fmt.parseBest("1986-04-08", LocalDateTime::from, LocalDate::from);
```
In this case, it'll first try to create a `LocalDateTime`, and if it's not possible, it'll try to create a `LocalDate` (if none is possible, it'll throw an exception).
Then, you can check which type is returned, and act accordingly:
```
LocalDateTime dt;
if (parsed instanceof LocalDateTime) {
// it's a LocalDateTime, just assign it
dt = (LocalDateTime) parsed;
} else if (parsed instanceof LocalDate) {
// it's a LocalDate, set the time to whatever you want
dt = ((LocalDate) parsed).atTime(LocalTime.MIDNIGHT);
}
```
If the result is a `LocalDate`, you can choose to call `atStartOfDay()`, as suggested by others, or change to a specific time-of-day, such as `atTime(LocalTime.of(10, 30))` for 10:30 AM, for example.
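Putting it together, a small wrapper could look like this (the method name is mine):
```
static LocalDateTime parseDateOrDateTime(String value, DateTimeFormatter fmt) {
    TemporalAccessor parsed = fmt.parseBest(value, LocalDateTime::from, LocalDate::from);
    return parsed instanceof LocalDateTime
            ? (LocalDateTime) parsed
            : ((LocalDate) parsed).atStartOfDay();
}
```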
Upvotes: 3 <issue_comment>username_3: I know it's late for an answer, but it may help others...
Since a LocalDateTime needs a time set (otherwise you could just use a LocalDate), I searched for a built-in LocalDateTime solution to handle this.
I didn't find one, so I used the following approach:
```java
// For specific date (the same as the question)
LocalDateTime specificDate = LocalDateTime.of(LocalDate.of(1986, 4, 8), LocalTime.MIN);
```
**Other examples:**
```java
// For the start of day
LocalDateTime startToday = LocalDateTime.of(LocalDate.now(), LocalTime.MIN);
// For the end of day
LocalDateTime endOfToday = LocalDateTime.of(LocalDate.now(), LocalTime.MAX);
```
This way you don't need to use a formatter. :)
Upvotes: 0 <issue_comment>username_4: Based on <NAME>'s [answer](https://stackoverflow.com/questions/49323017/parse-date-or-datetime-both-as-localdatetime-in-java-8/49324132#49324132):
```
DateTimeFormatter formatter = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.append(DateTimeFormatter.ISO_LOCAL_DATE)
.optionalStart()
.appendLiteral(' ') // might be 'T' in your case
.append(DateTimeFormatter.ISO_LOCAL_TIME)
.optionalEnd()
.parseDefaulting(ChronoField.HOUR_OF_DAY, 0)
.parseDefaulting(ChronoField.MINUTE_OF_HOUR, 0)
.parseDefaulting(ChronoField.SECOND_OF_MINUTE, 0)
.toFormatter()
.withResolverStyle(ResolverStyle.STRICT); // parse STRICT instead of SMART!
```
Usage:
```
LocalDateTime localDateTime1 = LocalDateTime.parse("1994-05-13", formatter);
LocalDateTime localDateTime2 = LocalDateTime.parse("1994-05-13 23:00:00", formatter);
```
Upvotes: 0
|
2018/03/16
| 1,301
| 4,440
|
<issue_start>username_0: I have a strange issue with my laptop. The issue is that when I run an app, or while a Gradle build is running, the laptop goes to sleep. The issue only occurs while Android Studio is running.
The issue is with the new version of Android Studio, 3.0.1; previously it was working fine.
|
2018/03/16
| 2,183
| 8,259
|
<issue_start>username_0: I have been working on a solution for detecting overlapping timespans in SQL Server. This is to prevent the overlap of events, or in my case, work shifts.
While there has been a lot of discussion on SO for [finding overlapping date ranges](https://stackoverflow.com/questions/325933), there is less on overlapping timespans, and even less when the timespans are allowed to cross midnight. Through lots of research and testing, I've come up with the following solution.
```
CREATE TABLE Shift
(
ShiftID INT IDENTITY(1,1),
StartTime TIME(0) NOT NULL,
EndTime TIME(0) NOT NULL
);
CREATE PROCEDURE [dbo].[spInsertShift]
@StartTime TIME(0),
@EndTime TIME(0)
AS
BEGIN
DECLARE @ThrowMessage NVARCHAR(4000)
-- Check whether the new shift would overlap with any existing shifts.
IF EXISTS
(
SELECT 0
FROM Shift
WHERE
(
-- Case #1: Neither shift crosses midnight.
(@StartTime < @EndTime AND StartTime < EndTime)
-- New shift would overlap with an existing shift.
AND @StartTime < EndTime
AND StartTime < @EndTime
)
OR
(
-- Case #2: Both shifts cross midnight.
(@EndTime < @StartTime AND EndTime < StartTime)
-- New shift would overlap with an existing shift.
AND CAST(@StartTime AS DATETIME) < DATEADD(DAY, 1, CAST(EndTime AS DATETIME))
AND CAST(StartTime AS DATETIME) < DATEADD(DAY, 1, CAST(@EndTime AS DATETIME))
)
OR
(
-- Case #3: New shift crosses midnight, but the existing shift does not.
(@EndTime < @StartTime AND StartTime < EndTime)
AND
(
-- New shift would overlap with an existing shift.
@StartTime > StartTime AND @StartTime < EndTime
OR @EndTime > StartTime AND @EndTime < EndTime
OR
(
-- Existing shift would be inside new shift.
CAST(StartTime AS DATETIME) BETWEEN CAST(@StartTime AS DATETIME) AND DATEADD(DAY, 1, CAST(@EndTime AS DATETIME))
AND CAST(EndTime AS DATETIME) BETWEEN CAST(@StartTime AS DATETIME) AND DATEADD(DAY, 1, CAST(@EndTime AS DATETIME))
)
)
)
OR
(
-- Case #4: New shift does not cross midnight, but the existing shift does.
(@StartTime < @EndTime AND EndTime < StartTime)
AND
(
-- Existing shift would overlap with new shift.
StartTime > @StartTime AND StartTime < @EndTime
OR EndTime > @StartTime AND EndTime < @EndTime
OR
(
-- New shift would be inside an existing shift.
CAST(@StartTime AS DATETIME) BETWEEN CAST(StartTime AS DATETIME) AND DATEADD(DAY, 1, CAST(EndTime AS DATETIME))
AND CAST(@EndTime AS DATETIME) BETWEEN CAST(StartTime AS DATETIME) AND DATEADD(DAY, 1, CAST(EndTime AS DATETIME))
)
)
)
)
BEGIN
SET @ThrowMessage = 'A Shift already exists in this timespan.'
THROW 50140, @ThrowMessage, 1
END
INSERT INTO Shift
(
StartTime,
EndTime
)
VALUES
(
@StartTime,
@EndTime
)
END
```
[[SQL Fiddle]](http://sqlfiddle.com/#!18/0e5c6/1)
Since my timespans are allowed to cross midnight, I had to use some mechanism to denote that the EndTime is greater than the StartTime, despite the fact that `StartTime='22:00' > EndTime='06:00'` for example. I chose to `CAST` the EndTime to a `DATETIME` and add a day, in these cases.
**My question is:** what is the best way to detect overlapping timespans that are allowed to cross midnight? My current solution feels overly verbose and complex. When testing, please keep all [timespan test cases](https://i.stack.imgur.com/0c6q0.png) in mind. Thanks!
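One direction worth exploring (a sketch, not a tested replacement for the procedure above): split every midnight-crossing shift into two same-day segments, then apply the classic `StartA < EndB AND StartB < EndA` interval test once. The sketch approximates end-of-day as `23:59:59`, so overlaps confined to the final second are ignored:
```
;WITH NewShift AS (
    SELECT @StartTime AS StartTime, @EndTime AS EndTime
),
Seg AS (
    -- same-day portion of each existing shift
    SELECT 0 AS IsNew, StartTime AS S,
           CASE WHEN EndTime > StartTime THEN EndTime
                ELSE CAST('23:59:59' AS TIME(0)) END AS E
    FROM Shift
    UNION ALL
    -- post-midnight remainder of a wrapping existing shift
    SELECT 0, CAST('00:00:00' AS TIME(0)), EndTime
    FROM Shift
    WHERE EndTime <= StartTime
    UNION ALL
    -- same two rules applied to the new shift
    SELECT 1, StartTime,
           CASE WHEN EndTime > StartTime THEN EndTime
                ELSE CAST('23:59:59' AS TIME(0)) END
    FROM NewShift
    UNION ALL
    SELECT 1, CAST('00:00:00' AS TIME(0)), EndTime
    FROM NewShift
    WHERE EndTime <= StartTime
)
SELECT 1
FROM Seg a
JOIN Seg b
  ON a.IsNew = 1
 AND b.IsNew = 0
WHERE a.S < b.E  -- classic overlap test on the segments
  AND b.S < a.E;
```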
|
2018/03/16
| 3,866
| 11,169
|
<issue_start>username_0: I'd like to construct my crawler using selenium on my server.
Thus I had installed/downloaded the required dependencies, such as chromedriver and chromium-browser, on my Ubuntu 17.10 server.
However, when I run following code:
```
driver = webdriver.Chrome()
```
It returns following error:
```
---------------------------------------------------------------------------
WebDriverException Traceback (most recent call last)
in ()
----> 1 driver = webdriver.Chrome()
/home/zachary/.local/lib/python3.6/site-packages/selenium/webdriver/chrome/webdriver.py in __init__(self, executable_path, port, options, service_args, desired_capabilities, service_log_path, chrome_options)
66 service_args=service_args,
67 log_path=service_log_path)
---> 68 self.service.start()
69
70 try:
/home/zachary/.local/lib/python3.6/site-packages/selenium/webdriver/common/service.py in start(self)
96 count = 0
97 while True:
---> 98 self.assert_process_still_running()
99 if self.is_connectable():
100 break
/home/zachary/.local/lib/python3.6/site-packages/selenium/webdriver/common/service.py in assert_process_still_running(self)
109 raise WebDriverException(
110 'Service %s unexpectedly exited. Status code was: %s'
--> 111 % (self.path, return_code)
112 )
113
WebDriverException: Message: Service chromedriver unexpectedly exited. Status code was: 127
```
What does it mean that it 'unexpectedly exited'?
I can't figure out what this error code means or where to start fixing it.
It looks like a very rare case.
Maybe relevant:
I had installed Ubuntu Desktop 17.10 on my desktop but failed to get the GUI to boot, so I am using the terminal only; it works well so far.
I installed SSH and am remotely controlling a Jupyter notebook from my Mac on the server desktop, and these errors come from there.
I hope this info is relevant to solving this error; otherwise I will drop it.
```
from selenium import webdriver
driver = webdriver.Chrome(executable_path='/path/to/chromedriver')
driver.get("http://www.python.org")
```
Upvotes: 3 <issue_comment>username_2: It seems `chromedriver` needs some extra libraries. This solved the issue for me:
```
apt-get install -y libglib2.0-0=2.50.3-2 \
libnss3=2:3.26.2-1.1+deb9u1 \
libgconf-2-4=3.2.6-4+b1 \
libfontconfig1=2.11.0-6.7+b1
```
I was working on a similar setup using a docker container instead of a server/VM without X / GUI.
To figure out which dependencies are required I tried iteratively to run it from the command line like this: `/opt/chromedriver/2.33/chromedriver --version` over and over again.
Then each time I used commands like `apt-cache search` and `apt-cache madison` to figure out the exact version of the `deb` package needed by `chromedriver` 2.33 (in my case, but I guess something similar would work for any version of `chromedriver`).
**Edit**
As suggested in the comments, using the `ldd` command to print shared object dependencies may be another option. As of today my `chromedriver` version after a few years from the original answer is `83.0.4103.14` - the dependencies are different as well, but see below to get an idea of what could be missing:
```
$ /usr/local/bin/chromedriver --version
ChromeDriver 83.0.4103.14 (be04594a2b8411758b860104bc0a1033417178be-refs/branch-heads/4103@{#119})
$ ldd /usr/local/bin/chromedriver
linux-vdso.so.1 (0x00007fffff7f0000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f414739d000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f414737a000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f414736f000)
libglib-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0 (0x00007f4147246000)
libnss3.so => /usr/lib/x86_64-linux-gnu/libnss3.so (0x00007f41470f7000)
libnssutil3.so => /usr/lib/x86_64-linux-gnu/libnssutil3.so (0x00007f41470c4000)
libnspr4.so => /usr/lib/x86_64-linux-gnu/libnspr4.so (0x00007f4147082000)
libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007f4146f45000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4146df6000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f4146ddb000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4146be9000)
/lib64/ld-linux-x86-64.so.2 (0x00007f4147e56000)
libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f4146b76000)
libplc4.so => /usr/lib/x86_64-linux-gnu/libplc4.so (0x00007f4146b6d000)
libplds4.so => /usr/lib/x86_64-linux-gnu/libplds4.so (0x00007f4146b68000)
libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f4146b3e000)
libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007f4146b38000)
libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007f4146b30000)
libbsd.so.0 => /usr/lib/x86_64-linux-gnu/libbsd.so.0 (0x00007f4146b14000)
```
From `man ldd`:
>
> ldd prints the shared objects (shared libraries) required by each program or shared object specified on the command line.
>
>
> ...
>
>
> In the usual case, ldd invokes the standard dynamic linker (see ld.so(8))
> with the LD\_TRACE\_LOADED\_OBJECTS environment variable set to 1. This
> causes the dynamic linker to inspect the program's dynamic
> dependencies, and find (according to the rules described in ld.so(8))
> and load the objects that satisfy those dependencies. For each
> dependency, ldd displays the location of the matching object and the
> (hexadecimal) address at which it is loaded.
>
>
>
Upvotes: 6 <issue_comment>username_3: I encountered the same error when using selenium/chromedriver on my VPS. I installed `chromium-browser` and the problem was gone.
```
sudo apt-get install -y chromium-browser
```
Maybe it's not the `chromium-browser` is needed, but the packages were installed along with it. However, that was a quick fix.
Upvotes: 5 <issue_comment>username_4: I had this same issue, and the problem was due to the **chromedriver** version.
Please ensure you are using the latest [Chrome Browser](https://askubuntu.com/questions/510056/how-to-install-google-chrome) along with the latest [chromedriver](http://chromedriver.chromium.org/).
Upvotes: 0 <issue_comment>username_5: Solved by carefully removing existing chromedriver and updating it to a newer version:
1. Delete all existing chromedriver files
2. Download `wget https://chromedriver.storage.googleapis.com/2.46/chromedriver_linux64.zip` (replace 2.46 bit to the newer one if needed, see compatible versions here: <http://chromedriver.chromium.org/downloads>)
3. Unzip, convert to executable by running `chmod +x chromedriver`
4. Move it to `mv -f chromedriver /usr/local/bin/chromedriver` so it appears in PATH
This should solve an issue. I thought updating doesn't work because when I first tried it, I didn't remove the older version and I was still using it accidentally.
Upvotes: 1 <issue_comment>username_6: Reverting to older versions might also be a solution...
I am using Ubuntu 18.10 and installed the latest Selenium (3.141.0) and ChromeDriver (75.0.3770.8), but also had the same permission problems, and status code 127 afterwards.
I tried installing Chromium and noticed that Ubuntu was using Version 73. So I reverted from the latest version of Chromedriver (75 at this time), back to Version 73 and that worked for me.
Upvotes: 0 <issue_comment>username_7: I had a similar issue, but it turned out that my problem was an incorrectly set service\_log\_path, which was pointing to a deleted folder.
```
webdriver.Chrome(executable_path='/path/to/chromedriver', service_log_path='/path/to/existing/folder')
```
Upvotes: 3 <issue_comment>username_8: I was receiving the same selenium trace error:
>
> WebDriverException: Message: Service chromedriver unexpectedly exited. Status code was: 127
>
>
>
My issue was due to using a different version of chromedriver (version 78) than browser (version 79) when trying to manually run the chromedriver I would see
`Segmentation fault (core dumped)`
Once I updated my chromedriver to match the browser it was able to start successfully
>
> Starting ChromeDriver 79.0.3945.36 (3582db32b33893869b8c1339e8f4d9ed1816f143-refs/branch-heads/3945@{#614}) on port 9515
>
>
>
Only local connections are allowed.
Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
Upvotes: 1 <issue_comment>username_9: Run this command to troubleshoot: `./chromedriver` (where your chrome driver binary is).
You might see an error like this:
`./chromedriver: error while loading shared libraries: libnss3.so: cannot open shared object file: No such file or directory`
To solve this error, simply run: `sudo apt-get install libnss3`.
Then check again and see if it works this time: `./chromedriver`.
Some other packages might be missing as well. Here is an exhaustive list:
`gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget`
You probably don't need all of them but the ones you need are likely to be listed above.
Upvotes: 5 <issue_comment>username_10: I used the following script to install Chrome:
```
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
echo 'deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main' | tee /etc/apt/sources.list.d/google-chrome.list
apt update -y
apt install -y gconf-service libasound2 libatk1.0-0 libcairo2 libcups2 libfontconfig1 libgdk-pixbuf2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libxss1 fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils
apt install -y google-chrome-stable
```
Upvotes: 2 <issue_comment>username_11: This error can occur if you're running a IDE in the cloud. GITPOD etc. Try creating a local repo and VSCode (or your chosen IDE) and selenium should work fine.
Upvotes: 0 <issue_comment>username_12: @username_2 's answer need be updated
```
$ apt-get update -y
$ apt-get install -y libglib2.0-0 libnss3 libgconf-2-4 libfontconfig1
```
Upvotes: 2 <issue_comment>username_13: I got this issue fixed by upgrading the Selenium version.
A side note for those who install Selenium via `conda`: the latest Selenium version on conda is pretty far behind the version on `pip`, so please switch to a `pip` installation.
Upvotes: 0
|
2018/03/16
| 1,175
| 4,833
|
<issue_start>username_0: I'm trying to configure an instance of SSRS, but with little success.
I've installed SSRS on the server DWHFRONT. It runs under the Network Service credentials. On the Database tab of the Reporting Services Configuration Manager, I've set it up to use a database on DWHBACK, which it created successfully. The connection uses a domain account DOM\SA\_DWH. I've added a Login to the server DWHBACK for DOM\SA\_DWH, and I can see that the Reporting Services Manager added the authorisations to the SSRS-databases. The Configuration Manager accepts these connection parameters.
However (after setting up the URLs, Virtual Directories etc) if I visit the URL of DWHFRONT/Reports/, I get an error saying:
>
> **The service is not available.**
>
>
> The report server isn’t configured properly. Contact your system administrator to resolve the issue. System administrators: The report server can’t connect to its database because it doesn’t have permission to do so. Use Reporting Services Configuration Manager to update the report server database credentials.
>
>
>
At first I thought it might be an issue because there is nothing deployed to the SSRS instance yet. When I try to deploy something, however, I get this error in BIDS:
>
> The report server cannot open a connection to the report server database. A connection to the database is required for all requests and processing. ---> Microsoft.ReportingServices.Library.ReportServerDatabaseUnavailableException: The report server cannot open a connection to the report server database. A connection to the database is required for all requests and processing. (Microsoft.ReportingServices.Designer)
>
>
><issue_comment>username_1: 1. Check SQL Logs for any error
2. Run SQL Profile on SQL Server to catch any connection/permissions errors
3. Try to re-enter permissions for a domain account
4. Check a domain account is not disabled / locked / password expired
5. Try to run SSRS Service under same AD account that connects Report Server DB.
6. Try to enable also Named Pipes on both servers (protocols and client protocols)
7. Try to delete encryption keys and to re-create them.
8. Try another domain user and check granted permissions in all DBs (ReportServer, ReportServer Temp, msdb), check if DB user in each DB maps to the valid SID)
9. Try to grant SQL server sysadmin permissions for that AD account (temporary).
10. Check SSRS Logs for any errors (it's good if you attach them).
11. Check Event Viewer --> Security --> Failed Audit on SQL Server
Upvotes: 3 <issue_comment>username_2: Can you perform the following test to check whether it resolves your issue?
1. Add account [DOM\DWHFRONT$] with temporary high privileges, like
SYSADMIN to SQL Instance DWHBACK (SA is a temporal measure, to be removed after test)
2. Change SSRS Service account to: "Local System"
3. Recreate ReportServer database and choose credentials: "Service Credentials"
p.s. we had issues with recent versions of SSRS running under Network Service credentials..
Upvotes: 0 <issue_comment>username_3: This sounds a lot like a "double-hop" issue.
This can be confirmed by [enabling remote errors](https://learn.microsoft.com/en-us/sql/reporting-services/report-server/enable-remote-errors-reporting-services) on the Reporting Services instance. Then, when you connect to the report portal you should see your original error along with "login failed for user NTAUTHORITY\ANONYMOUS LOGON". If so, then it is highly likely that this is a Kerberos delegation issue.
When you host the SSRS service and the target database on different servers, you will almost always experience the "double-hop" issue, which will require Kerberos delegation to be configured to enable SSRS to reuse the user credentials to access the database across servers.
[<NAME>](https://blog.sqlauthority.com/2015/06/13/sql-server-login-failed-for-user-nt-authorityanonymous-logon/) wrote a nice blog about this in the context of SQL Server.
To enable delegation, you first need to ensure you have your SPNs configured correctly. [Here](https://www.microsoft.com/en-us/download/details.aspx?id=39046) is a great utility to help with this. I would run this on both your SSRS server AND your database server to ensure both servers have their SPNs configured correctly.
Once you have your SPNs configured, you then need to [enable delegation](https://blogs.msdn.microsoft.com/autz_auth_stuff/2011/05/03/kerberos-delegation/) in Active Directory. This will need to be done for your DOM\SA\_DWH account.
Upvotes: 0 <issue_comment>username_4: Thanks for all the suggestions; they put me on the right track!
The Domain Account SA\_DWH was prohibited from interactive logon. SSRS apparently does require this. The Interactive Logon restriction was lifted and now it works.
Upvotes: 1 [selected_answer]
|
2018/03/16
| 760
| 2,001
|
<issue_start>username_0: A simple table:
```
CREATE TABLE `tbl_type_test` (
`uid` varchar(31) NOT NULL DEFAULT '0',
`value` varchar(15) NOT NULL DEFAULT '',
PRIMARY KEY (`uid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```
and some records:
```
'6011656821812318924', 'a'
'6011656821812318925', 'b'
'6011656821812318926', 'c'
```
when I execute the following SQL, 3 records will return
```
select * from tbl_type_test where uid = 6011656821812318924;
```
and this will return 3 records, too. Weird.
```
select * from tbl_type_test where uid = 6011656821812318900;
```
if I change the number to string type, as expected, only 1 record will return:
```
select * from tbl_type_test where uid = '6011656821812318924';
```
I think the number type and length in the query is the reason, but I don't know the exact cause.
Any comment will be greatly appreciated.<issue_comment>username_1: MySQL by default treats 1 and '1' the same; however, you can change that by setting the MySQL behavior to strict mode:
```
SET @@GLOBAL.sql_mode = "STRICT_ALL_TABLES";
SET @@SESSION.sql_mode = "STRICT_ALL_TABLES";
```
Or you can set `sql_mode` in your my.cnf file to make the change permanent. This way MySQL will throw an error if an incorrect type is used. Read
<https://dev.mysql.com/doc/refman/5.7/en/constraint-invalid-data.html>
Regards
Upvotes: 0 <issue_comment>username_2: *In all other cases, the arguments are compared as floating-point (real) numbers.* - <https://dev.mysql.com/doc/refman/5.7/en/type-conversion.html>
for example
```
drop procedure if exists p;
delimiter $$
create procedure p (inval float, inval2 float, inval3 float)
select inval,inval2,inval3;
call p(6011656821812318924,6011656821812318925,6011656821812318926);
+------------+------------+------------+
| inval | inval2 | inval3 |
+------------+------------+------------+
| 6.01166e18 | 6.01166e18 | 6.01166e18 |
+------------+------------+------------+
1 row in set (0.00 sec)
```
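To see the rule in action without the table (string vs. number comparisons go through DOUBLE, string vs. string comparisons keep full precision):
```
SELECT '6011656821812318924' = 6011656821812318900;   -- 1: both become the same DOUBLE
SELECT '6011656821812318924' = '6011656821812318900'; -- 0: compared as strings
```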
Upvotes: 1
|
2018/03/16
| 912
| 3,674
|
<issue_start>username_0: In my Symfony 3.4 application, the user is automatically logged out after a certain period of time. I want to change this behaviour and make my application never log out automatically. It should log out the session only when the user clicks on the logout link.
I have read the documentation and tried setting the cookie\_lifetime, but it is not working for me. If anybody has worked in this area, please suggest how to proceed.
**Updates:**
I'm using this documentation page <http://symfony.com/doc/master/components/http_foundation/session_configuration.html#session-lifetime>
I'm using Symfony 3.4 flex based project.
I'm setting the configurations in config/packages/framework.yml. The configurations are as follows:
```
framework:
session:
handler_id: ~
cookie_lifetime: 31536000
gc_maxlifetime: 31536000
```<issue_comment>username_1: After a long debugging, I found out that the following configuration is telling Symfony to use the default PHP save handler and the default session file path.
```
framework:
session:
handler_id: ~
```
Hence Symfony session files are being stored in `/var/lib/php/sessions` directory. In Debian based operating systems, a cron job is deleting the session files every half an hour. This cron job is identifying the active sessions based on the `PIDs` associated with `apache2` and updating the last accessed time and last modification time of these active session files only.
Then the same cron job deletes the session files whose last modification time is older than `gc_maxlifetime`, i.e. inactive sessions. The main problem is that `gc_maxlifetime` is determined based on the `php.ini` files only, not on Symfony's `.yaml` files. Hence the configurations in Symfony's `.yaml` files are ignored and PHP's `gc_maxlifetime` is used.
This causes the session files to be deleted after 20 to 30 minutes. To fix this problem, I have updated the `.yaml` configurations as follows:
```
framework:
session:
handler_id: session.handler.native_file
save_path: '%kernel.project_dir%/var/sessions/%kernel.environment%'
cookie_lifetime: 31536000
gc_maxlifetime: 31536000
```
Now the session files are not stored inside the default `/var/lib/php/sessions` directory, and hence the cron job does not delete them. Symfony now takes care of this session handling job, and it works perfectly.
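As a side note, on a Debian-based box you can inspect both the cleanup job and the new session location directly (the cron path may vary per PHP version):
```
cat /etc/cron.d/php          # the cron job that runs the session cleanup
ls var/sessions/prod/        # where Symfony now stores its session files
```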
Upvotes: 5 [selected_answer]<issue_comment>username_2: This is the solution for symfony 4.
```
session:
#handler_id: ~
handler_id: session.handler.native_file
save_path: '%kernel.project_dir%/var/sessions/%kernel.environment%'
cookie_lifetime: 1800 # was "lifetime", which is deprecated
```
Upvotes: 0 <issue_comment>username_3: Just in case there's [RedisSessionHandler configured for session storage](https://symfony.com/doc/current/session/database.html#store-sessions-in-a-key-value-database-redis), one should also consider increasing the `ttl` parameter passed into the service:
```
# config/services.yaml
services:
# ...
Symfony\Component\HttpFoundation\Session\Storage\Handler\RedisSessionHandler:
arguments:
- '@Redis'
# you can optionally pass an array of options. The only options are 'prefix' and 'ttl',
# which define the prefix to use for the keys to avoid collision on the Redis server
# and the expiration time for any given entry (in seconds), defaults are 'sf_s' and null:
- { 'prefix': 'my_prefix', 'ttl': 600 } # also set equal 31536000
```
Upvotes: 0
|
2018/03/16
| 2,047
| 4,203
|
<issue_start>username_0: I have 2 sets of geo-codes as pandas series and I am trying to find the fastest way to get the minimum euclidean distance of points in set A from points in set B.
That is: the closest point to 40.748043 & -73.992953 from the second set,and so on.
Would really appreciate any suggestions/help.
```
Set A:
print(latitude1)
print(longitude1)
0 40.748043
1 42.361016
Name: latitude, dtype: float64
0 -73.992953
1 -71.020005
Name: longitude, dtype: float64
Set B:
print(latitude2)
print(longitude2)
0 42.50729
1 42.50779
2 25.56473
3 25.78953
4 25.33132
5 25.06570
6 25.59246
7 25.61955
8 25.33737
9 24.11028
Name: latitude, dtype: float64
0 1.53414
1 1.52109
2 55.55517
3 55.94320
4 56.34199
5 55.17128
6 56.26176
7 56.27291
8 55.41206
9 52.73056
Name: longitude, dtype: float64
```<issue_comment>username_1: You can try using the geopy library.
<https://pypi.python.org/pypi/geopy>
Here's an example from the documentation.
```
>>> from geopy.distance import vincenty
>>> newport_ri = (41.49008, -71.312796)
>>> cleveland_oh = (41.499498, -81.695391)
>>> print(vincenty(newport_ri, cleveland_oh).miles)
538.3904451566326
```
where `vincenty` is the Vincenty distance:
<https://en.wikipedia.org/wiki/Vincenty%27s_formulae>
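A brute-force nearest-neighbour search built on this could look like the sketch below (fine for small sets; newer geopy versions replace `vincenty` with `geopy.distance.geodesic`):
```
from geopy.distance import vincenty

points_a = list(zip(latitude1, longitude1))
points_b = list(zip(latitude2, longitude2))

# for each point in set A, pick the closest point in set B
closest = [min(points_b, key=lambda b: vincenty(a, b).km) for a in points_a]
```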
Upvotes: 2 <issue_comment>username_2: This is one way using just [`numpy.linalg.norm`](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.norm.html).
```
import pandas as pd, numpy as np
df1['coords1'] = list(zip(df1['latitude1'], df1['longitude1']))
df2['coords2'] = list(zip(df2['latitude2'], df2['longitude2']))
def calc_min(x):
amin = np.argmin([np.linalg.norm(np.array(x)-np.array(y)) for y in df2['coords2']])
return df2['coords2'].iloc[amin]
df1['closest'] = df1['coords1'].map(calc_min)
# latitude1 longitude1 coords1 closest
# 0 40.748043 -73.992953 (40.748043, -73.992953) (42.50779, 1.52109)
# 1 42.361016 -71.020005 (42.361016, -71.020005) (42.50779, 1.52109)
# 2 25.361016 54.000000 (25.361016, 54.0) (25.0657, 55.17128)
```
**Setup**
```
from io import StringIO
mystr1 = """latitude1|longitude1
40.748043|-73.992953
42.361016|-71.020005
25.361016|54.0000
"""
mystr2 = """latitude2|longitude2
42.50729|1.53414
42.50779|1.52109
25.56473|55.55517
25.78953|55.94320
25.33132|56.34199
25.06570|55.17128
25.59246|56.26176
25.61955|56.27291
25.33737|55.41206
24.11028|52.73056"""
df1 = pd.read_csv(StringIO(mystr1), sep='|')
df2 = pd.read_csv(StringIO(mystr2), sep='|')
```
If performance is an issue, you can vectorize this calculation fairly easily via the underlying numpy arrays.
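For instance, a sketch of one vectorized variant using broadcasting, reusing the same `df1`/`df2` as above:
```
import numpy as np

a = np.array(df1['coords1'].tolist())  # shape (n, 2)
b = np.array(df2['coords2'].tolist())  # shape (m, 2)

# (n, m) matrix of pairwise euclidean distances via broadcasting
dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
df1['closest'] = [tuple(row) for row in b[dists.argmin(axis=1)]]
```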
Upvotes: 3 [selected_answer]<issue_comment>username_3: For those closest point calculations, usually the efficient method is with one of those kd-tree based quick nearest-neighbor lookup. Employing [`Cython-powered implementation`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.query.html), we would have one approach like so -
```
from scipy.spatial import cKDTree
def closest_pts(setA_lat, setA_lng, setB_lat, setB_lng):
a_x = setA_lat.values
a_y = setA_lng.values
b_x = setB_lat.values
b_y = setB_lng.values
a = np.c_[a_x, a_y]
b = np.c_[b_x, b_y]
indx = cKDTree(b).query(a,k=1)[1]
return pd.Series(b_x[indx]), pd.Series(b_y[indx])
```
Sample run -
1) Inputs :
```
In [106]: setA_lat
Out[106]:
0 40.748043
1 42.361016
dtype: float64
In [107]: setA_lng
Out[107]:
0 -73.992953
1 -71.020005
dtype: float64
In [108]: setB_lat
Out[108]:
0 42.460000
1 0.645894
2 0.437587
3 40.460000
4 0.963663
dtype: float64
In [109]: setB_lng
Out[109]:
0 -71.000000
1 0.925597
2 0.071036
3 -72.000000
4 0.020218
dtype: float64
```
2) Outputs :
```
In [110]: c_x,c_y = closest_pts(setA_lat, setA_lng, setB_lat, setB_lng)
In [111]: c_x
Out[111]:
0 40.46
1 42.46
dtype: float64
In [112]: c_y
Out[112]:
0 -72.0
1 -71.0
dtype: float64
```
Upvotes: 1
|
2018/03/16
| 585
| 1,939
|
<issue_start>username_0: In Bootstrap 3 there were optional icons for each of the validation states. The icon would appear on the right side of the input using the `has-feedback`, `has-success`, `has-danger`, etc. classes.
[](https://i.stack.imgur.com/ZHi1G.png)
How can I get this same functionality in Bootstrap 4 using the `valid-feedback` or `invalid-feedback` classes?<issue_comment>username_1: **Bootstrap 4** doesn't include icons (glyphicons are gone), and there are now just 2 validation states (`is-valid` and `is-invalid`) that control display of the `valid-feedback` and `invalid-feedback` text.
With a little extra CSS, you can position an icon inside the input (to the right), and control its display using `is-valid` or `is-invalid` on the `form-control` input. Use a font lib like fontawesome for the icons. I created a new `feedback-icon` class that you can add to the `valid/invalid-feedback`.
```
.valid-feedback.feedback-icon,
.invalid-feedback.feedback-icon {
position: absolute;
width: auto;
bottom: 10px;
right: 10px;
margin-top: 0;
}
```
HTML
```
Valid with icon
```
[Demo of input validation icons](https://www.codeply.com/go/w7E80lxnMD)
[Demo with working validation](https://www.codeply.com/go/ufSwK5njxe)
```css
.valid-feedback.feedback-icon,
.invalid-feedback.feedback-icon {
position: absolute;
width: auto;
bottom: 10px;
right: 10px;
margin-top: 0;
}
```
```html
Valid with icon
```
Notice that the containing `form-group` is `position:relative` using the `position-relative` class.
Upvotes: 4 [selected_answer]<issue_comment>username_2: A simple way to do it with Bootstrap 4 is using stylized [input groups](https://getbootstrap.com/docs/4.6/components/input-group/):
```html
```
Obviously you have to include the icons for it to show up:
```html
```
Upvotes: 2
|
2018/03/16
| 787
| 2,869
|
<issue_start>username_0: On Windows 7 Dev machines that have Asp.net Core 2, all our API specific routes that return data (not Views) work.
A developer recently got the API set up on his new Windows 10 machine. When we run the routes in Postman or swagger, he can only get one particular `GET` route to work (extra tidbit: it so happens this is the only route that does not call `EntityFramework`).
All other routes return a `404 Not Found` as if the URLs don’t exist. It’s not our code returning the `404`, it’s the platform itself.
None of our code is being executed since the 404 is being returned by the server, so no useful logging either.
I also deployed and tested it on a Win server 2016 machine, and getting the same exact issue.
The last thing I did on this server was install the Asp.net Core 2 SDK but had no effect.
some code:
```
[Produces("application/json")]
[Route("api/v1/Signatures/Request")]
public class SignatureRequestController : ControllerBase
{
    [HttpPost]
    [Route("")]
    public async Task CreateSignatureRequestAsync([FromBody]SignatureRequest signatureRequest)
    // ...
}
```
example POST url:
```
http://localhost/My.API/api/v1/signatures/request/
```
example json body:
```
{
"clientApplicationInstanceId" : "4318704B-7F90-4CAE-87A9-842F2925FE45",
"facilityId" : "PT",
"contact": "<EMAIL>",
"documentInstanceGuid" : "cc46c96f-cd78-448e-a376-cb4220d49a52",
"messageType" : "1",
"localeId": 1,
"field":
{
"fieldId": 45,
"signatureType": "4",
"displayName": "Short Display Name from iMed4Web",
"signerNameCaption": "signer name caption",
"signerAddressCaption": "address caption",
"signerCityStateZipCaption": "city state zip caption"
},
"documentPreviewHtml": "too long to show..."
}
```<issue_comment>username_1: You should apply the `Route` attribute to controller methods (actions), and you can also specify the HTTP verb associated with each route.
Take a look at [this documentation](https://learn.microsoft.com/en-us/aspnet/core/mvc/controllers/routing#attribute-routing), it will help you.
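For instance, a minimal sketch (the action body and return type are assumptions, since the question's excerpt omits them):
```
[Produces("application/json")]
[Route("api/v1/signatures/request")]
public class SignatureRequestController : ControllerBase
{
    // POST api/v1/signatures/request
    [HttpPost]
    public IActionResult CreateSignatureRequest([FromBody] SignatureRequest signatureRequest)
    {
        // ... persist the signature request here ...
        return Ok();
    }
}
```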
Upvotes: 2 <issue_comment>username_2: This happened to me with the newest Core 3.1 Asp.Net Core Web App template. The issue was that the default wireup was this:
```
app.UseEndpoints(endpoints => {
endpoints.MapRazorPages();
endpoints.MapControllers();
});
```
But, it needed to be this:
```
app.UseEndpoints(endpoints => {
endpoints.MapRazorPages();
endpoints.MapControllers();
endpoints.MapControllerRoute(
name: "default",
pattern: "{controller=Home}/{action=Index}/{id?}"
);
});
```
The previous answer led me to the solution, but I wanted to post a more explicit answer in case it is helpful to others.
Upvotes: 2
|
2018/03/16
| 671
| 1,884
|
<issue_start>username_0: I have this data frame:
```
dftt <- data.frame(values = runif(11*4,0,1),
col = c(rep(2,4),
rep(1,4),
rep(5,9*4)),
x = rep(1:4, 11),
group=rep(factor(1:11), each=4)
)
> head(dftt)
values col x group
1 0.2576930 2 1 1
2 0.4436522 2 2 1
3 0.5258673 2 3 1
4 0.2751512 2 4 1
5 0.5941050 1 1 2
6 0.7596024 1 2 2
```
I plot it like this:
```
ggplot(dftt, aes(x=x, y=values, group=group, color=col)) + geom_line()
```
[](https://i.stack.imgur.com/45yO7.png)
How can I change the line colors? I want the first two lines to be black and red (in the above plot they are both black) and the remaining lines to be gray (they are light blue).<issue_comment>username_1: Is there a reason why you even try adding colors via `col`? It seems strange to me, as `col` has fewer distinct values than `group`. If you just want to color the groups, this would be the correct way:
```
ggplot(dftt, aes(x=x, y=values, group=group, color=group)) +
geom_line() +
scale_color_manual(values = c("black",
"red",
rep("gray", 9)))
```
[](https://i.stack.imgur.com/SdkV2.png)
Upvotes: 2 <issue_comment>username_2: Convert your `col` column to a factor and then add `scale_color_manual` to your plot function.
```
library(ggplot2)
dftt$col<-as.factor(dftt$col)
ggplot(dftt, aes(x=x, y=values, group=group, color=col)) + geom_line(size=1.2) +
scale_color_manual(values=c( "black", "red", "blue"))
```
You may need to rearrange the color scale to match your choice of color (1,2 and 5)
Upvotes: 2 [selected_answer]
|
2018/03/16
| 412
| 1,264
|
<issue_start>username_0: I want to hide the first word (ILS) from the span using CSS. I have a div with no class, and inside that div there is a span. I just want to hide this word (ILS) from the span. Please guide me where I am wrong.
**Text displayed like this on the site:**
[](https://i.stack.imgur.com/WGwQ0.png)
**Here is my div:**
```
##### Test Sample
Some Text Here
ILS 0.10 / מכירה
```
**Css :**
```
.purhstdtls-delvry-timesec>div:nth-child(4)>span:first-word{
display:none;
}
```<issue_comment>username_1: Preferably you would add an id to the span, or send the right thing from the backend in the first place, but you can simply replace the content:
```html
$(document).ready(
function(){
$('span').text($('span').text().replace('ILS',''));
});
##### Test Sample
Some Text Here
ILS 0.10 / מכירה
```
It's a dirty solution but it'll work.
Upvotes: 2 <issue_comment>username_2: You can add a wrapper around the text you want hidden (in your case ILS) and hide it via inline style. Try the following code:
```
$(document).ready(function(){
$("span:contains(ILS)").each(function(k,v){
var d = $(v).html();
d = d.replace("ILS", "ILS");
$(v).html(d);
});
});
```
Upvotes: 0
|
2018/03/16
| 1,388
| 5,495
|
<issue_start>username_0: I have a UserControl that must respond to TouchUp events and this sits within a Viewbox which needs to be panned and scaled with pinch manipulation. Touch events on the control are handled fine. However pinch manipulations only scale the ViewPort if both pinch points are contained entirely within either the user control or the Viewport space around it. If the pinch straddles the user control boundary then the ManipulationDelta loses one of the points and reports a scale of (1,1).
If I remove IsManipulationEnabled="True" from the control handling the TouchUp event then the scaling works but the touch event doesn’t fire.
What can I do to retain the manipulation across the ViewPort whilst also handling the touch event in the user control?
[](https://i.stack.imgur.com/DNBjV.gif)
[Test Solution](http://www.andyreedphotography.co.uk/Temp/TouchTest.zip "Test Solution")
```
```
Handlers in code-behind:
```
private void OnTouchUp(object sender, TouchEventArgs e)
{
TimeTextBlock.Text = DateTime.Now.ToString("H:mm:ss.fff");
}
private void OnManipulationStarting(object sender, ManipulationStartingEventArgs e)
{
e.ManipulationContainer = this;
}
private void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
if (Viewbox == null)
{
return;
}
ManipulationDelta delta = e.DeltaManipulation;
ScaleTextBlock.Text = $"Delta Scale: {delta.Scale}";
MatrixTransform transform = Viewbox.RenderTransform as MatrixTransform;
if (transform == null)
{
return;
}
Matrix matrix = transform.Matrix;
Point position = ((FrameworkElement)e.ManipulationContainer).TranslatePoint(e.ManipulationOrigin, Viewbox);
position = matrix.Transform(position);
matrix = MatrixTransformations.ScaleAtPoint(matrix, delta.Scale.X, delta.Scale.Y, position);
matrix = MatrixTransformations.PreventNegativeScaling(matrix);
matrix = MatrixTransformations.Translate(matrix, delta.Translation);
matrix = MatrixTransformations.ConstrainOffset(Viewbox.RenderSize, matrix);
transform.Matrix = matrix;
}
```
Supporting class:
```
public static class MatrixTransformations
{
///
/// Prevent the transformation from being offset beyond the given size rectangle.
///
///
///
///
public static Matrix ConstrainOffset(Size size, Matrix matrix)
{
double distanceBetweenViewRightEdgeAndActualWindowRight = size.Width * matrix.M11 - size.Width + matrix.OffsetX;
double distanceBetweenViewBottomEdgeAndActualWindowBottom = size.Height * matrix.M22 - size.Height + matrix.OffsetY;
if (distanceBetweenViewRightEdgeAndActualWindowRight < 0)
{
// Moved in the x-axis too far left. Snap back to limit
matrix.OffsetX -= distanceBetweenViewRightEdgeAndActualWindowRight;
}
if (distanceBetweenViewBottomEdgeAndActualWindowBottom < 0)
{
// Moved in the y-axis too far up. Snap back to limit
matrix.OffsetY -= distanceBetweenViewBottomEdgeAndActualWindowBottom;
}
// Prevent positive offset
matrix.OffsetX = Math.Min(0.0, matrix.OffsetX);
matrix.OffsetY = Math.Min(0.0, matrix.OffsetY);
return matrix;
}
///
/// Prevent the transformation from performing a negative scale.
///
///
///
public static Matrix PreventNegativeScaling(Matrix matrix)
{
matrix.M11 = Math.Max(1.0, matrix.M11);
matrix.M22 = Math.Max(1.0, matrix.M22);
return matrix;
}
///
/// Translate the matrix by the given vector to providing panning.
///
///
///
///
public static Matrix Translate(Matrix matrix, Vector vector)
{
matrix.Translate(vector.X, vector.Y);
return matrix;
}
///
/// Scale the matrix by the given X/Y factors centered at the given point.
///
///
///
///
///
///
public static Matrix ScaleAtPoint(Matrix matrix, double scaleX, double scaleY, Point point)
{
matrix.ScaleAt(scaleX, scaleY, point.X, point.Y);
return matrix;
}
}
```<issue_comment>username_1: So, I'm not a WPF programmer, but I have a suggestion/workaround which could possibly work for you.
You could code it as follows:
* set IsManipulationEnabled="True" (in this case OnTouchUp isn't fired for the grid colored in LightGreen)
* Set `OnTouchUp` to fire on either `Viewbox x:Name="Viewbox"` or the `Grid` above this `Viewbox` (rather than for the `800x800 Grid`)
* So now OnTouchUp would be fired whenever you touch anywhere in the Viewbox (not just inside the LightGreen area)
* When OnTouchUp is now fired, just check if the co-ordinates are in the region of LightGreen box. If YES-> update the time, if no, leave the time as it is.
I understand this is a workaround. Still posted an answer, in case it could prove useful.
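A minimal sketch of that hit test (the handler wiring and control names here are assumptions):
```
private void OnViewboxTouchUp(object sender, TouchEventArgs e)
{
    // GreenGrid stands for the 800x800 grid; translate the touch point into its space
    Point p = e.GetTouchPoint(GreenGrid).Position;
    var bounds = new Rect(0, 0, GreenGrid.ActualWidth, GreenGrid.ActualHeight);
    if (bounds.Contains(p))
    {
        TimeTextBlock.Text = DateTime.Now.ToString("H:mm:ss.fff");
    }
}
```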
Upvotes: 1 <issue_comment>username_2: I am not sure the sample you posted fully reflects your code, but from what I see: you do not handle ManipulationCompleted or LostMouseCapture, and you never call CaptureMouse()/ReleaseMouseCapture(), so when the manipulation goes outside the window you lose it. Search "mouse capture" on this repo; you will see that, even without manipulation events, this is quite complicated: <https://github.com/TheCamel/ArchX/search?utf8=%E2%9C%93&q=mouse+capture&type=>
Upvotes: 0
|
2018/03/16
| 727
| 2,542
|
<issue_start>username_0: I am troubleshooting a solution in which I am setting up an HA cluster. Although I know the ports needed for the application to perform failover and failback, somehow the dockerized solution is not working. I suspect that there are some ports that I do not know about yet.
Currently, my `EXPOSE` statement says:
```
EXPOSE 8080 61616 5672 61613 5445 1883
```
I also start my docker containers with
```
docker run --network host -p 8080:8080 -p 61616:61616 -p 5672:5672 -p 61613:61613 -p 5445:5445 -p 1883:1883
```
But for the sake of troubleshooting, I want to expose ALL ports.
I tried something like:
```
EXPOSE 1-65535
```
But this gives an ERROR.
What is the best way I can expose ALL ports of the docker container?<issue_comment>username_1: When running using `--network host` there is no need to map the ports. All the docker container ports will be available since the network host mode makes the container use the host's network stack.
Also the `EXPOSE 8080 61616 5672 61613 5445 1883` is not needed. This instruction doesn't do anything. It is just a way to document which ports need to be mapped.
In short, running `docker run --network host ...` will expose all the container ports.
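For example (the image name is a placeholder):
```
# with host networking, every port the app listens on is reachable on the
# host directly; any -p/--publish flags would simply be ignored
docker run --network host myimage
```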
Upvotes: 7 [selected_answer]<issue_comment>username_2: >
> The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
>
>
>
More information on [the Docker documentation portal](https://docs.docker.com/network/host/).
Upvotes: 3 <issue_comment>username_3: Using `host` networking will expose almost all the ports, just as if you were running the application on the host machine. If port flags are used when running in host networking mode, those flags are ignored with a warning:
>
> Note: Given that the container does not have its own IP-address when using host mode networking, port-mapping does not take effect, and the -p, --publish, -P, and --publish-all option are ignored, producing a warning instead:
>
>
>
```
WARNING: Published ports are discarded when using host network mode
```
Make sure your host is a Linux host because host networking is only supported by Linux hosts.
>
> The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
>
>
>
This is mentioned in Docker documentation it self. [View particular Documentation](https://docs.docker.com/network/host/)
Upvotes: 2
|
2018/03/16
| 550
| 1,859
|
<issue_start>username_0: In vb.net or c# WinForms, how would you make a form topmost over the other forms in the project, but not over the windows of other applications?
Using `form.topmost = True` puts the form above other applications.
EDIT
I am **NOT** looking for a splash screen.
Below is an example of the intended behavior of this form. It remains on top of everything else in the application, and you can interact with it and the form behind it.
[](https://i.stack.imgur.com/OoToP.png)<issue_comment>username_1: You can use the SetWindowPos method to bring a window to the front without activating it. You could call this in a timer to keep it on top (but that will probably put it in front of other apps, so you would only want to do that if you were the activate application) or you would have to detect when other forms fire the Activated event and then call this.
```
internal const int SWP_NOMOVE = 0x0002;
internal const int SWP_NOSIZE = 0x0001;
internal const int SWP_SHOWWINDOW = 0x0040;
internal const int SWP_NOACTIVATE = 0x0010;
internal const int SWP_NOOWNERZORDER = 0x0200;
[DllImport("user32.dll", EntryPoint = "SetWindowPos")]
private static extern IntPtr SetWindowPos(IntPtr hWnd, int hWndInsertAfter, int x, int Y, int cx, int cy, int wFlags);
```
Call it with:
`SetWindowPos(form.Handle,0,0,0,0,0,SWP_NOMOVE|SWP_NOSIZE|SWP_SHOWWINDOW|SWP_NOACTIVATE);`
Upvotes: 0 <issue_comment>username_2: To bring a form on top of other forms within an application, you can use the `BringToFront` method.
```
Application.OpenForms["MyForm"].BringToFront();
```
The other forms will be accessible to the user.
Upvotes: 1 <issue_comment>username_3: The topmost=true should work fine for your application. There must be a user error occurring.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 545
| 1,765
|
<issue_start>username_0: Doing the following:
```
const toto = Object.freeze({a: 1});
const tata = Object.assign({}, toto);
tata.a = 3;
console.log(toto, Object.isFrozen(toto)); // {a: 1} true
console.log(tata, Object.isFrozen(tata)); // {a: 3} false
```
raises this error when compiling:
>
> error TS2540: Cannot assign to 'a' because it is a constant or a read-only property.
>
>
>
Even though the compilation succeeds and the code works as expected.
Is there a way to not get this error?
Is there a better way to copy a frozen object into a non-frozen version?<issue_comment>username_1: Would you try this?
```
const tata = Object.assign({}, Object.create(toto));
```
Upvotes: 1 <issue_comment>username_2: This is a limitation on the type definition of `assign`:
```
assign<T, U>(target: T, source: U): T & U;
```
The definition will return `T & U` as the result type, and this will keep all details of both `T` and `U` including whether their fields are readonly or not (in fact all field should be mutable after the call to `assign`)
To make things more complicated there is no way to remove `readonly` from the type up until 2.8 (unreleased at the time of writing, will be released in March 2018 but you can get it via `npm install -g typescript@next`).
In typescript 2.8 we can do the following:
```
type Mutable<T> = { -readonly [P in keyof T]: T[P] };
const tata: Mutable<typeof toto> = Object.assign({}, toto);
tata.a = 3;
```
Until 2.8 you might be better off having a separate type for the unfrozen version, either explicitly or by having a variable that holds the unfrozen version:
```
const totoProto = {a: 1};
const toto = Object.freeze(totoProto);
const tata: typeof totoProto = Object.assign({}, toto);
tata.a = 3;
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 1,482
| 4,674
|
<issue_start>username_0: I have a CentOS server on which GCC 4.8.2 is installed. Now I want to install GCC 4.8.5 on the same server; the requirement is that I need two GCC versions on the same server. How can I proceed to install GCC 4.8.5?<issue_comment>username_1: I think the best way is to use Docker. Any other way would probably be quite painful (unless CentOS has some facility for that). You may try this container <https://hub.docker.com/r/ryccoo/gcc-4.8/tags/>, but it's not specified what version it is.
Here's the Dockerfile for 4.8; maybe you'd manage to make it use 4.8.5: <https://github.com/bincrafters/docker-centos-gcc48-i386/blob/master/Dockerfile>
If none of those works, just get a container with some OS that has a package for this version and install it. Remember that you need further steps to save the state of the image. I hope you don't have to compile it; compiling GCC sucks unless you have the right toolchain.
Remember that containers are not persistent, so the directory with the sources has to be attached separately. The command line for compiling them would be something like this:
```
docker run -u root -it -v /local/path/to/source:/mounting/point/in/container --rm \
name-of-container:version -w /mounting/point/in/container /bin/bash -c \
"make clean && make; chown -R 1000:1000 ."
```
Upvotes: 0 <issue_comment>username_2: I would consider using environment modules: <http://modules.sourceforge.net>
On our systems we have quite a few compilers installed and can switch between them quite easily via commands similar to: `module add gcc/6.2.0`.
The general setup our administrators provided consists of:
1. A couple directories containing the module files, which are text files containing the directives required to change the relevant shell search paths. (/sw/RedHat-7-x86\_64/common/modules, for example)
2. Corresponding directories into which the actual software is installed. (/sw/RedHat-7-x86\_64/common/local)
The directories are organized by software/version. Thus for the case of gcc/6.2.0, the module file is comprised of:
```
#%Module 1.0
module add gmp/6.1.1
module add mpfr/3.1.4
module add mpc/1.0.3
module add ppl/1.2
module add cloog/0.18.4
module add dejagnu/1.6
module add autogen/5.18.7
module add isl/0.16.1
prepend-path PATH /opt/local/stow/gcc-6.2.0/bin
prepend-path MANPATH /opt/local/stow/gcc-6.2.0/share/man
prepend-path CPATH /opt/local/stow/gcc-6.2.0/include
prepend-path LIBRARY_PATH /opt/local/stow/gcc-6.2.0/lib64:/opt/local/stow/gcc-6.2.0/lib
prepend-path LD_RUN_PATH /opt/local/stow/gcc-6.2.0/lib64:/opt/local/stow/gcc-6.2.0/lib
prepend-path LD_LIBRARY_PATH /opt/local/stow/gcc-6.2.0/lib64:/opt/local/stow/gcc-6.2.0/lib
```
There are also module files for gmp and friends and each of those software packages was installed using a configure/cmake/whatever invocation which included an install prefix of /opt/local/stow/$package. (In the case of gcc/6.2.0, this invocation was something along the lines of: `./configure --prefix=/opt/local/stow/gcc-6.2.0`)
It is possible to make the module files quite elaborate; thus we have some which automatically detect the executable directories, libraries, manual/info pages, headers, pkgconfig directives, python virtual environments, etc. Here is an example for a python package living in its own virtual environment:
```
#%Module
set NAME [module-info name]
set MODULE_FILE_AUTHOR "bob"
set MODULE_FILE_AUTHOR_EMAIL "<EMAIL>"
set MODULE_FILE_MAINTAINER "<EMAIL>"
module-whatis "labnote: Make a lab notebook!"
if [ module-info mode load ] {
if {! [info exists ::env(MODULE_PRE)] } {
setenv COMMON "/cbcb/sw/RedHat-7-x86_64/common"
setenv MODULE_PRE "$::env(COMMON)/local"
}
}
set DIR $::env(MODULE_PRE)/[module-info name]
### Add pre-requisites here
module add Python3/common/3.6.0
### Add extra variables here
### Define a simple help message
proc ModulesHelp { } {
global NAME MODULE_FILE_AUTHOR MODULE_FILE_AUTHOR_EMAIL MODULE_FILE_MAINTAINER
puts "The $NAME module file was installed by $MODULE_FILE_AUTHOR ($MODULE_FILE_AUTHOR_EMAIL)
and is maintained by $MODULE_FILE_MAINTAINER."
}
set is_module_rm [module-info mode remove]
###
# Add your executable to PATH.
###
if { [file isdirectory $DIR/bin] == 1} {
prepend-path PATH $DIR/bin
}
###
# Add an include directory
###
if { [file isdirectory $DIR/include] == 1} {
prepend-path CPATH $DIR/include
}
###
# Set up library paths
###
if { [file isdirectory $DIR/lib] == 1} {
prepend-path LIBRARY_PATH $DIR/lib
prepend-path LD_RUN_PATH $DIR/lib
}
###
# Python virtual environments
###
if { [file isfile $DIR/bin/activate] == 1} {
setenv VIRTUAL_ENV $DIR
}
```
Upvotes: 1
|
2018/03/16
| 493
| 1,585
|
<issue_start>username_0: I need to add spaces to the end of a column I have called NPI (nvarchar(20)). NPI is always 10 digits, but the requirements for the report want the first 10 digits to be the NPI followed by 10 spaces (text file formatting issues I assume). I have tried the following:
```
cast([NPI] as nvarchar(20)) + ' '
cast([NPI] as nvarchar(20)) + Space(10)
```
However, the result set does not change. It just shows the 10 digit NPI and spaces aren't included.<issue_comment>username_1: Add the space inside the cast
```
cast([NPI] + ' ' as nchar(20))
```
*You're right @username_3, I was fooled by my editor.*
This works but I am using MariaDb (MySql) so it's maybe not relevant now.
```
select concat([NPI] , ' ')
```
Upvotes: 0 <issue_comment>username_2: This seems to work. The '\*' is added to show that the spaces are present:
```
print cast([NPI] as nchar(20)) + '*'
```
Here are a couple of other cheesy ways to add padding...
```
print substring([NPI] + ' ',1,20) + '*'
print [NPI] + space(20 - len([NPI])) + '*'
```
Upvotes: 0 <issue_comment>username_3: It sounds like you are actually using SQL Server instead of MySQL. VARCHAR() is for variable length strings and it will trim end whitespace. Cast instead to char/nchar for the desired output. It won't look like it in SSMS, so check datalength to confirm nchar(20) = 20 bytes \* 2 for unicode.
```
SELECT CAST([NPI] AS NCHAR(20)) AS formattedNPI,
DATALENGTH(CAST([NPI] AS NCHAR(20))) AS confirmation
FROM your_table
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 2,299
| 6,996
|
<issue_start>username_0: I am writing a script to query the Bitbucket API and delete SNAPSHOT artifacts that have never been downloaded. This script is failing because it gets ALL snapshot artifacts; the select for the number of downloads does not appear to be working.
What is wrong with my `select` statement to filter objects by the number of downloads?
Of course the more direct solution here would be if I could just query the Bitbucket API with a filter. To the best of my knowledge the API does not support filtering by downloads.
My script is:
```
#!/usr/bin/env bash
curl -X GET --user "me:mykey" "https://api.bitbucket.org/2.0/repositories/myemployer/myproject/downloads?pagelen=100" > downloads.json
# get all values | reduce the set to just be name and downloads | select entries where downloads is zero | select entries where name contains SNAPSHOT | just get the name
#TODO i screwed up the selection somewhere its returning files that contain SNAPSHOT regardless of number of downloads
jq '.values | {name: .[].name, downloads: .[].downloads} | select(.downloads==0) | select(.name | contains("SNAPSHOT")) | .name' downloads.json > snapshots_without_any_downloads.js
#unique sort, not sure why jq gives me multiple values
sort -u snapshots_without_any_downloads.js | tr -d '"' > unique_snapshots_without_downloads.js
cat unique_snapshots_without_downloads.js | xargs -t -I % curl -Ss -X DELETE --user "me:mykey" "https://api.bitbucket.org/2.0/repositories/myemployer/myproject/downloads/%" > deleted_files.txt
```
A deidentified sample of the raw input from the API is:
```
{
"pagelen": 10,
"size": 40,
"values": [
{
"name": "myproject_1.1-SNAPSHOT_0210f77_mc_3.5.0.zip",
"links": {
"self": {
"href": "https://api.bitbucket.org/2.0/repositories/myemployer/myproject/downloads/myproject_1.1-SNAPSHOT_0210f77_mc_3.5.0.zip"
}
},
"downloads": 2,
"created_on": "2018-03-15T17:50:00.157310+00:00",
"user": {
"username": "me",
"display_name": "me",
"type": "user",
"uuid": "{3051ec5f-cc92-4bc3-b291-38189a490a89}",
"links": {
"self": {
"href": "https://api.bitbucket.org/2.0/users/me"
},
"html": {
"href": "https://bitbucket.org/me/"
},
"avatar": {
"href": "https://bitbucket.org/account/me/avatar/32/"
}
}
},
"type": "download",
"size": 430894
},
{
"name": "myproject_1.1-SNAPSHOT_thanks_for_the_reminder_charles_duffy_mc_3.5.0.zip",
"links": {
"self": {
"href": "https://api.bitbucket.org/2.0/repositories/myemployer/myproject/downloads/myproject_1.1-SNAPSHOT_0210f77_mc_3.5.0.zip"
}
},
"downloads": 0,
"created_on": "2018-03-15T17:50:00.157310+00:00",
"user": {
"username": "me",
"display_name": "me",
"type": "user",
"uuid": "{3051ec5f-cc92-4bc3-b291-38189a490a89}",
"links": {
"self": {
"href": "https://api.bitbucket.org/2.0/users/me"
},
"html": {
"href": "https://bitbucket.org/me/"
},
"avatar": {
"href": "https://bitbucket.org/account/me/avatar/32/"
}
}
},
"type": "download",
"size": 430894
},
{
"name": "myproject_1.0_mc_3.5.1.zip",
"links": {
"self": {
"href": "https://api.bitbucket.org/2.0/repositories/myemployer/myproject/downloads/myproject_1.1-SNAPSHOT_0210f77_mc_3.5.1.zip"
}
},
"downloads": 5,
"created_on": "2018-03-15T17:49:14.885544+00:00",
"user": {
"username": "me",
"display_name": "me",
"type": "user",
"uuid": "{3051ec5f-cc92-4bc3-b291-38189a490a89}",
"links": {
"self": {
"href": "https://api.bitbucket.org/2.0/users/me"
},
"html": {
"href": "https://bitbucket.org/me/"
},
"avatar": {
"href": "https://bitbucket.org/account/me/avatar/32/"
}
}
},
"type": "download",
"size": 430934
}
],
"page": 1,
"next": "https://api.bitbucket.org/2.0/repositories/myemployer/myproject/downloads?pagelen=10&page=2"
}
```
The output I want from this snippet is `myproject_1.1-SNAPSHOT_thanks_for_the_reminder_charles_duffy_mc_3.5.0.zip` - that artifact is a SNAPSHOT and has zero downloads.
I have used this intermediate step to do some debugging:
```
jq '.values | {name: .[].name, downloads: .[].downloads} | select(.downloads>0) | select(.name | contains("SNAPSHOT")) | unique' downloads.json > snapshots_with_downloads.js
jq '.values | {name: .[].name, downloads: .[].downloads} | select(.downloads==0) | select(.name | contains("SNAPSHOT")) | .name' downloads.json > snapshots_without_any_downloads.js
#this returns the same values for each list!
diff unique_snapshots_with_downloads.js unique_snapshots_without_downloads.js
```
This adjustment gives a cleaner, unique structure; it suggests that there's some sort of splitting or streaming aspect of `jq` that I do not fully understand:
```
#this returns a "unique" array like I expect, adding select to this still does not produce the desired outcome
jq '.values | [{name: .[].name, downloads: .[].downloads}] | unique' downloads.json
```
The data after this step looks like this. It just removed the cruft I didn't need from the raw API response:
```
[
{
"name": "myproject_1.0_2400a51_mc_3.4.0.zip",
"downloads": 0
},
{
"name": "myproject_1.0_2400a51_mc_3.4.1.zip",
"downloads": 2
},
{
"name": "myproject_1.1-SNAPSHOT_391f4d5_mc_3.5.0.zip",
"downloads": 0
},
{
"name": "myproject_1.1-SNAPSHOT_391f4d5_mc_3.5.1.zip",
"downloads": 2
}
]
```
|
2018/03/16
| 478
| 1,675
|
<issue_start>username_0: So I'm fairly new to Java. Anyway, I have implemented a simple Java program in IntelliJ which runs in the IntelliJ terminal; it basically asks the user to input some details and then records them along with the current time.
I now want to style it using HTML/CSS and convert it into a webpage, where the user would enter the details into input boxes etc.
I'm not sure how to approach this; what would be my best shot?
Also, the user input is currently being stored in a variable; would I have to use a database instead for a webpage?
Thanks.
|
2018/03/16
| 474
| 1,679
|
<issue_start>username_0: I'm using the autocomplete input of Materialize, but I've noticed that the content below is pushed down when I do the search. I've tried to use the position: absolute property, but it doesn't work; it just makes the content overlap the input. What should I do?
This is my code:
```
Servicios
Agregar
```
[](https://i.stack.imgur.com/AsEmB.png)
[](https://i.stack.imgur.com/PsxG4.png)<issue_comment>username_1: I've resolved it, the autocomplete dynamic ul has a static position settled by default, just set it to absolute and the problem will be solved
Upvotes: 2 [selected_answer]<issue_comment>username_2: The only change you need to make is in your .css file: add the property `position: absolute !important;` to the "dropdown-content" class.
You do not need to change the other files.
See the example bellow:
```js
$(document).ready(function() {
$('#input').autocomplete({
data: {
"Apple": null,
"Microsoft": null,
"www.victorhnogueira.com.br": null,
"Google": 'https://placehold.it/250x250'
},
limit: 10, // The max amount of results that can be shown at once. Default: Infinity.
onAutocomplete: function(val) {
// Callback function when value is autcompleted.
},
minLength: 1, // The minimum length of the input for the autocomplete to start. Default: 1.
});
});
```
```css
/*THIS IS WHAT YOU NEED TO ADD TO YOUR CODE*/
.dropdown-content {
position: absolute !important;
}
```
```html
search
div bellow
```
Upvotes: 0
|
2018/03/16
| 706
| 1,922
|
<issue_start>username_0: I have a simple dataframe:
```
df = pd.DataFrame({'id': ['a','a','a','b','b'],'value':[0,15,20,30,0]})
df
id value
0 a 0
1 a 15
2 a 20
3 b 30
4 b 0
```
And I want a pivot table with the number of values greater than zero.
I tried this:
```
raw = pd.pivot_table(df, index='id',values='value',aggfunc=lambda x:len(x>0))
```
But returned this:
```
value
id
a 3
b 2
```
What I need:
```
value
id
a 2
b 1
```
I read lots of solutions using groupby and filter. Is it possible to achieve this only with the pivot\_table command? If not, what is the best approach?
Thanks in advance
**UPDATE**
Just to make it clearer why I am avoiding the filter solution: in my real, more complex df, I have other columns, like this:
```
df = pd.DataFrame({'id': ['a','a','a','b','b'],'value':[0,15,20,30,0],'other':[2,3,4,5,6]})
df
id other value
0 a 2 0
1 a 3 15
2 a 4 20
3 b 5 30
4 b 6 0
```
I need to sum the column 'other', but when I filter I get this:
```
df=df[df['value']>0]
raw = pd.pivot_table(df, index='id',values=['value','other'],aggfunc={'value':len,'other':sum})
other value
id
a 7 2
b 5 1
```
Instead of:
```
other value
id
a 9 2
b 11 1
```<issue_comment>username_1: You need `sum` to count the `True` values created by the condition `x>0`:
```
raw = pd.pivot_table(df, index='id',values='value',aggfunc=lambda x:(x>0).sum())
print (raw)
value
id
a 2
b 1
```
As @Wen mentioned, another solution is:
```
df = df[df['value'] > 0]
raw = pd.pivot_table(df, index='id',values='value',aggfunc=len)
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: You can filter the dataframe before pivoting:
```
pd.pivot_table(df.loc[df['value']>0], index='id',values='value',aggfunc='count')
```
Upvotes: 1
|
2018/03/16
| 941
| 3,673
|
<issue_start>username_0: I am researching a standard sample from the Pentaho DI package: `GetXMLData - Read parent children rows`. It reads `parent` rows and `children` rows separately from the same XML input. I need to do the same and update two different sheets of the same MS Excel document.
[](https://i.stack.imgur.com/xNspa.png)
My understanding is that the normal way to achieve this is to put the first sequence in one transformation file with XML Output or Writer, the second into a second one, and at the end create a job chaining from start through the 1st and 2nd transformations.
My problems are:
* When I try to chain the above sequences I lose the content of the first updated Excel sheet in the final document;
* I need to end up with just one file, either Job or Transformation, without dependencies (in the case of the above scenario I would have 1 `KJB` job + 2 `KTR` transformation files).
Questions are:
* Is it possible to join the 2 sequences from the above sample with some `wait` node before starting to update the 2nd Excel sheet?
* If the above doesn't work: is it possible to embed the transformations in the job instead of referencing them from external files?
* And extra question: What is better to use: Excel Output or Excel Writer?
=================
**UPDATE**:
Based on @AlainD's proposal, I have tried to put a `Block` node in between. Here is the result:
Looks like the `Block` step can be an option, but somehow it doesn't work as expected with the `Excel Output / Writer` nodes (or I am doing something wrong). What I have observed is that Pentaho tries to execute the steps after `Block` before the Excel file is properly closed by the previous step. That leads to one of the following: I either get an Excel file with one empty sheet, or the generated result file is malformed.
My input XML file (from Pentaho distribution) & test playground transformation are: [HERE](https://drive.google.com/drive/folders/1_KxHXLmguIE9ofnYT6aOsy926o7HhbhS?usp=sharing)
NOTE: While playing do not forget to remove generated MS Excel files between runs.
Screenshot:
[](https://i.stack.imgur.com/ZgAWm.png)
Any suggestions how to fix my transformation?<issue_comment>username_1: Question 1: Yes, the step you are looking after is named `Block until this (other) step finishes`, or `Blocking Step (untill all rows are processed)`.
Question 2: Yes, you can pass the rows from one transformation to another via the job. But it would be wiser to first produce the parent sheet and, when finished, read it again in the second transformation. You can also pass the rows to a sub-transformation, or use other architecture strategies...
Question 3: (Short answer) The `Excel Writer` appends data (new sheets or new rows) to an existing Excel file, while the `Excel Output` creates and feeds a one-sheet Excel file.
Upvotes: 0 <issue_comment>username_1: The pattern goes as follow:
* read data: 1 row per children, with the parent data in one or more column
* group the data : 1 row per parent, forget the children, keep the parent data. Transform and save as needed.
* back from the original data, lookup each row (children) and fetch the parent in the grouped data flow.
* the result is one row per children and the needed column of the transformed parent. Transform and save as needed.
It is a pattern: you may want to change the flow and/or sort to speed it up. But it will not lock, nor fill up the memory: the `group by` and `lookup` are pretty reliable.
[](https://i.stack.imgur.com/LB0gL.png)
Upvotes: 2 [selected_answer]
|
2018/03/16
| 481
| 1,346
|
<issue_start>username_0: I want to get the \_id from my database, but as soon as I do
```
var test = Exemple.findOne({_id: test_id});
```
I get undefined
but when I do
```
var test = Exemple.find({}).fetch()
```
I get all the data of the collection, like this.
```
{ _id: '17SRlRpRSzP339E41A',
creationIP: 'local',
state:
{ label: 'never connected',
date: Wed Mar 14 2018 12:20:08 GMT+0100 (CET) },
language: 'en',
batch: '9zLKCkvSAyxQ4jtDG7_32018',
creationDate: Wed Mar 14 2018 12:20:08 GMT+0100 (CET) } ]
```
I only want to get the \_id and store it in a variable like this:
```
var test = Exemple.findOne({_id: test_id});
```<issue_comment>username_1: ```
(async function() {
const test = await Exemple.findOne({_id: test_id});
console.log(test)
})()
```
As pointed out by Tom, you have to await the promise to fulfill.
Upvotes: -1 <issue_comment>username_2: This finds the item in the database with `_id` equal to `test_id`:
```
var test_id = 'abc';
var test = Exemple.findOne({_id: test_id});
```
If no item has `_id` equal to this, it will return `null`.
To get the id of an item you can do:
```
var test = Exemple.findOne({});
var docId = test._id;
console.log(docId);
```
This will choose one random document to return however. You probably want to query for a specific item.
Upvotes: 0
|
2018/03/16
| 502
| 1,787
|
<issue_start>username_0: I have a problem with the Microsoft.TeamFoundation.WorkItemTracking.WebApi.WorkItemTrackingHttpClient class.
I cannot update a work item in my VSTS because the class sends an empty body in an HTTP PATCH request. Am I doing something wrong?
**Test code:**
```
private readonly WorkItemTrackingHttpClient _workItemTrackingHttpClient;
public RestApi(string baseUrl, string pat)
{
var vssConnection = new VssConnection(new Uri(baseUrl), new VssBasicCredential(string.Empty, pat));
_workItemTrackingHttpClient = vssConnection.GetClient<WorkItemTrackingHttpClient>();
var document = new JsonPatchDocument();
document.Add(new JsonPatchOperation()
{
Operation = Operation.Add,
Path = "/fields/Microsoft.VSTS.Scheduling.Effort",
Value = 1
});
var workItem = _workItemTrackingHttpClient.UpdateWorkItemAsync(document, 233843).Result;
}
```
**Throws:** VssServiceException: You must pass a valid patch document in the body of the request.
I used Fiddler to analyze the request and found that the body is empty. The weird thing is that it worked in February.
[Raw http patch request screen](https://i.stack.imgur.com/G6STp.png)
|
2018/03/16
| 467
| 1,925
|
<issue_start>username_0: I have the following entry in `appSettings`:
```
```
In code, I tried to convert this CSV value into a list of integers as:
```
var blackList = ConfigurationManager.AppSettings["blackListedIDs"]
.Split(',')
.Select(n => int.Parse(n.Trim()))
.ToList();
```
But it catches an error:
```
Message = "Input string was not in a correct format."
```
It is due to the empty string value in the `appSettings` key. When I fill it with some data, for example, everything is OK. How can I handle the empty string value inline in the simplest way, maybe by adding handling to the LINQ expression above?<issue_comment>username_1: You can add a filter to retain only the non-empty strings, like this:
```
var blackList = ConfigurationManager.AppSettings["blackListedIDs"]
.Split(',')
.Where(n => !String.IsNullOrWhiteSpace(n)) // filter out empty strings
.Select(n => int.Parse(n.Trim()))
.ToList();
```
Upvotes: 2 <issue_comment>username_2: Use the String.Split overload that allows you to pass the StringSplitOptions.RemoveEmptyEntries option, e.g.:
```
var blackList = ConfigurationManager.AppSettings["blackListedIDs"]
.Split(new[]{','},StringSplitOptions.RemoveEmptyEntries)
.Select(n => int.Parse(n))
.ToList();
```
If the input string is empty or whitespace, `Split` won't return anything. This will also handle leading or trailing commas.
You don't need to trim the string either, `int.Parse()` can handle leading or trailing whitespace.
The following code :
```
var items = ",1 , 2 ,".Split(new[]{','},StringSplitOptions.RemoveEmptyEntries)
.Select(n => int.Parse(n))
.ToArray();
```
Will return `1, 2` even though there are spaces, a leading and a trailing separator
Upvotes: 2
|
2018/03/16
| 483
| 1,556
|
<issue_start>username_0: I tried to implement this simple layout using `ConstraintLayout`:
```
xml version="1.0" encoding="utf-8"?
```
This layout id then used in a `Dialog` like that:
```
val dialog = Dialog(_activity)
dialog.setContentView(R.layout.dialog)
dialog.show()
```
Here is what the preview renders:
[](https://i.stack.imgur.com/UIkhY.png)
[](https://i.stack.imgur.com/D4qtT.png)
And as you can see, the 16dp margin is not respected for the second `TextView`, whereas it seems to be OK for the first `TextView` with the same parameters.
Nor is it respected for the `View`, which seems to be glued to the right border of the parent.
Although the first `TextView` seems to take the right 16dp, we can see in the second preview that it is shifted a bit to the right.
When I execute it, it's confirmed that the content is offset to the right.
Why ?
[](https://i.stack.imgur.com/J1Xij.png)<issue_comment>username_1: You should use either a width= of `"0dp"` to force the view to anchor itself to its constraints, or use a width of `"wrap_content"` in combination with `app:layout_constrainedWidth="true"`, to let the view resize itself based on its content, while also respecting the constraints.
Upvotes: 2 <issue_comment>username_2: Try the following:
```
xml version="1.0" encoding="utf-8"?
```
Output:
[](https://i.stack.imgur.com/UoPGV.png)
Upvotes: 1 [selected_answer]
|
2018/03/16
| 1,327
| 4,851
|
<issue_start>username_0: I'm using SQL Server 2008 to create an XML file based on a given structure. The query I'm using right now is below:
```
select 'ABC123' as SourceTradingPartner,
'ABC06EMP' as DestinationTradingPartner,
right(' ' + 'E_' + cast(WorkOrderHeader.EmpNumber as varchar), 20) as WorkOrder,
WorkOrderHeader.Name as WorkOrderDescription,
'OPR' as ResponsiblePersonID,
'MEXICO' as DivisionID,
'A' as SynchIndicatorID
, (
select '0001' as WorkOrderLineReference,
'CONSIGN' as Item,
'2018-03-09' as RequiredManufacturingStartDate,
'2018-03-09' as OpenDate,
'2018-03-09' as DueDate,
'1' as OrderQty,
'W' as RoutingUsedFlag,
'I' as BillOfMaterialUsedFlag,
'B' as ScheduleID,
'3' as QAStatusID,
'CNSG' as AccountingGroupCode,
'R' as StateCode,
'134800' as DueTime,
'144846' as OpenTime,
'134800' as RequiredManufacturingStartTime,
'2' as InventoryStatus,
'A' as SynchIndicatorID,
'Y' as SubstitutePriorityMethod
, (
select 'A' as SynchIndicatorID,
'10' as Increment
,(
select 'A' as SynchIndicatorID,
' 10' as Operation,
'CONSIGN' as [Function],
'I' AS OperationType,
rtrim(isnull(WorkOrderRouting.WorkCenter, '')) as WorkCenter
, (
select 'A' as SynchIndicatorID,
right(' ' + cast(row_number() over (partition by WorkOrderRoutingTool.Employee
order by WorkOrderRoutingTool.id desc, WorkOrderRoutingTool.PartNumber)
as varchar), 4) as ToolSeq,
WorkOrderRoutingTool.PartNumber as ToolID,
cast(WorkOrderRoutingTool.Qty as varchar) as ToolQuantity
from ZCONSIGN WorkOrderRoutingTool with (nolock)
where WorkOrderRoutingTool.Employee = WorkOrderHeader.OID
for xml auto, elements, type
)
from Employees WorkOrderRouting with (nolock)
where WorkOrderRouting.OID = WorkOrderHeader.OID
for xml auto, elements, type
)
,(
select 'A' as SynchIndicatorID
from Employees WorkOrderRoutingAddendum with (nolock)
where WorkOrderRoutingAddendum.OID = WorkOrderHeader.OID
for xml auto, elements, type
)
from Employees WorkOrderRoutingHeader with (nolock)
where WorkOrderRoutingHeader.OID = WorkOrderHeader.OID
for xml auto, elements, type
)
from Employees WorkOrderLine with (nolock)
where WorkOrderLine.OID = WorkOrderHeader.OID
for xml auto, elements, type
)
,(
select 'A' as SynchIndicatorID,
'4' as PlanningStatus,
'2' as ManufacturingStatus
from Employees WorkOrderLineAddendum with (nolock)
where WorkOrderLineAddendum.OID = WorkOrderHeader.OID
for xml auto, elements, type
)
from Employees WorkOrderHeader with (nolock)
where WorkOrderHeader.EmpNumber = 10171
order by WorkOrderHeader.EmpNumber
for xml auto, elements, type
```
When the query is executed, it gives me the result shown below:
```
ABC999
ABC06EMP
E\_10171
<NAME>
OPR
USA
A
0001
CONSIGN
2018-03-09
2018-03-09
2018-03-09
1
W
I
B
3
CNSG
R
134800
144846
134800
2
A
Y
A
10
A
10
CONSIGNA
I
1642
A
1
HT9001-003
19.00
A
A
4
2
```
The result looks fine, and it's almost complete. But it's missing the 'header' and 'footer' information (what i call 'footer' is just the closing tag for the 'header'). The header I need to have looks like this:
```
xml version="1.0" encoding="utf-8"?
```
And the footer is just this:
```
```
I've tried to use the WITH XMLNAMESPACES approach, but I get the namespace declaration repeated in some of the child nodes (I think this is a known issue). I'm not sure how to solve this.
Can anyone help?
Thanks in advance.
|
2018/03/16
| 1,305
| 3,624
|
<issue_start>username_0: I read the instructions in [Customizing Location of Subplot Using GridSpec](https://matplotlib.org/2.1.1/tutorials/intermediate/gridspec.html), tried out the following code, and got the plot layout:
```
import matplotlib.gridspec as gridspec
gs = gridspec.GridSpec(3, 3)
ax1 = plt.subplot(gs[0, :])
ax2 = plt.subplot(gs[1, :-1])
ax3 = plt.subplot(gs[1:, -1])
ax4 = plt.subplot(gs[-1, 0])
ax5 = plt.subplot(gs[-1, -2])
```
[](https://i.stack.imgur.com/FvcN2.png)
I understand that `gridspec.GridSpec(3, 3)` will give a 3\*3 layout, but what do `gs[0, :]`, `gs[1, :-1]`, `gs[1:, -1]`, `gs[-1, 0]`, and `gs[-1, -2]` mean? I looked it up online but did not find a detailed explanation, and I also tried changing the indices but did not find a regular pattern. Could anyone give me some explanation or point me to a link about this?
`gs[1, :-1]` specifies *where* on the gridspace your subplot will be. For instance `ax2 = plt.subplot(gs[1, :-1])` says: put the axis called `ax2` on the **first row** (denoted by `[1,...`) (remember that in python, there is zero indexing, so this essentially means "second row down from the top"), stretching **from the 0th column *up until* the last column** (denoted by `...,:-1]`). Because our gridspace is 3 columns wide, this means it will stretch 2 columns.
Perhaps it's better to show this by annotating each axis in your example:
```
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
gs = gridspec.GridSpec(3, 3)
ax1 = plt.subplot(gs[0, :])
ax2 = plt.subplot(gs[1, :-1])
ax3 = plt.subplot(gs[1:, -1])
ax4 = plt.subplot(gs[-1, 0])
ax5 = plt.subplot(gs[-1, -2])
ax1.annotate('ax1, gs[0,:] \ni.e. row 0, all columns',xy=(0.5,0.5),color='blue', ha='center')
ax2.annotate('ax2, gs[1, :-1]\ni.e. row 1, all columns except last', xy=(0.5,0.5),color='red', ha='center')
ax3.annotate('ax3, gs[1:, -1]\ni.e. row 1 until last row,\n last column', xy=(0.5,0.5),color='green', ha='center')
ax4.annotate('ax4, gs[-1, 0]\ni.e. last row, \n0th column', xy=(0.5,0.5),color='purple', ha='center')
ax5.annotate('ax5, gs[-1, -2]\ni.e. last row, \n2nd to last column', xy=(0.5,0.5), ha='center')
plt.show()
```
[](https://i.stack.imgur.com/xXcKO.png)
Upvotes: 4 [selected_answer]<issue_comment>username_2: * In a case where multiple rows of a varying number of evenly spaced subplots are required, or the opposite, the number of columns (or rows) may need to be a multiple of the number of subplots.
* For example, given 3 rows of 2, 3, and 4 evenly spaced subplots, use `ncols=12`, because 12 is the lowest common multiple for 2, 3, and 4.
```py
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
fig = plt.figure(figsize=(10, 8), tight_layout=True)
gs = GridSpec(nrows=3, ncols=12)
ax0 = plt.subplot(gs[0, :6])
ax1 = plt.subplot(gs[0, 6:])
ax2 = plt.subplot(gs[1, :4])
ax3 = plt.subplot(gs[1, 4:8])
ax4 = plt.subplot(gs[1, 8:])
ax5 = plt.subplot(gs[2, :3])
ax6 = plt.subplot(gs[2, 3:6])
ax7 = plt.subplot(gs[2, 6:9])
ax8 = plt.subplot(gs[2, 9:])
```
[](https://i.stack.imgur.com/JuVF1.png)
Upvotes: 0
|
2018/03/16
| 1,239
| 4,759
|
<issue_start>username_0: The current admin widget for ArrayField is one field, with comma as delimiter, like this (text list):
[](https://i.stack.imgur.com/gSWkC.png)
This isn't ideal because I would have longer texts (even 20 words) that contain commas. I could [change the delimiter to be something else](https://stackoverflow.com/questions/45133593/django-admin-model-arrayfield-change-delimiter), but that still doesn't help with unreadable content in admin.
What I would like is having a list of fields, that I can alter in admin. Something similar to the following image
[](https://i.stack.imgur.com/bR47G.png)
I could use another table to solve this, but I wonder if it's possible to solve it this way.<issue_comment>username_1: Try to take a look in this one :
[Better ArrayField admin widget?](https://stackoverflow.com/questions/31426010/better-arrayfield-admin-widget)
I think is more about a js thing after you have rendered the Array in a different way.
Upvotes: 2 <issue_comment>username_2: Unfortunately Django does not ship with a convenient widget for `ArrayField`s yet. I'd suggest you to create your own. Here is an example for **Django>=1.11**:
```
class DynamicArrayWidget(forms.TextInput):
template_name = 'myapp/forms/widgets/dynamic_array.html'
def get_context(self, name, value, attrs):
value = value or ['']
context = super().get_context(name, value, attrs)
final_attrs = context['widget']['attrs']
id_ = context['widget']['attrs'].get('id')
subwidgets = []
for index, item in enumerate(context['widget']['value']):
widget_attrs = final_attrs.copy()
if id_:
widget_attrs['id'] = '%s_%s' % (id_, index)
widget = forms.TextInput()
widget.is_required = self.is_required
subwidgets.append(widget.get_context(name, item, widget_attrs)['widget'])
context['widget']['subwidgets'] = subwidgets
return context
def value_from_datadict(self, data, files, name):
try:
getter = data.getlist
except AttributeError:
getter = data.get
return getter(name)
def format_value(self, value):
return value or []
```
Here is the widget template:
```
{% spaceless %}
<ul class="dynamic-array-widget">
    {% for widget in widget.subwidgets %}
    <li class="array-item">{% include widget.template_name %}</li>
    {% endfor %}
    <li><button type="button" class="add-array-item">Add another</button></li>
</ul>
{% endspaceless %}
```
A few javascript (using jQuery for convenience):
```
$('.dynamic-array-widget').each(function() {
$(this).find('.add-array-item').click((function($last) {
return function() {
var $new = $last.clone()
var id_parts = $new.find('input').attr('id').split('_');
var id = id_parts.slice(0, -1).join('_') + '_' + String(parseInt(id_parts.slice(-1)[0]) + 1)
$new.find('input').attr('id', id);
$new.find('input').prop('value', '');
$new.insertAfter($last);
};
})($(this).find('.array-item').last()));
});
```
And you would also have to create your own form field:
```
from itertools import chain
from django import forms
from django.contrib.postgres.utils import prefix_validation_error
class DynamicArrayField(forms.Field):
default_error_messages = {
'item_invalid': 'Item %(nth)s in the array did not validate: ',
}
def __init__(self, base_field, **kwargs):
self.base_field = base_field
self.max_length = kwargs.pop('max_length', None)
kwargs.setdefault('widget', DynamicArrayWidget)
super().__init__(**kwargs)
def clean(self, value):
cleaned_data = []
errors = []
value = filter(None, value)
for index, item in enumerate(value):
try:
cleaned_data.append(self.base_field.clean(item))
except forms.ValidationError as error:
errors.append(prefix_validation_error(
error, self.error_messages['item_invalid'],
code='item_invalid', params={'nth': index},
))
if errors:
raise forms.ValidationError(list(chain.from_iterable(errors)))
if not cleaned_data and self.required:
raise forms.ValidationError(self.error_messages['required'])
return cleaned_data
```
Finally, set it explicitly on your forms:
```
class MyModelForm(forms.ModelForm):
class Meta:
model = MyModel
fields = ['foo', 'bar', 'the_array_field']
field_classes = {
'the_array_field': DynamicArrayField,
}
```
Upvotes: 5 [selected_answer]
|
2018/03/16
| 720
| 2,304
|
<issue_start>username_0: I have an existing table with an NVARCHAR(8000) field. I also have a procedure that builds an audit table by examining `syscolumns` (among others) to replicate source table schema.
If I run the following statement against the existing source table:
```
ALTER TABLE dbo.MyTable ALTER COLUMN MyColumn NVARCHAR(4000);
```
and then run a query against syscolumns:
```
SELECT b.name, c.name as TypeName, b.length, b.isnullable, b.collation, b.xprec, b.xscale
FROM sysobjects a
INNER JOIN syscolumns b on a.id = b.id
INNER JOIN systypes c on b.xtype = c.xtype and c.name <> 'sysname'
WHERE a.id = OBJECT_ID(N'[dbo].[CalendarEvents]')
AND OBJECTPROPERTY(a.id, N'IsUserTable') = 1
ORDER BY b.colId
```
... the syscolumn length still reports 8000 instead of the new 4000.
Is there a way I can force-refresh this? Presumably there's some internal maintenance that updates these periodically, but I'm not sure what the rules are and it's interfering with my ability to build the audit objects directly after altering the column length in the source table.<issue_comment>username_1: >
> **max_length** smallint Maximum length (**in bytes**) of the column.
>
>
> -1 = Column data type is varchar(max), nvarchar(max), varbinary(max), or xml.
>
>
> For text columns, the max_length value will be 16 or the value set by
> sp_tableoption 'text in row'.
>
>
>
<https://learn.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-columns-transact-sql>
And for NVARCHAR:
>
> The actual storage size, in bytes, is **two times the number of characters** ...
>
>
>
<https://learn.microsoft.com/en-us/sql/t-sql/data-types/nchar-and-nvarchar-transact-sql>
This means that the reported max_length for any NVARCHAR and NCHAR column will be 2 times the allowed character length (except for NVARCHAR(MAX), in which case you'll see -1).
BTW, `syscolumns` is deprecated, use `sys.columns` instead.
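For reference, a sketch of the equivalent query against `sys.columns`; note that `max_length` is reported in bytes, so for `nchar`/`nvarchar` columns the character count is `max_length / 2`:
```
SELECT c.name,
       t.name AS type_name,
       c.max_length,  -- bytes; -1 for MAX types
       CASE WHEN t.name IN (N'nchar', N'nvarchar') AND c.max_length > 0
            THEN c.max_length / 2
            ELSE c.max_length
       END AS length_in_chars
FROM sys.columns c
JOIN sys.types t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID(N'dbo.MyTable')
ORDER BY c.column_id;
```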
Upvotes: 1 <issue_comment>username_2: The cause is that `NVARCHAR` stores 2 bytes per char, and 2 * 4000 = 8000.
Also `NVARCHAR(8000)` isn't even possible; you can use either a length value up to 4000, or the special keyword `MAX`, which raises the limit to 2GB but stores values separately from the table if needed.
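So after the `ALTER` in the question, seeing 8000 is expected (4000 characters at 2 bytes each). If more than 4000 characters are actually needed, `MAX` is the only option:
```
-- up to 4000 characters, stored as up to 8000 bytes
ALTER TABLE dbo.MyTable ALTER COLUMN MyColumn NVARCHAR(4000);
-- beyond 4000 characters only MAX is valid (reported as -1 in sys.columns)
ALTER TABLE dbo.MyTable ALTER COLUMN MyColumn NVARCHAR(MAX);
```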
Upvotes: 3 [selected_answer]
|
2018/03/16
| 720
| 2,925
|
<issue_start>username_0: Searching for a proper technology selection, preferably from MS Azure PaaS (so-called "serverless"), since this ***needs*** to run in Azure.
The problem / conditions:
Running a set of N completely independent tasks, with at most M tasks progressing simultaneously.
1. Each task's start can be triggered asynchronously (basically it
is a start of SSIS package), so I don't need to have a blocking
wait.
2. Limit the number of concurrently progressing tasks (stated above already)
3. I can't subscribe to automatic notification that a task is complete, I can only explicitly query that info outside (from SSISDB - so actually can query-out statuses for all running tasks via a single query)
4. There are some additional requirements like task retries upon failure etc.
Considering relevant parts of this solution can be implemented in .NET, the idea is not writing the whole system from scratch (even though it could be the easiest), but use some Azure Cloud capabilities.
So far I've looked into Azure Queues / Service Bus, Functions, and Azure Batch. But, for example, I don't see very good applicability of Batch here, since my tasks are async and they consume computational resources inside SQL Server (SSIS) rather than in Batch itself. But probably I'm just mistaken and this can still be a good usage scenario for Azure Batch. Could you please advise something?
Afterall it might not necessarily be an Azure solution, solved via some proper .NET technology / framework, and deployed to Azure as a durable function (or some other serverless approach), but this is less desirable.<issue_comment>username_1: I would build an SSIS "master package" that calls your SSIS sub-packages. That can meet all of your requirements:
1. On the Control Flow, create an Execute Package Task for each SSIS sub-package. Leave them unconnected by Precedence Constraints and they will start asynchronously.
2. For the master package, set the property MaxConcurrentExecutables: <https://msdn.microsoft.com/en-us/library/microsoft.sqlserver.dts.runtime.package.maxconcurrentexecutables.aspx>
3. Query SSISDB SQL tables for progress, e.g. <https://github.com/yorek/ssis-queries> (a minimal query sketch follows this list)
4. In the master package, place each Execute Package Task inside a For Loop Container e.g. <http://microsoft-ssis.blogspot.com.au/2014/06/retry-task-on-failure.html>
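Expanding on step 3, a minimal sketch of such a progress query against SSISDB's catalog views (status codes: 2 = running, 4 = failed, 7 = succeeded):
```
SELECT e.execution_id,
       e.package_name,
       e.status  -- 2 = running, 4 = failed, 7 = succeeded
FROM SSISDB.catalog.executions AS e
WHERE e.status = 2;  -- everything currently in progress
```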
Upvotes: 1 <issue_comment>username_2: Not sure if it fits perfectly, but you could look into using Service Fabric. You could have your Jobs/Tasks run as Service Fabric Actors which can be managed by a central service.
This would involve a bit more custom code than you probably would like, but you can implement very complex scenarios without having to handle too much of the infrastructure.
I posted a similar solution a while back:
[Service fabric task queue with completion task](https://stackoverflow.com/questions/46386243/service-fabric-task-queue-with-completion-task/46402385#46402385)
Upvotes: 0
|
2018/03/16
| 549
| 2,000
|
<issue_start>username_0: How do I integrate FormFlow and QnA dialogs in a simple bot? I'm not able to call the FormFlow context once QnA is completed. If there are any samples for the same, then please share.<issue_comment>username_1: One approach is to start off from a LUIS template.
Then make a specific intent to start the form.
Then you can have an empty LUIS intent of ”” and even ”None” and put your QnA there.
That way the QnA will run in the background; LUIS will give you great flexibility to trigger a specific dialog with intents.
Here is an example
<http://www.garypretty.co.uk/2017/03/26/forwarding-activities-messages-to-other-dialogs-in-microsoft-bot-framework/>
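A minimal sketch of that layout using a Bot Framework v3 `LuisDialog`; the intent name `StartForm` and the `QnADialog`/`SampleForm` types are assumptions, not part of the template:
```
[Serializable]
public class RootLuisDialog : LuisDialog<object>
{
    // fires when LUIS recognizes the intent that should start the form
    [LuisIntent("StartForm")]
    public async Task StartForm(IDialogContext context, LuisResult result)
    {
        var form = new FormDialog<SampleForm>(
            new SampleForm(), SampleForm.BuildForm, FormOptions.PromptInStart);
        context.Call(form, AfterForm);
    }

    // everything LUIS cannot classify falls through to QnA
    [LuisIntent("")]
    [LuisIntent("None")]
    public async Task None(IDialogContext context, IAwaitable<IMessageActivity> activity, LuisResult result)
    {
        var message = await activity;
        await context.Forward(new QnADialog(), AfterQnA, message, CancellationToken.None);
    }

    private async Task AfterForm(IDialogContext context, IAwaitable<SampleForm> result)
    {
        context.Done(true);
    }

    private async Task AfterQnA(IDialogContext context, IAwaitable<object> result)
    {
        context.Wait(MessageReceived);
    }
}
```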
Upvotes: -1 <issue_comment>username_2: If you want to use QnA and FormFlow, create a dialog QnADialog and you can send all your messages first to the root dialog from there you can call your QnA Dialog like
```
var qnadialog = new QnADialog();
var messageToForward = await message;
await context.Forward(qnadialog, ResumeAfterQnA, messageToForward, CancellationToken.None);
```
Once the QnADialog is executed, it will call ResumeAfterQnA and there you can call your FormFlow dialog.
```
private async Task ResumeAfterQnA(IDialogContext context, IAwaitable results)
{
SampleForm form = new SampleForm();
var sampleForm = new FormDialog<SampleForm>(form, SampleForm.BuildForm, FormOptions.PromptInStart);
context.Call(sampleForm, RootDialog.SampleFormSubmitted);
}
```
You need to have a SampleFormSubmitted method that will be called after your form is submitted.
```
private async Task SampleFormSubmitted(IDialogContext context, IAwaitable<SampleForm> result)
{
try
{
var query = await result;
context.Done(true);
}
catch (FormCanceledException e)
{
string reply;
if (e.InnerException == null)
{
reply = $"You quit. Maybe you can fill some other time.";
}
else
{
reply = $"Something went wrong. Please try again.";
}
context.Done(true);
await context.PostAsync(reply);
}
}
```
Upvotes: 2
|
2018/03/16
| 791
| 2,954
|
<issue_start>username_0: I'm looking to change a fancybox item's data attribute with this line of code. I'm using fancyBox 3. For some reason it won't let me change the data attribute. I've checked the selector which responds on i.e. `hide()`.
```
$("#line-up .interactive .btns .btn").click(function(e){
// If has not class active
if (!$(this).hasClass("active"))
{
// Filter
e.preventDefault();
$target = $(this).data("id");
$("#line-up .interactive .btns .btn").removeClass("active");
$(this).addClass("active");
// Select the items and do action
$("#line-up .artists .item").removeClass("active");
$("#line-up .artists .item[data-id=" + $target + "]").addClass("active");
// Change fancybox data
$("#line-up .artists .item.active .artist").data("fancybox", "single");
} else {
// Reset
$(this).removeClass("active");
$("#line-up .artists .item").addClass("active");
// Change fancybox data
$("#line-up .artists .item .artist").data("fancybox", "gallery");
}
})
```
Would be awesome if someone could help<issue_comment>username_1: One approach is to start of from a Luis template.
Then make a specific Intent to start the Form.
Then you can have an empty Luis Intent of ”” and even ”None” and you put your QnA there.
That way the Qna will be on the background LUIS will give you great flexibility to trigger a specific dialogue with intents
Here is an example
<http://www.garypretty.co.uk/2017/03/26/forwarding-activities-messages-to-other-dialogs-in-microsoft-bot-framework/>
Upvotes: -1 <issue_comment>username_2: If you want to use QnA and FormFlow, create a dialog QnADialog and you can send all your messages first to the root dialog from there you can call your QnA Dialog like
```
var qnadialog = new QnADialog();
var messageToForward = await message;
await context.Forward(qnadialog, ResumeAfterQnA, messageToForward, CancellationToken.None);
```
Once th QnADilalog is executed, it will call the ResumeAfterQnA and there you can call your FormFlow Dialog.
```
private async Task ResumeAfterQnA(IDialogContext context, IAwaitable results)
{
SampleForm form = new SampleForm();
var sampleForm = new FormDialog(form, SampleForm.BuildForm, FormOptions.PromptInStart);
context.Call(sampleForm, RootDialog.SampleFormSubmitted);
}
```
You need to have a SampleFormSubmitted method that will be called after you form is submitted.
```
private async Task SampleFormSubmitted(IDialogContext context, IAwaitable result)
{
try
{
var query = await result;
context.Done(true);
}
catch (FormCanceledException e)
{
string reply;
if (e.InnerException == null)
{
reply = $"You quit. Maybe you can fill some other time.";
}
else
{
reply = $"Something went wrong. Please try again.";
}
context.Done(true);
await context.PostAsync(reply);
}
}
```
Upvotes: 2
|
2018/03/16
| 1,755
| 7,193
|
<issue_start>username_0: How do I execute integration tests of a Spring Boot application that uses Hazelcast? When I run all the tests together I get hazelcast.core.DuplicateInstanceNameException.
I use Spring Boot **2.0.0.RC1** and Hazelcast **3.9.2**
Use java configuration for hazelcast:
```
@Bean
public Config getHazelCastServerConfig() {
final Config config = new Config();
config.setInstanceName(hzInstance);
config.getGroupConfig().setName(hzGroupName).setPassword(hzGroupPassword);
final ManagementCenterConfig managementCenterConfig = config.getManagementCenterConfig();
managementCenterConfig.setEnabled(true);
managementCenterConfig.setUrl(mancenter);
final MapConfig mapConfig = new MapConfig();
mapConfig.setName(mapName);
mapConfig.setEvictionPolicy(EvictionPolicy.NONE);
mapConfig.setTimeToLiveSeconds(0);
mapConfig.setMaxIdleSeconds(0);
config.getScheduledExecutorConfig(scheduler)
.setPoolSize(16)
.setCapacity(100)
.setDurability(1);
final NetworkConfig networkConfig = config.getNetworkConfig();
networkConfig.setPort(5701);
networkConfig.setPortAutoIncrement(true).setPortCount(30);
final JoinConfig joinConfig = networkConfig.getJoin();
joinConfig.getMulticastConfig().setEnabled(false);
joinConfig.getAwsConfig().setEnabled(false);
final TcpIpConfig tcpIpConfig = joinConfig.getTcpIpConfig();
tcpIpConfig.addMember(memberOne)
.addMember(memberTwo);
tcpIpConfig.setEnabled(true);
return config;
}
@Bean
public HazelcastInstance getHazelCastServerInstance() {
final HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance(getHazelCastServerConfig());
hazelcastInstance.getClientService().addClientListener(new ClientListener() {
@Override
public void clientConnected(Client client) {
System.out.println(String.format("Connected %s %s %s", client.getClientType(), client.getUuid(), client.getSocketAddress()));
log.info(String.format("Connected %s %s %s", client.getClientType(), client.getUuid(), client.getSocketAddress()));
}
@Override
public void clientDisconnected(Client client) {
System.out.println(String.format("Disconnected %s %s %s", client.getClientType(), client.getUuid(), client.getSocketAddress()));
log.info(String.format("Disconnected %s %s %s", client.getClientType(), client.getUuid(), client.getSocketAddress()));
}
});
return hazelcastInstance;
}
```
I have a simple test:
```
@RunWith(SpringRunner.class)
@SpringBootTest(classes = UpaSdcApplication.class)
@ActiveProfiles("test")
public class CheckEndpoints {
@Autowired
private ApplicationContext context;
private static final String HEALTH_ENDPOINT = "/actuator/health";
private static WebTestClient testClient;
@Before
public void init() {
testClient = org.springframework.test.web.reactive.server.WebTestClient
.bindToApplicationContext(context)
.configureClient()
.filter(basicAuthentication())
.build();
}
@Test
public void testHealth(){
testClient.get().uri(HEALTH_ENDPOINT).accept(MediaType.APPLICATION_JSON)
.exchange()
.expectStatus()
.isOk()
.expectBody()
.json("{\"status\": \"UP\"}");
}
}
```
If I run this test class separately from the other tests, it executes fine and passes.
If I run it together with the other tests, I get this exception:
```
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.hazelcast.core.HazelcastInstance]: Factory method 'getHazelCastServerInstance' threw exception; nested exception is com.hazelcast.core.DuplicateInstanceNameException: HazelcastInstance with name 'counter-instance' already exists!
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:579)
... 91 more
Caused by: com.hazelcast.core.DuplicateInstanceNameException: HazelcastInstance with name 'counter-instance' already exists!
at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:170)
at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:124)
at com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:58)
at net.kyivstar.upa.sdc.config.HazelcastConfiguration.getHazelCastServerInstance(HazelcastConfiguration.java:84)
at net.kyivstar.upa.sdc.config.HazelcastConfiguration$$EnhancerBySpringCGLIB$$c7da65f3.CGLIB$getHazelCastServerInstance$0()
at net.kyivstar.upa.sdc.config.HazelcastConfiguration$$EnhancerBySpringCGLIB$$c7da65f3$$FastClassBySpringCGLIB$$b920d5ef.invoke()
```
How do you solve this problem? How do you run integration tests?<issue_comment>username_1: `instanceName` configuration element is used to create a named Hazelcast member and should be unique for each Hazelcast instance in a JVM. In your case, either you should set a different instance name for each `HazelcastInstance` bean creation in the same JVM, or you can totally remove `instanceName` configuration if you don't recall instances by using instance name.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Had the same issue while running my tests. In my case the reason was that the Spring test framework was trying to launch a new context while keeping the old one cached - thus trying to create another Hazelcast instance with the same name while one already existed in the cached context.
>
> Once the TestContext framework loads an ApplicationContext (or
> WebApplicationContext) for a test, that context is cached and reused
> for all subsequent tests that declare the same unique context
> configuration within the same test suite.
>
>
>
Read [here](https://docs.spring.io/spring/docs/current/spring-framework-reference/testing.html#testcontext-ctx-management-caching) to understand more about how spring test framework manages test context.
I am working on the solution at the moment, will post it later. One possible solution I can see is dropping the test context after each test with `@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)`, but this is very expensive in terms of performance.
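For completeness, that workaround looks like this on the test class from the question (expensive, because the context is rebuilt for the next test class):
```
@RunWith(SpringRunner.class)
@SpringBootTest(classes = UpaSdcApplication.class)
@ActiveProfiles("test")
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
public class CheckEndpoints {
    // ... tests as before; the cached context is discarded after this class
}
```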
Upvotes: 0 <issue_comment>username_3: I had the same problem and I solved it by checking whether the instance already exists:
```
@Bean
public CacheManager cacheManager() {
HazelcastInstance existingInstance = Hazelcast.getHazelcastInstanceByName(CACHE_INSTANCE_NAME);
HazelcastInstance hazelcastInstance = null != existingInstance
? existingInstance
: Hazelcast.newHazelcastInstance(hazelCastConfig());
return new HazelcastCacheManager(hazelcastInstance);
}
```
You can see the rest of the code [here](https://github.com/username_3/Spring5Microservices/blob/master/pizza-service/src/main/java/com/pizza/configuration/cache/CacheConfiguration.java).
Upvotes: 2
|
2018/03/16
| 1,683
| 5,995
|
<issue_start>username_0: We use a 3rd party COM object, one of which methods under certain conditions returns a `VARIANT` of `VT_PTR` type. That upsets the default .NET marshaler, which throws the following error:
>
> Managed Debugging Assistant 'InvalidVariant' : 'An invalid VARIANT was
> detected during a conversion from an unmanaged VARIANT to a managed
> object. Passing invalid VARIANTs to the CLR can cause unexpected
> exceptions, corruption or data loss.
>
>
>
Method signatures:
```
// (Unmanaged) IDL:
HRESULT getAttribute([in] BSTR strAttributeName, [retval, out] VARIANT* AttributeValue);
// C#:
[return: MarshalAs(UnmanagedType.Struct)]
object getAttribute([In, MarshalAs(UnmanagedType.BStr)] string strAttributeName);
```
**Is there an elegant way to bypass such marshaler's behavior and obtain the underlying unmanaged pointer on the managed side?**
What I've considered/tried so far:
* A custom marshaler:
```
[return: MarshalAs(UnmanagedType.CustomMarshaler,
MarshalTypeRef = typeof(IntPtrMarshaler))]
object getAttribute([In, MarshalAs(UnmanagedType.BStr)] string strAttributeName);
```
I did implement `IntPtrMarshaler`, just to find the interop layer **crashing the process even before any of my `ICustomMarshaler` methods gets called**. Perhaps, the `VARIANT*` argument type is not compatible with custom marshalers.
* Rewrite (or clone) the C# interface definition with `getAttribute` method redefined (like below) and do all the marshaling for output `VARIANT` manually:
```
void getAttribute(
[In, MarshalAs(UnmanagedType.BStr)],
string strAttributeName,
IntPtr result);
```
This doesn't seem nice (the interface itself has 30+ other methods). It'd also break existing, unrelated pieces of code which already make use of `getAttribute` without issues.
* Obtain an unmanaged method address of `getAttribute` from vtable (using `Marshal.GetComSlotForMethodInfo` etc), then do the manual invocation and marshaling against my own custom delegate type (using `Marshal.GetDelegateForFunctionPointer` etc).
So far, I've taken this approach and it seem to work fine, but it feels as such an overkill for what should be a simple thing.
Am I missing some other feasible interop options for this scenario? Or, maybe there is a way to make `CustomMarshaler` work here?<issue_comment>username_1: What I would do is define a simple [VARIANT](https://msdn.microsoft.com/en-us/library/e305240e-9e11-4006-98cc-26f4932d2118(VS.85)) structure like this:
```
[StructLayout(LayoutKind.Sequential)]
public struct VARIANT
{
public ushort vt;
public ushort r0;
public ushort r1;
public ushort r2;
public IntPtr ptr0;
public IntPtr ptr1;
}
```
And the interface like this;
```
[Guid("39c16a44-d28a-4153-a2f9-08d70daa0e22"), InterfaceType(ComInterfaceType.InterfaceIsDual)]
public interface MyInterface
{
VARIANT getAttributeAsVARIANT([MarshalAs(UnmanagedType.BStr)] string strAttributeName);
}
```
Then, add an extension method somewhere in a static class like this, so the caller can have the same coding experience using MyInterface:
```
public static object getAttribute(this MyInterface o, string strAttributeName)
{
return VariantSanitize(o.getAttributeAsVARIANT(strAttributeName));
}
private static object VariantSanitize(VARIANT variant)
{
const int VT_PTR = 26;
const int VT_I8 = 20;
if (variant.vt == VT_PTR)
{
variant.vt = VT_I8;
}
var ptr = Marshal.AllocCoTaskMem(Marshal.SizeOf<VARIANT>());
try
{
Marshal.StructureToPtr(variant, ptr, false);
return Marshal.GetObjectForNativeVariant(ptr);
}
finally
{
Marshal.FreeCoTaskMem(ptr);
}
}
```
This will do nothing for normal variants, but will just patch it for VT\_PTR cases.
Note this only works if the caller and the callee are in the same COM apartment.
If they are not, you will get the DISP\_E\_BADVARTYPE error back because marshaling must be done, and by default, it will be done by the COM universal marshaler (OLEAUT) which only support [Automation compatible data types](https://msdn.microsoft.com/en-us/library/cc237562.aspx) (just like .NET).
In this case, theoretically, you could replace this marshaler with another one (at the COM level, not at the .NET level), but that would mean adding some code on the C++ side and possibly in the registry (proxy/stub, IMarshal, etc.).
Upvotes: 3 [selected_answer]<issue_comment>username_2: For my own future reference, here's how I ended up doing it, using the 3rd option mentioned in the question:
```
[ComImport, Guid("75A67021-058A-4E2A-8686-52181AAF600A"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public interface IInterface
{
[return: MarshalAs(UnmanagedType.Struct)]
object getAttribute([In, MarshalAs(UnmanagedType.BStr)] string strAttributeName);
}
private delegate int IInterface_getAttribute(
IntPtr pInterface,
[MarshalAs(UnmanagedType.BStr)] string name,
IntPtr result);
public static object getAttribute(this IInterface obj, string name)
{
var ifaceType = typeof(IInterface);
var ifaceMethodInfo = ((Func<string, object>)obj.getAttribute).Method;
var slot = Marshal.GetComSlotForMethodInfo(ifaceMethodInfo);
var ifacePtr = Marshal.GetComInterfaceForObject(obj, ifaceType);
try
{
var vtablePtr = Marshal.ReadIntPtr(ifacePtr);
var methodPtr = Marshal.ReadIntPtr(vtablePtr, IntPtr.Size * slot);
var methodWrapper = Marshal.GetDelegateForFunctionPointer<IInterface_getAttribute>(methodPtr);
var resultVar = new VariantClass();
var resultHandle = GCHandle.Alloc(resultVar, GCHandleType.Pinned);
try
{
var pResultVar = resultHandle.AddrOfPinnedObject();
VariantInit(pResultVar);
var hr = methodWrapper(ifacePtr, name, pResultVar);
if (hr < 0)
{
Marshal.ThrowExceptionForHR(hr);
}
if (resultVar.vt == VT_PTR)
{
return resultVar.ptr;
}
try
{
return Marshal.GetObjectForNativeVariant(pResultVar);
}
finally
{
VariantClear(pResultVar);
}
}
finally
{
resultHandle.Free();
}
}
finally
{
Marshal.Release(ifacePtr);
}
}
```
Upvotes: 1
|
2018/03/16
| 1,604
| 4,696
|
<issue_start>username_0: I have written a script to change the java environment variables of the shell:
```
#!/bin/bash
#env variables can be changed only if we call the script with source setJavaVersion.sh
case $1 in
6)
export JAVA_HOME=/atgl/product/java/jdk-1.6.0_43/linux-redhat_x86_64/jdk1.6.0_43/
export PATH=$JAVA_HOME:$PATH ;
;;
7)
export JAVA_HOME=/atgl/product/java/jdk-1.7.0_51/linux-redhat_x86_64/jdk1.7.0_51
export PATH=$JAVA_HOME:$PATH ;
;;
8)
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.91-0.b14.el7_2.x86_64/jre/
export PATH=$JAVA_HOME:$PATH ;
;;
*)
error java version can only be 1.6, 1.7 or 1.8
;;
esac
```
To execute it, I enter:
```
source setJavaVersion.sh 6
```
to set the environment with jdk6, `source setJavaVersion.sh 7` for jdk7, and so on.
when I look at the environment variables with:
```
$ echo $JAVA_HOME
```
and
```
$ echo $PATH
```
I see that the variables are well updated.
However, when I execute the command
```
java -version
```
it is not updated.
If I enter the same export commands directly in the shell, `java -version` returns the updated result.
Why ?
**Edit:**
I have updated my script with the deathangel908 answer.
Here is the output of which java and PATH before and after the script execution:
```
$ which java
/atgl/product/java/jdk-1.7.0_51/linux-redhat_x86_64/jdk1.7.0_51/bin/java
$ echo $PATH
/CPD/SDNT/tools/bin:/CPD/SDNT/tools/x86_64-pc-unix11.0/bin:/atgl/product/java/jdk-1.7.0_51/linux-redhat_x86_64/jdk1.7.0_51/bin:/CPD/SDNT/tools/bin:/CPD/SDNT/tools/x86_64-pc-unix11.0/bin:/atgl/product/java/jdk-1.7.0_51/linux-redhat_x86_64/jdk1.7.0_51/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/users/t0059888/bin:/users/t0059888/bin
$ source setJavaVersion 6
$ which java
/atgl/product/java/jdk-1.7.0_51/linux-redhat_x86_64/jdk1.7.0_51/bin/java
$ echo $PATH
/CPD/SDNT/tools/bin:/CPD/SDNT/tools/x86_64-pc-unix11.0/bin:/atgl/product/java/jdk-1.7.0_51/linux-redhat_x86_64/jdk1.7.0_51/bin:/CPD/SDNT/tools/bin:/CPD/SDNT/tools/x86_64-pc-unix11.0/bin:/atgl/product/java/jdk-1.7.0_51/linux-redhat_x86_64/jdk1.7.0_51/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/users/t0059888/bin:/users/t0059888/bin:/atgl/product/java/jdk-1.6.0_43/linux-redhat_x86_64/jdk1.6.0_43/
```<issue_comment>username_1: You're appending to the path every time; you need to remove the old entry before adding it again:
```
$ export a="$a:3"
$ echo $a # :3
$ export a="$a:3"
$ echo $a # :3:3
```
When you execute `java`, bash looks it up in the `PATH` variable, finds the first occurrence, and executes it.
You can use `which java` to check the real path of `java` command that's executed.
So to solve your issue just remember the path w/o java:
```
if [ -z ${PATH_SAVE+x} ]; then
export PATH_SAVE="$PATH"
fi
export PATH="$PATH_SAVE:$JAVA_HOME"
```
Remember to quote variables, in case they contain special symbols or spaces.
Also you can debug your script by running `echo $PATH`
Upvotes: 2 <issue_comment>username_2: Based on the output you added in your Edit, the new `PATH` was added at the end. Since Java 7 is at the beginning of your `PATH` that one is used when you run `which java`.
When you execute a command, the first occurrence found on the `PATH` is used, so try adding the new entry at the beginning of the variable (as you did in your original script, without the changes proposed in the other answer). What the other answer suggested is a good idea - you should not keep appending the same paths over and over again - but if you add the Java path at the end of your `PATH` variable, make sure no other `java` is found on an earlier path.
For what I can see in your original script, it should be working fine.
Try adding `set -x` at the beginning of your original script, and look at the output. It would be helpful if you could share that output as well.
Finally, make sure the binaries in Java 6 have the right file permissions (make sure java is executable).
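A quick way to verify all of the above after sourcing the script (assuming the jdk paths from the question):
```
$ source setJavaVersion.sh 6
$ which java    # should resolve inside the jdk1.6.0_43 directory
$ echo "$PATH"  # the new entry must come before any other java
$ java -version # uses the first java found on $PATH, left to right
```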
Upvotes: 1 <issue_comment>username_3: My error was that the java executable was not accessible in the path. It is located in the bin folder. What I did before was wrong:
```
export PATH="$JAVA_HOME:$PATH"
```
This is the solution:
```
export PATH="$JAVA_HOME/bin:$PATH"
```
Upvotes: 1 [selected_answer]
|
2018/03/16
| 1,234
| 4,937
|
<issue_start>username_0: I am trying to get a stream of updates for certain tables from my PostgreSQL database. The regular way of getting all updates looks like this:
You create a logical replication slot
```
pg_create_logical_replication_slot('my_slot', 'wal2json');
```
And either connect to it using `pg_recvlogical` or making special SQL queries. This allows you to get **all** the actions from the database in json (if you used [wal2json](https://github.com/eulerto/wal2json) plugin or similar) and then do whatever you want with that data.
But in PostgreSQL 10 we have Publication/Subscription mechanism which allows us to replicate selected tables only. This is very handy because a lot of useless data is not being sent. The process looks like this:
First, you create a publication
```
CREATE PUBLICATION foo FOR TABLE herp, derp;
```
Then you subscribe to that publication from another database
```
CREATE SUBSCRIPTION mysub CONNECTION PUBLICATION foo;
```
This creates a replication slot on the master database under the hood and starts listening to updates, committing them to the same tables on a second database. This is fine if your job is to replicate some tables, but I want to get a raw stream for my own processing.
As I mentioned, the `CREATE SUBSCRIPTION` query is creating a replication slot on the master database under the hood, but how can I create one manually without the subscription and a second database? [Here](https://www.postgresql.org/docs/10/static/sql-createsubscription.html) the docs say:
>
> To make this work, create the replication slot separately (using the function pg\_create\_logical\_replication\_slot with the plugin name pgoutput)
>
>
>
According to the docs, this is possible, but `pg_create_logical_replication_slot` only creates a regular replication slot. Is the `pgoutput` plugin responsible for all the magic? If yes, then it becomes impossible to use other plugins like `wal2json` with publications.
What am I missing here?<issue_comment>username_1: After you have created the logical replication slot and the publication, you can create a subscription this way:
```
CREATE SUBSCRIPTION mysub
CONNECTION
PUBLICATION foo
WITH (slot_name=my_slot, create_slot=false);
```
Not sure if this answers your question.
Upvotes: 3 <issue_comment>username_2: I have limited experience with logical replication and logical decoding in Postgres, so please correct me if below is wrong. That being said, here is what I have found:
1. Publication support is provided by the `pgoutput` plugin. You use it via [plugin-specific options](https://github.com/postgres/postgres/blob/2b27273435392d1606f0ffc95d73a439a457f08e/src/backend/replication/pgoutput/pgoutput.c#L123). It may be that other plugins could add such support, but I do not know whether the logical decoding plugin interface exposes sufficient details. I tested it against `wal2json` plugin version `9e962ba` and it doesn't recognize this option.
2. Replication slots are created independently from publications. Publications to be used as a filter are specified when fetching changes stream. It is possible to peek changes for one publication, then peek changes for another publication and observe different set of changes despite using the same replication slot (I did not find it documented and I was testing on Aurora with Postgres compatibility, so behavior could potentially vary).
3. Plugin output seems to include all entries for begin and commit, even if transaction did not touch any of tables included in publication of interest. It does not however include changes to other tables than included in the publication.
Here is an example how to use it in Postgres 10+:
```
-- Create publication
CREATE PUBLICATION cdc;
-- Create slot
SELECT pg_create_logical_replication_slot('test_slot_v1', 'pgoutput');
-- Create example table
CREATE TABLE replication_test_v1
(
id integer NOT NULL PRIMARY KEY,
name text
);
-- Add table to publication
ALTER PUBLICATION cdc ADD TABLE replication_test_v1;
-- Insert example data
INSERT INTO replication_test_v1(id, name) VALUES
(1, 'Number 1')
;
-- Peak changes (does not consume changes)
SELECT pg_logical_slot_peek_binary_changes('test_slot_v1', NULL, NULL, 'publication_names', 'cdc', 'proto_version', '1');
-- Get changes (consumes changes)
SELECT pg_logical_slot_get_binary_changes('test_slot_v1', NULL, NULL, 'publication_names', 'cdc', 'proto_version', '1');
```
To stream changes out of Postgres to other systems, you can consider using [Debezium](https://debezium.io/docs/connectors/postgresql) project. It is an open source distributed platform for change data capture, which among others provides a PostgreSQL connector. In version 0.10 they added support for `pgoutput` plugin. Even if your use case is very different from what the project offers, you can look at their code to see how they interact with replication API.
Upvotes: 4
|
2018/03/16
| 682
| 2,044
|
<issue_start>username_0: I have a file (`input.txt`) with a structure similar to this:
```
abc 1
bcd a
cde 1
def 4
efg a
fgh 3
```
I want to remove duplicates in column 2, in order to have only unique strings in that column (independently of what is in column 1). But the line to keep should be **selected at random**. The output could *for example* be:
```
bcd a
cde 1
def 4
fgh 3
```
I tried to create a file listing the duplicates (using `awk '{print $2}' input.txt | sort | uniq -D | uniq`) but then I only managed to remove them all with `awk '!A[$2]++'` instead of randomly keeping one of the duplicates.<issue_comment>username_1: Pre-process the input to randomize it:
```
shuf input.txt | awk '!A[$2]++'
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: With GNU awk for true multi-dimensional arrays:
```
$ awk '{a[$2][++cnt[$2]]=$0} END{srand(); for (k in a) print a[k][int(rand()*cnt[k])+1]}' file
efg a
cde 1
fgh 3
def 4
```
With other awks:
```
$ awk '{keys[$2]; a[$2,++cnt[$2]]=$0} END{srand(); for (k in keys) print a[k,int(rand()*cnt[k])+1]}' file
bcd a
abc 1
fgh 3
def 4
```
Upvotes: 1 <issue_comment>username_3: With `perl`
```
$ perl -MList::Util=shuffle -e 'print grep { !$seen{(split)[1]}++ } shuffle <>' input.txt
def 4
fgh 3
bcd a
abc 1
```
* `-MList::Util=shuffle` to get `shuffle` function from `List::Util` module
* `shuffle <>` here `<>` would get all input lines as array and then gets shuffled
* `grep { !$seen{(split)[1]}++ }` to filter lines based on 2nd field of each array element based on whitespace as separator
With `ruby`
```
$ ruby -e 'puts readlines.shuffle.uniq {|s| s.split[1]}' input.txt
abc 1
bcd a
fgh 3
def 4
```
* `readlines` will get all lines from input file as array
* `shuffle` to randomize the elements
* `uniq` to get unique elements
+ `{|s| s.split[1]}` based on 2nd field value, using whitespace as separator
* `puts` to print the array elements
Upvotes: 1
|
2018/03/16
| 840
| 2,192
|
<issue_start>username_0: I am currently trying to turn this list of dictionaries
```
[
{'<NAME>': [28, '03171992', 'Student']},
{'<NAME>': [22, '02181982', 'Student']},
{'<NAME>': [18, '06291998', 'Student']},
]
```
into a csv file but with a header that includes Name, Age, Date of Birth, and Occupation. Does anyone have any ideas? I can't seem to make it work with the csv writer. Thank you.<issue_comment>username_1: You can try this:
```
import csv
d = [
{'<NAME>': [28, '03171992', 'Student']},
{'<NAME>': [22, '02181982', 'Student']},
{'<NAME>': [18, '06291998', 'Student']},
]
new_d = [i for b in [[[a]+b for a, b in i.items()] for i in d] for i in b]
with open('filename.csv', 'a') as f:
write = csv.writer(f)
write.writerows([['Name', 'Age', 'DOB', 'Occupation']]+new_d)
```
Output:
```
Name,Age,DOB,Occupation
<NAME>,28,03171992,Student
<NAME>,22,02181982,Student
<NAME>,18,06291998,Student
```
Upvotes: 0 <issue_comment>username_2: All you need to do is go through each dictionary, get the name, then use the name to get the other three things in the list. You don't even need to use the csv library as it's pretty simple.
```
data = [
{'<NAME>': [28, '03171992', 'Student']},
{'<NAME>': [22, '02181982', 'Student']},
{'<NAME>': [18, '06291998', 'Student']},
]
f = open("data.csv", "w")
f.write("Name,Age,DOB,Job\n")
for person in data:
for name in person:
f.write(name)
for i in range(3):
f.write("," + str(person[name][i]))
f.write("\n")
f.close()
```
Upvotes: 1 <issue_comment>username_3: This is one solution via `pandas` and `itertools.chain`.
```
from itertools import chain
import pandas as pd
lst = [ {'<NAME>': [28, '03171992', 'Student']},
{'<NAME>': [22, '02181982', 'Student']},
{'<NAME>': [18, '06291998', 'Student']}
]
res = [[name] + list(*details) for name, *details in \
chain.from_iterable(i.items() for i in lst)]
df = pd.DataFrame(res, columns=['Name', 'Age', 'DOB', 'Occupation'])
df.to_csv('file.csv', index=False)
```
Upvotes: 0
|
2018/03/16
| 899
| 2,491
|
<issue_start>username_0: I am trying to create a website that has its main content on home always split (70% top and 30% bottom). I have tried using split.js but the result is still junky as whenever i go below any major breakpoint (1280, 1024, 1980px) everything breaks.
This is the site I am working on - <https://lknahk.ee/>. I will open it to the public soon, but I just can't get the homepage to work correctly on both mobile and desktop, so I am asking here for help with what I should do.
I know vertically splitting the screen is easier because content can flow below, but I don't want any content to push scrollbars on the user.
Link to barebone codepen of the site - <https://codepen.io/bleedeagle/pen/zWBqLj>
Thanks
|
2018/03/16
| 462
| 1,564
|
<issue_start>username_0: [Edit: sorry, mixed up base & subclass initially]
Given
```
class Car : public Vehicle {...}
```
I would like to declare a global variable of type Vehicle (or &Vehicle if need be), initialized as a Car instance... something like
```
Vehicle &myCar = Car(red);
```
... except that the above doesn't compile.
What matters to me is that the instance is allocated in exactly the same way as if I had used (this is for AVR programming)
```
Car myCar(red);
```
I have searched but I just can't figure out the syntax for this??
Thanks!<issue_comment>username_1: You can do this:
```
Car myActualCar(red);
Vehicle &myCar = myActualCar;
```
First create a `Car`, then provide a `Vehicle` reference bound to the car.
If you don't want to expose `myActualCar` to the rest of the file, you can use a lambda function to hide it (this is analoguous to the IIFE technique in JavaScript):
```
Vehicle &myCar = []() -> Car & { static Car myActualCar; return myActualCar; }();
```
This technique requires C++11.
Upvotes: 2 <issue_comment>username_2: If you don't want to have an explicit variable of the derived type:
```
Vehicle &&myCar = Car(red);
```
The rvalue reference will extend the temporary's lifetime as needed. The advantage over [username_1's lambda solution](https://stackoverflow.com/questions/49323438/in-c-how-to-declare-a-global-statically-allocated-variable-with-a-virtual-bas/49323901#comment85648974_49323715) is that that `Car` keeps its automatic lifetime instead of becoming static.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 1,079
| 3,633
|
<issue_start>username_0: I am a beginner in JS and jQuery and can't find a solution to my task.
My goal is: whenever the button with class='add-gold' is clicked, I want to add 20 to the span with class='gold-storage'.
And when the button with class='spend-gold' is clicked, subtract 20 from the same span.
```html
<button class="add-gold">Add Gold</button>
<button class="spend-gold">Spend Gold</button>
<span class="gold-storage">0</span>
```<issue_comment>username_1: You can try this :
```js
var $span = $(".gold-storage");
$(".add-gold").on("click", function() {
var current = parseInt($span.html());
$span.html(current + 20);
});
$(".spend-gold").on("click", function() {
var current = parseInt($span.html());
$span.html(current - 20);
});
```
```html
<button class="add-gold">Add Gold</button>
<button class="spend-gold">Spend Gold</button>
<span class="gold-storage">0</span>
```
Edit :
* If you want to keep your gold positive (min gold = 0), change `$span.html(current - 20);` to `if (current > 0) { $span.html(current - 20); }` so you won't remove gold unless the current amount is > 0 (the minimum amount removed here is 20)
* If you want a single event handler: instead of `$(".add-gold").on("click", function() { ... }` and `$(".spend-gold").on("click", function() { ... }` you can use `$("button").on("click", function() { ... }` and, to know whether to add or remove gold, check the button class: `if ($(this).hasClass('add-gold'))` (returns true or false, with $(this) being the button you clicked).
Is it what you are looking for?
Upvotes: 3 [selected_answer]<issue_comment>username_2: Added a check so you can't have negative gold
```js
var gold = 0;
$(".add-gold").click(function(){
gold = +gold + 20;
$(".gold-storage").text(gold)
});
$(".spend-gold").click(function(){
if (gold >= 20){
gold = +gold - 20;
$(".gold-storage").text(gold)
}
});
```
```html
<button class="add-gold">Add Gold</button>
<button class="spend-gold">Spend Gold</button>
<span class="gold-storage">0</span>
```
Upvotes: 0 <issue_comment>username_3: Here is the solution for your question.
```js
$('.add-gold').click(function(){
var currentstock = parseInt($('.gold-storage').text());
$('.gold-storage').text(currentstock + 20);
});
$('.spend-gold').click(function(){
var currentstock = parseInt($('.gold-storage').text());
$('.gold-storage').text(currentstock - 20);
});
```
```html
<button class="add-gold">Add Gold</button>
<button class="spend-gold">Spend Gold</button>
<span class="gold-storage">0</span>
```
Upvotes: 0 <issue_comment>username_4: Here. I didn't use jQuery as I don't know if you can or want to use it.
Besides, this changes even if the span is empty, and you never get negative values for gold
```js
window.onload = function() {
var gold = document.querySelector('.gold-storage');
function changeGold(quantity) {
var current = parseInt(gold.textContent);
var newGold = isNaN(current) ? quantity : current + quantity;
if (newGold < 0) {
newGold = 0;
}
gold.innerHTML = newGold;
}
document.querySelector('.add-gold').addEventListener('click', function() {
changeGold(20)
});
document.querySelector('.spend-gold').addEventListener('click', function() {
changeGold(-20)
});
}
```
```html
<button class="add-gold">Add Gold</button>
<button class="spend-gold">Spend Gold</button>
<span class="gold-storage">0</span>
```
Upvotes: 0 <issue_comment>username_5: First you can turn the span text into a number so you can add/subtract numbers to/from it.
Then you use conditionals to see which button was clicked so you know whether to add or subtract.
Hope it helps
```js
let button = $("button")
button.on("click", function() {
let number = parseInt($(".gold-storage").text(), 10);
if ($(this).hasClass("add-gold")) {
$(".gold-storage").text(number + 20);
}
if ($(this).hasClass("spend-gold")) {
$(".gold-storage").text(number - 20);
}
})
```
```html
<button class="add-gold">Add Gold</button>
<button class="spend-gold">Spend Gold</button>
<span class="gold-storage">0</span>
```
Upvotes: 0
|
2018/03/16
| 2,205
| 8,647
|
<issue_start>username_0: I am trying to populate my WebView with a custom HTML string, show a progress bar while it is loading, and hide it when finished.
Here is my code:
```
webView.settings.javaScriptEnabled = true
webView.loadDataWithBaseURL(null, presentation.content, "text/html", "utf-8", null)
webView.webViewClient = object : WebViewClient() {
override fun onPageStarted(view: WebView, url: String, favicon: Bitmap) {
super.onPageStarted(view, url, favicon)
webViewProgressBar.visibility = ProgressBar.VISIBLE
webView.visibility = View.INVISIBLE
}
override fun onPageCommitVisible(view: WebView, url: String) {
super.onPageCommitVisible(view, url)
webViewProgressBar.visibility = ProgressBar.GONE
webView.visibility = View.VISIBLE
}
}
```
I am getting this error, which is not pointing to any line of my code:
>
> E/AndroidRuntime: FATAL EXCEPTION: main
>
>
>
```
java.lang.IllegalArgumentException: Parameter specified as non-null is null: method kotlin.jvm.internal.Intrinsics.checkParameterIsNotNull, parameter favicon
at com.hidglobal.tmt.app.mobiBadge.ui.presentation.PresentationActivity$showPresentation$1.onPageStarted(PresentationActivity.kt)
at com.android.webview.chromium.WebViewContentsClientAdapter.onPageStarted(WebViewContentsClientAdapter.java:215)
at org.chromium.android_webview.AwContentsClientCallbackHelper$MyHandler.handleMessage(AwContentsClientCallbackHelper.java:20)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:148)
at android.app.ActivityThread.main(ActivityThread.java:5443)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:728)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
```<issue_comment>username_1: TL; DR
------
The system passes a `null` `favicon` but it is defined as a non-nullable Kotlin type. You can fix it by changing the signature to `favicon: Bitmap?` to make it nullable.
Full response
-------------
The issue is that the method `onPageStarted` is called (by the system) passing a `null` value for the `favicon` parameter. This can happen when Kotlin code interoperates with Java code (values coming from Java are platform types, whose nullability the compiler cannot verify).
Any platform type (e.g. any objects coming from Java) can be `null`, because Java has no special notation to tell that something can or cannot be `null`. For that reason when you use platform types in Kotlin you can choose to either:
* use it "as-is"; the consequence in such case is that (from documentation)
>
> Null-checks are relaxed for such types, so that safety guarantees for them are the same as in Java
>
>
>
Hence you may get `NullPointerException`s, like in the following example:
```
fun main(args: Array<String>) {
val array = Vector<String>() // we need to use Vector as it's not mapped to a Kotlin type
array.add(null)
val retrieved = array[0]
println(retrieved.length) // throws NPE
}
```
* cast it to a specific type (either nullable or non-nullable); in this case the Kotlin compiler will treat it as a "normal" Kotlin type. Example:
```
fun main(args: Array<String>) {
val array = Vector<String>() // we need to use Vector as it's not mapped to a Kotlin type
array.add("World")
val retrieved: String = array[0] // OK, as we get back a non-null String
println("Hello, $retrieved!") // OK
}
```
However, this will throw an exception if you enforce a non-nullable type but then get back `null`. Example:
```
fun main(args: Array<String>) {
val array = Vector<String>() // we need to use Vector as it's not mapped to a Kotlin type
array.add(null)
val retrieved: String = array[0] // we force a non-nullable type but get null back -> throws NPE
println("Hello, World!") // will not reach this instruction
}
```
In such case you can "play it safe" and enforce the variable to be nullable – this will never fail, but could make the code harder to read:
```
fun main(args: Array<String>) {
val array = Vector<String>() // we need to use Vector as it's not mapped to a Kotlin type
array.add(null)
val retrieved: String? = array[0] // OK since we use a nullable type
println("Hello, $retrieved!") // prints "Hello, null!"
}
```
You can use the latter example in your code to cope with the `bitmap` being null:
```
override fun onPageStarted(view: WebView, url: String, favicon: Bitmap?) {
...
}
```
Upvotes: 6 [selected_answer]<issue_comment>username_2: I have the same problem.
```
java.lang.IllegalArgumentException: Parameter specified as non-null is null: method kotlin.jvm.internal.Intrinsics.checkParameterIsNotNull, parameter favicon
at com.haoyong.szzc.module.share.view.activity.WebActivity$MyWebViewClient.onPageStarted(WebActivity.kt:0)
at com.android.webview.chromium.WebViewContentsClientAdapter.onPageStarted(WebViewContentsClientAdapter.java:495)
at com.android.org.chromium.android_webview.AwContentsClientCallbackHelper$MyHandler.handleMessage(AwContentsClientCallbackHelper.java:122)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:135)
at android.app.ActivityThread.main(ActivityThread.java:5313)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1116)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:809)
```
I just do it as follows:
**change**
```
override fun onPageStarted(view: WebView, url: String, favicon: Bitmap) {
super.onPageStarted(view, url, favicon)
}
```
**to**
```
override fun onPageStarted(view: WebView, url: String, favicon: Bitmap?) {
super.onPageStarted(view, url, favicon)
}
```
Kotlin does not allow passing null for a non-nullable parameter, so just change `Bitmap` to `Bitmap?` and it will work well. Hope this helps other people.
Upvotes: 6 <issue_comment>username_3: ```
private void initWebView() {
webView.setWebChromeClient(new MyWebChromeClient(getActivity()));
webView.setWebViewClient(new WebViewClient() {
@Override
public void onPageStarted(WebView view, String url, Bitmap favicon) {
super.onPageStarted(view, url, favicon);
progressBar.setVisibility(View.VISIBLE);
getActivity().invalidateOptionsMenu();
}
@Override
public boolean shouldOverrideUrlLoading(WebView view, String url) {
webView.loadUrl(url);
return true;
}
@Override
public void onPageFinished(WebView view, String url) {
super.onPageFinished(view, url);
progressBar.setVisibility(View.GONE);
getActivity().invalidateOptionsMenu();
}
@Override
public void onReceivedError(WebView view, WebResourceRequest request, WebResourceError error) {
super.onReceivedError(view, request, error);
progressBar.setVisibility(View.GONE);
getActivity().invalidateOptionsMenu();
}
});
webView.clearCache(true);
webView.clearHistory();
webView.getSettings().setJavaScriptEnabled(true);
webView.setHorizontalScrollBarEnabled(false);
webView.setOnTouchListener(new View.OnTouchListener() {
public boolean onTouch(View v, MotionEvent event) {
if (event.getPointerCount() > 1) {
//Multi touch detected
return true;
}
switch (event.getAction()) {
case MotionEvent.ACTION_DOWN: {
// save the x
m_downX = event.getX();
}
break;
case MotionEvent.ACTION_MOVE:
case MotionEvent.ACTION_CANCEL:
case MotionEvent.ACTION_UP: {
// set x so that it doesn't move
event.setLocation(m_downX, event.getY());
}
break;
}
return false;
}
});
}
private class MyWebChromeClient extends WebChromeClient {
Context context;
public MyWebChromeClient(Context context) {
super();
this.context = context;
}
}
```
Inside `onCreate`:
```
initWebView();
binding.webView.loadUrl(urlName);
```
Just try it for a smooth WebView URL load.
Upvotes: -1
|
2018/03/16
| 345
| 1,276
|
<issue_start>username_0: in Laravel documentation i found detach() method to detach all objects in many to many relationships. Can detach() method also be applied to one to many relationship in Laravel? if not, how can I detach all n objects in this case?<issue_comment>username_1: From the [docs](https://laravel.com/docs/5.5/eloquent-relationships#updating-belongs-to-relationships)
>
> When removing a belongsTo relationship, you may use the dissociate
> method. This method will set the relationship's foreign key to null
>
>
>
```
$user->account()->dissociate();
$user->save();
```
Upvotes: 3 <issue_comment>username_2: In many-to-many relationships, the detach() method only deletes the pivot entry in your database, unless you have specific cascade deleting.
For a one-to-many relationship, you want to use the dissociate() method to unbind the relation and associate() to bind it, on the belongsTo side.
```
$comment->post()->associate($post->id);
```
On the contrary, you would add a comment using save() on the hasMany side (attach() only exists on belongsToMany relations):
```
$post->comments()->save($comment);
```
To delete all comments you would do :
```
$post->comments()->delete();
```
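If instead you want to detach all children without deleting them (the one-to-many analogue of detach()), and the foreign key column is nullable, a mass update works; the column name post_id here is an assumption:
```
// sets the foreign key to null for every related comment
$post->comments()->update(['post_id' => null]);
```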
More info here:
<https://laravel.com/docs/5.6/eloquent-relationships>
Upvotes: 4 [selected_answer]
|
2018/03/16
| 1,425
| 5,006
|
<issue_start>username_0: I have the following problem: I have 2 classes, FooClass and BaseClass, and multiple subclasses of BaseClass.
I want to add these various subclasses into the same vector in FooClass, because I am just implementing functions from BaseClass, so I can access them through the vector index.
In the following example, each subclass sets the string name of the BaseClass with setName(), and returns it with getName().
Every subclass also uses thisFunctionisforAll(), defined in the BaseClass.
The code compiles fine, except if I add vClasses.push_back(thesubclass);
So I need help with how I can put all these subclasses of BaseClass into the same vector.
I want to iterate through the various subclasses of BaseClass in the FooClass vector to output their names.
An example is in main.cpp.
I thought I could add different subclasses to a vector if they share the base class and the vector is of the base class type.
Here is the source:
```
FooClass.h:
#ifndef TESTPROJECT_FOOCLASS_H
#define TESTPROJECT_FOOCLASS_H
#include <vector>
#include "BaseClass.h"
using namespace std;
class FooClass
{
private:
vector<BaseClass> vClasses;
public:
void addClassToVector(BaseClass &classToAdd);
void getNames();
};
#endif //TESTPROJECT\_FOOCLASS\_H
```
---
```
FooClass.cpp
#include "FooClass.h"
void FooClass::addClassToVector(BaseClass &thesubclass)
{
vClasses.push_back(thesubclass);
}
void FooClass::getNames()
{
for (size_t i = 0; i < vClasses.size(); i++)
{
cout << vClasses[i].getName() << endl;
}
}
```
---
```
BaseClass.h
#ifndef TESTPROJECT_BASECLASS_H
#define TESTPROJECT_BASECLASS_H
#include <string>
using namespace std;
class BaseClass
{
protected:
string name;
public:
virtual void setName()= 0;
virtual string getName()=0;
void thisFunctionisforAll();
};
#endif //TESTPROJECT\_BASECLASS\_H
```
---
```
BaseClass.cpp
#include "BaseClass.h"
void BaseClass::thisFunctionisforAll() {
cout << "Every subclass uses me without implementing me" << endl;
}
```
---
```
SubClass.h
#ifndef TESTPROJECT_SUBCLASS_H
#define TESTPROJECT_SUBCLASS_H
#include "BaseClass.h"
class SubClass : public BaseClass {
virtual void setName();
virtual string getName();
};
#endif //TESTPROJECT_SUBCLASS_H
```
---
```
SubClass.cpp
#include "SubClass.h"
void SubClass::setName()
{
BaseClass::name = "Class1";
}
string SubClass::getName() {
return BaseClass::name;
}
```
---
```
SubClass2.h
#ifndef TESTPROJECT_SUBCLASS2_H
#define TESTPROJECT_SUBCLASS2_H
#include "BaseClass.h"
class SubClass2 : public BaseClass
{
virtual void setName();
virtual string getName();
};
#endif //TESTPROJECT_SUBCLASS2_H
```
---
```
SubClass2.cpp
#include "SubClass2.h"
void SubClass2::setName()
{
BaseClass::name = "Class 2";
}
string SubClass2::getName() {
return BaseClass::name;
}
```
---
```
main.cpp
#include "FooClass.h"
void FooClass::addClassToVector(BaseClass &thesubclass)
{
vClasses.push_back(thesubclass);
}
void FooClass::getNames()
{
for (size_t i = 0; i < vClasses.size(); i++)
{
cout << vClasses[i].getName() << endl;
}
}
```
I think the solution is simple, but I am experienced in PHP and there I didn't have such issues.<issue_comment>username_1: You need to use pointers or references. Runtime polymorphism in C++ only works through pointers or references; you cannot treat a subclass as a superclass by value. You'd need a `std::vector<BaseClass*>` to be able to have a container of both the base class and subclasses.
Since you're new to the language, I would recommend researching how pointers work.
Upvotes: 2 <issue_comment>username_2: Containers like vectors contain things directly, not references to things (as found in, e.g., Java - not sure about PHP). If you've got a class `Foo`, then a `std::vector<Foo>` will contain `Foo` instances. If there's a class `Bar` that *derives from `Foo`* and you put a `Bar` into this vector, you'll only get the `Foo` part of it. The rest is cut off, or *sliced*, as described here: [C++ - What is object slicing?](https://stackoverflow.com/questions/274626/what-is-object-slicing). This is the way it works in C++.
Now, you can put pointers into the vector, e.g., a `vector<Foo*>`. But the vector object won't *own* the things pointed to (as it would in a language like Java, for example); it'll just be holding pointers, so you have to manage object lifetime separately. This is a defining feature of C++. If you want the vector to *own* the objects, you've got to put your pointers in there as a specific kind of wrapped pointer, usually `std::unique_ptr` or `std::shared_ptr`.
But now you're getting into more complex yet absolutely fundamental C++ territory, and you'll need to understand a lot about how ownership and containers work. Still, there are plenty of ways to learn that, and Stack Overflow's `c++` tag has a lot of questions on those topics with useful answers for you.
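To make that concrete, here's a minimal sketch of `FooClass` holding owning pointers, reusing the class names from the question (assuming the `BaseClass`, `SubClass`, and `SubClass2` headers compile as posted):
```
#include <iostream>
#include <memory>
#include <vector>
#include "BaseClass.h"
#include "SubClass.h"
#include "SubClass2.h"

class FooClass {
    // unique_ptr makes the vector the owner of the pointed-to objects.
    std::vector<std::unique_ptr<BaseClass>> vClasses;
public:
    void addClassToVector(std::unique_ptr<BaseClass> c) {
        vClasses.push_back(std::move(c));
    }
    void getNames() {
        for (const auto& c : vClasses) {
            std::cout << c->getName() << std::endl; // virtual dispatch, no slicing
        }
    }
};

int main() {
    FooClass foo;
    std::unique_ptr<BaseClass> s1 = std::make_unique<SubClass>();
    s1->setName(); // declared in BaseClass, dispatched to SubClass
    foo.addClassToVector(std::move(s1));
    std::unique_ptr<BaseClass> s2 = std::make_unique<SubClass2>();
    s2->setName();
    foo.addClassToVector(std::move(s2));
    foo.getNames(); // prints "Class1" then "Class 2"
}
```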
Upvotes: 0
|
2018/03/16
| 475
| 1,912
|
<issue_start>username_0: I'd like to redirect the user to the general settings page, not the application settings page (with permissions, etc.). Is that possible and if so, how?
|
2018/03/16
| 492
| 1,594
|
<issue_start>username_0: Layout for 5 players:
[](https://i.stack.imgur.com/nAscX.jpg) [](https://i.stack.imgur.com/C70bW.jpg)
Layout for 6 players:
[](https://i.stack.imgur.com/2diJd.jpg) [](https://i.stack.imgur.com/eUbd5.jpg)
In the 6-player layout, when you click the first player, it gets weird (right image).
Does anyone know what is wrong?
Thanks for any help.
My activity code:
```
//edit texts...
```<issue_comment>username_1: In your activity manifest just add this
```
android:windowSoftInputMode="adjustPan"
```
The documentation for `adjustPan` states that
>
> The activity's main window is not resized to make room for the soft keyboard. Rather, the contents of the window are automatically panned so that the current focus is never obscured by the keyboard and users can always see what they are typing
>
>
>
Please see the official doc [here](https://developer.android.com/guide/topics/manifest/activity-element.html#wsoft)
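For context, that attribute goes on the relevant `activity` element in `AndroidManifest.xml`; a minimal sketch (the activity name here is just a placeholder):
```
<activity
    android:name=".GameActivity"
    android:windowSoftInputMode="adjustPan" />
```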
Upvotes: 2 <issue_comment>username_2: This seems to happen because of the keyboard that is popping up. Use
```
android:screenOrientation="nosensor"
android:windowSoftInputMode="stateAlwaysHidden"
```
Use `nosensor` if you want to run the app only in portrait mode. Use `stateAlwaysHidden` for `windowSoftInputMode` if you don't want Android to pop that keyboard up every time your activity opens.
Check out this [link](https://developer.android.com/guide/topics/manifest/activity-element.html) to the official documentation.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 1,034
| 4,228
|
<issue_start>username_0: I want to use `pytest` to check if the `argparse.ArgumentTypeError` exception is raised for an incorrect argument:
```
import argparse
import os
import pytest


def main(argsIn):
    def configFile_validation(configFile):
        if not os.path.exists(configFile):
            msg = 'Configuration file "{}" not found!'.format(configFile)
            raise argparse.ArgumentTypeError(msg)
        return configFile

    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--configFile', help='Path to configuration file', dest='configFile', required=True, type=configFile_validation)
    args = parser.parse_args(argsIn)


def test_non_existing_config_file():
    with pytest.raises(argparse.ArgumentTypeError):
        main(['--configFile', 'non_existing_config_file.json'])
```
However, running `pytest` says `During handling of the above exception, another exception occurred:` and consequently the test fails. What am I doing wrong?<issue_comment>username_1: The problem is that if an argument's type converter raises an `ArgumentTypeError`, `argparse` [exits](https://docs.python.org/3/library/argparse.html#invalid-arguments) with error code 2, and exiting means raising the builtin exception `SystemExit`. So you have to catch that exception and verify that the original exception is of the proper type:
```
def test_non_existing_config_file():
    try:
        main(['--configFile', 'non_existing_config_file.json'])
    except SystemExit as e:
        assert isinstance(e.__context__, argparse.ArgumentError)
    else:
        raise ValueError("Exception not raised")
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Here's the `ArgumentTypeError` test in the `test_argparse.py` file (found in the development repository).
`ErrorRaisingArgumentParser` is a subclass defined at the start of the file which redefines the `parser.error` method so that it doesn't exit and instead puts the error message on `stderr`. That part's a bit complicated.
Because of that redirection, the test can't check for `ArgumentTypeError` directly. Instead it has to test for its message.
```
# =======================
# ArgumentTypeError tests
# =======================
class TestArgumentTypeError(TestCase):

    def test_argument_type_error(self):

        def spam(string):
            raise argparse.ArgumentTypeError('spam!')

        parser = ErrorRaisingArgumentParser(prog='PROG', add_help=False)
        parser.add_argument('x', type=spam)
        with self.assertRaises(ArgumentParserError) as cm:
            parser.parse_args(['XXX'])
        self.assertEqual('usage: PROG x\nPROG: error: argument x: spam!\n',
                         cm.exception.stderr)
```
Upvotes: 1 <issue_comment>username_3: Using `pytest` you can do the following in order to check that `argparse.ArgumentError` is raised. Additionally, you can check the error message.
```
with pytest.raises(SystemExit) as e:
    main(['--configFile', 'non_existing_config_file.json'])

assert isinstance(e.value.__context__, argparse.ArgumentError)
assert 'expected err msg' in e.value.__context__.message
```
Upvotes: 1 <issue_comment>username_4: Inspired by @Giorgos's answer, here is a small context manager that makes the message extraction a bit more reusable. I'm defining the following in a common place:
```py
import argparse
import pytest

from contextlib import contextmanager
from typing import Generator, Optional


class ArgparseErrorWrapper:
    def __init__(self):
        self._error: Optional[argparse.ArgumentError] = None

    @property
    def error(self):
        assert self._error is not None
        return self._error

    @error.setter
    def error(self, value: object):
        assert isinstance(value, argparse.ArgumentError)
        self._error = value


@contextmanager
def argparse_error() -> Generator[ArgparseErrorWrapper, None, None]:
    wrapper = ArgparseErrorWrapper()
    with pytest.raises(SystemExit) as e:
        yield wrapper
    wrapper.error = e.value.__context__
```
This allows testing for parser errors concisely:
```py
def test_something():
    with argparse_error() as e:
        # some parse_args call here
        ...

    assert "Expected error message" == str(e.error)
```
Upvotes: 0
|
2018/03/16
| 745
| 1,922
|
<issue_start>username_0: I'm trying to write a Postgres function that will sanitize a list of numbers into a list of comma-separated numbers. These numbers are entered into an input field. I want to allow users to just enter a line of space-separated numbers (ex: `1 3 4 12`) and have it changed to `1,3,4,12`.
But if they do enter it correctly (ex: `1,3,4,12` or `1, 3, 4, 12`), I still want it sanitized to `1,3,4,12`. I also have to account for mixed input (ex: `1, 3 4, 12`).
This is what I'm currently doing:
```
select regexp_replace(trim(list_of_numbers), '[^0-9.] | [^,]', ',', 'g')
```
If I have a list like this:
```
select regexp_replace(trim('1, 2, 4, 14'), '[^0-9.] | [^,]', ',', 'g')
```
it returns `"1,2,4,14"`, so that's good.
But, if I have a list like this:
```
select regexp_replace(trim('1 2 4 14'), '[^0-9.] | [^,]', ',', 'g')
```
it returns: `"1,,,4"`<issue_comment>username_1: You could split the string on any amount of whitespace or commas, `(,|\s)+`, and join it back together using commas:
```
select array_to_string(regexp_split_to_array('1 2 4 14', '(,|\s)+'), ',');
```
Upvotes: 0 <issue_comment>username_2: I think the best option is to convert to an array using `regexp_split_to_array` then turn that back into a string:
The following:
```
with t(input) as (
values
('1 3 4 12'),
('1,3,4,12'),
('1, 3 4, 12'),
('1,3,4,12'),
('1 3 4 , 12'),
(' 1, 2 , 4 12 ')
)
select array_to_string(regexp_split_to_array(trim(input),'(\s+)|(\s*,\s*)'), ',')
from t;
```
returns:
```
array_to_string
---------------
1,3,4,12
1,3,4,12
1,3,4,12
1,3,4,12
1,3,4,12
1,3,4,12
```
Upvotes: 0 <issue_comment>username_3: If you change your regex to `[^0-9.]+`, it'll replace every run of non-numeric characters (spaces, commas, or any mix of the two) with a single `,`.
[Try it out here](https://regex101.com/r/qAqVY5/1)
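For instance, applying that change to the inputs from the question (a quick sketch of the same `regexp_replace` call):
```
select regexp_replace(trim('1 2 4 14'), '[^0-9.]+', ',', 'g');   -- returns 1,2,4,14
select regexp_replace(trim('1, 3 4, 12'), '[^0-9.]+', ',', 'g'); -- returns 1,3,4,12
```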
Upvotes: 2 [selected_answer]
|
2018/03/16
| 745
| 2,776
|
<issue_start>username_0: In my React Application I am using an API which is provided at runtime as a global variable by the Browser in which the application runs.
To make the Webpack compilation process work I have added this into the webpack config:
```
externals: {
    overwolf: 'overwolf'
}
```
It is then imported like this
```
import overwolf from 'overwolf'
```
This works fine when I build my production application and run it in the browser.
However, for the webpack development server as well as my tests, I want to be able to run them from a standard browser where the external will not be available. I am not quite sure how to make this work, as the dev server will always complain about the import, and my attempts at conditional imports have not worked out so far.
What I would like to achieve is to mock the overwolf variable, so that webpack dev server will compile and let me run my code with the mocked version.
My attempt was like this
```
import overwolf from 'overwolf'
export function overwolfWrapper() {
    if (process.env.NODE_ENV !== 'production') {
        return false;
    } else {
        return overwolf;
    }
}
```
Which results in the following error on the webpack development server
```
ReferenceError: overwolf is not defined
overwolf
C:/Users/jakob/Documents/private/projects/koreanbuilds-overwolf2/external "overwolf":1
```<issue_comment>username_1: One possible solution is to keep using the `overwolf` defined as an `external` (read more [here](https://stackoverflow.com/a/49215988/7248949)), and use a polyfill for other browsers:
In your `index.html` include an `overwolf.js` script which will provide the mock object to use.
Example using `HtmlWebpackPlugin` and `html-webpack-template` to generate the `index.html` as part of the build process. Include in your `plugins` array:
```
new HtmlWebpackPlugin({
    template: './node_modules/html-webpack-template/index.ejs',
    inject: false,
    scripts: ['/overwolf.js']
})
```
And this is an example for the included `overwolf.js` previously:
```
if (!window.overwolf) {
    window.overwolf = {
        foo() {
            console.info('overwolf now has foo function!');
        }
    };
}
```
Hope this helps!
Check also this [webpack-demo](https://github.com/carloluis/webpack-demo) project. I think it would help you with some configurations.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I also found a rather simple solution on my own.
Instead of importing the external this also works:
```
const overwolf = process.env.NODE_ENV === 'production' ? require('overwolf') : new MockedOverwolf();
```
Webpack will not complain about this in the dev environment, and in production `require` will still give me the real API.
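Note that `MockedOverwolf` is not part of any library - it's a stub class you define yourself. A minimal hypothetical sketch (which members you mock depends entirely on which parts of the API your code actually calls):
```
// Hypothetical stub: fill in whichever overwolf members your code uses.
class MockedOverwolf {
  constructor() {
    this.version = 'mock';
    this.windows = {
      // Mimics the real API's callback style with canned data.
      getCurrentWindow(callback) {
        callback({ status: 'success', window: { id: 'dev-window' } });
      }
    };
  }
}
```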
Upvotes: 0
|
2018/03/16
| 321
| 1,432
|
<issue_start>username_0: I am currently testing out AppX for a customer. AppX is fairly easy for small applications such as Adobe Reader and so on.
Does anyone have any experience packaging larger applications with multiple MSI files, and possibly registry changes, file changes, and so on?
How do you specify multiple MSI files? Is that even possible?
The documentation is not specific about how to achieve that.<issue_comment>username_1: With Desktop App Converter, making customisations while creating AppX packages is not straightforward. It may be easier to use one of the application repackaging tools, which let you capture any number of MSIs and make the necessary customisations, including any changes to the registry or file system. Here's a list of some such tools on the Microsoft website: <https://learn.microsoft.com/en-us/windows/uwp/porting/desktop-to-uwp-root>
Upvotes: 2 [selected_answer]<issue_comment>username_2: Converting the packages is now much easier with the new [free Express edition from Advanced Installer](https://www.advancedinstaller.com/express-edition.html), developed in partnership with Microsoft.
It has a GUI that allows for advanced customization of the APPX packages, without requiring you to know the internal package schemas.
If you have any questions about it, let me know, would love to help.
*Disclaimer: I work on the team that builds Advanced Installer.*
Upvotes: 0
|