date
stringlengths 10
10
| nb_tokens
int64 60
629k
| text_size
int64 234
1.02M
| content
stringlengths 234
1.02M
|
|---|---|---|---|
2018/03/16
| 575
| 2,014
|
<issue_start>username_0: I have to print a formatted double matrix in Java, with the line number in the first column. I'm using this code:
```
for (int i = 0; i < 80; i++) {
    System.out.printf("%d", i);
    for (int j = 0; j < 80; j++) {
        System.out.printf("%5f", P[i][j]);
        System.out.print(" ");
    }
    System.out.println();
}
```
Part of the output is:
[](https://i.stack.imgur.com/DaxYz.png)
The float part is nice, but I don't understand why it adds a zero to the integers (in fact it looks like they are multiplied by ten???) and extra decimal zeroes. How can I solve this?<issue_comment>username_1: Your code is not multiplying the integers by ten. It just seems that way because there is no space between the line number and the first double value.
Just add another call to
```
System.out.print(" ");
```
after this line:
```
System.out.printf("%d", i);
```
Then you should see the line numbers separated from the matrix.
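Putting the fix together, the corrected loop might look like this (a small 3×3 matrix stands in for the 80×80 `P` from the question; the class name is made up):

```java
public class MatrixPrint {
    public static void main(String[] args) {
        double[][] P = {{0.5, 1.25, 2.0}, {3.75, 4.5, 5.0}, {6.25, 7.0, 8.5}};
        for (int i = 0; i < P.length; i++) {
            System.out.printf("%d", i);
            System.out.print(" ");              // the missing space after the line number
            for (int j = 0; j < P[i].length; j++) {
                System.out.printf("%5f", P[i][j]);
                System.out.print(" ");
            }
            System.out.println();
        }
    }
}
```

Note that `%5f` keeps the default six decimal places; something like `%8.2f` would give fixed-width columns with two decimals instead.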
Upvotes: 3 [selected_answer]<issue_comment>username_2: Unfortunately the decimal separator is implemented as a single char, and the decimals are always present. I have made it an underscore for demonstration; a space might be usable.
When a number has an integer part, as in 1.2, you seem to want to remove that part; I did that below.
```
DecimalFormat numberFormat = new DecimalFormat(".00000");
numberFormat.setMaximumIntegerDigits(0);
numberFormat.setDecimalSeparatorAlwaysShown(true); // Not sometimes gone.
DecimalFormatSymbols symbols = numberFormat.getDecimalFormatSymbols();
symbols.setDecimalSeparator('_');
numberFormat.setDecimalFormatSymbols(symbols);
for (int i = 0; i < 4; i++) {
    System.out.printf("%d", i); // No space after!
    for (int j = 0; j < 4; j++) {
        System.out.print(numberFormat.format(0.4 * i));
        //System.out.printf("%5f", 0.4 * i);
        System.out.print(" ");
    }
    System.out.println();
}
```
Upvotes: 0
|
2018/03/16
| 1,077
| 4,062
|
<issue_start>username_0: **Update**
My small showcase is stored on Bitbucket
<https://bitbucket.org/solvapps/animationtest>
I have an Activity with a view in it. The content view is set to this view.
```
public class MainActivity extends AppCompatActivity {
    private MyView myView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        myView = new MyView(this);
        setContentView(myView);
        startMovie();
    }

    public void startMovie() {
        MovieTask movieTask = new MovieTask(myView, this);
        movieTask.doInBackground(null);
    }
}
```
`MovieTask` is an `AsyncTask` that refreshes the view periodically.
But `invalidate()` doesn't refresh the view.
```
public class MovieTask extends AsyncTask<String, String, String> {
    MyView drawingView;
    MainActivity mainActivity;

    public MovieTask(MyView view, MainActivity mainActivity) {
        this.mainActivity = mainActivity;
        this.drawingView = view;
    }

    @Override
    protected String doInBackground(String... strings) {
        for (int i = 20; i < 100; i++) {
            drawingView.myBall.goTo(i, i);
            publishProgress();
            try {
                Thread.sleep(20);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        return null;
    }

    @Override
    protected void onProgressUpdate(String... values) {
        super.onProgressUpdate(values);
        mainActivity.runOnUiThread(new Runnable() {
            @Override
            public void run() {
                Log.v("DEBUG_DRAW", "in onProgressUpdate()");
                drawingView.invalidate();
            }
        });
    }
}
```
Can someone help?<issue_comment>username_1: There are two possible causes. First, as described in the [documentation](https://developer.android.com/reference/android/view/View.html#invalidate()):
>
> void invalidate ()
> Invalidate the whole view. If the view is visible,
> onDraw(android.graphics.Canvas) will be called at some point in the
> future.
>
>
>
So try to run your code in `onResume`; there is a chance that the `View` is not visible yet.
Second, `View#invalidate` tells the system to redraw the view as soon as the main UI thread goes idle. That is, calling `invalidate` schedules your view to be redrawn after all other immediate work has finished.
If you'd like to have your view updated periodically, use `Handler#postDelayed`, or run it in a separate thread and use `View#postInvalidate` to update the `View` and trigger the call to `onDraw`.
Upvotes: 0 <issue_comment>username_2: See how you are launching the `AsyncTask`:
```
public void startMovie() {
    MovieTask movieTask = new MovieTask(myView, this);
    movieTask.doInBackground(null);
}
```
You are manually calling a method inside a class called `MovieTask`, thus you are running the code on the same thread. Obviously, that is not your intention; you intended to run the computation code on a background thread.
The correct way to launch an `AsyncTask` is using [`execute(Params...)`](https://developer.android.com/reference/android/os/AsyncTask.html#execute(Params...)):
```
public void startMovie() {
    MovieTask movieTask = new MovieTask(myView, this);
    movieTask.execute("");
}
```
Now you will get the desired effect.
---
P.S.
Please, do not use that code: you do not need to launch a background thread in order to do that kind of stuff. As an alternative consider [Animators API](https://developer.android.com/reference/android/animation/Animator.html).
Declare `setBall(int pos)` method inside `MyBall` class:
```
public class MyView extends View {
    ...
    public void setBall(int pos) {
        myBall.setX(pos);
        myBall.setY(pos);
        invalidate();
    }
}
```
Then change `startMovie()` to following:
```
public void startMovie() {
    // "ball" means that the Animators API will look for a public setBall(int)
    // method inside MyView.java and call it
    ObjectAnimator ball = ObjectAnimator.ofInt(myView, "ball", 20, 100);
    ball.setDuration(1000);
    ball.start();
}
```
You'll get the same animation without the nasty code.
Upvotes: 4 [selected_answer]
|
2018/03/16
| 1,832
| 6,719
|
<issue_start>username_0: I am new to Ionic and I am following the [ionic framework documents](https://ionicframework.com/docs/api/components/action-sheet/ActionSheetController/) to learn it.
Here is my method's code: **hello-ionic.ts**
```
openActionSheet() {
  let actionSheet = this.actionsheetCtrl.create({
    title: 'Modify your album',
    cssClass: 'page-hello-ionic',
    buttons: [
      {
        text: 'Delete',
        role: 'destructive', // will always sort to be on top
        icon: !this.platform.is('ios') ? 'trash' : null,
        handler: () => {
          console.log('Delete clicked');
        }
      },
      {
        text: 'Play',
        icon: !this.platform.is('ios') ? 'arrow-dropright-circle' : null,
        handler: () => {
          console.log('Play clicked');
        }
      },
      {
        text: 'Favorite',
        icon: !this.platform.is('ios') ? 'heart-outline' : null,
        handler: () => {
          console.log('Favorite clicked');
        }
      },
      {
        text: 'Cancel',
        role: 'cancel', // will always sort to be on the bottom
        icon: !this.platform.is('ios') ? 'close' : null,
        handler: () => {
          console.log('Cancel clicked');
        }
      }
    ]
  });
  actionSheet.present();
}
```
The code works fine, but I want to know where the `console.log()` output is printed. Can anyone help me with that?<issue_comment>username_1: To check the console log you can use the browser and run the commands below:
**Step 1:** `$ionic serve` (will run your app on localhost)
**Step 2:** In the browser where your app is running (Chrome, Safari, etc.), right-click and **inspect** your app as per the screenshot below.
[](https://i.stack.imgur.com/xyyQt.png)
**Step 3:** You will get a window with HTML elements on the right side of your browser window and on the left your app screen.
[](https://i.stack.imgur.com/N4RcL.png)
**Step 4:** In the right-side window, you can find the "Console" option in the top menu bar. Click on it and you will get the console, where you can find the **logs**, **errors**, or **warnings** that your Ionic app generated.
[](https://i.stack.imgur.com/RHYKq.png)
[](https://i.stack.imgur.com/JEwck.png)
**EDIT:**
*Real Device or Emulator debugging*
To check `real-device`, `emulator`, or `genymotion` console logs, follow the steps and screenshots below.
**Step 1:** Run this command to run your app on real-device or emulator
```
$ionic cordova run android
```
**Step 2:** After successfully launching the app on a device or emulator, go to the Chrome browser, right-click, then click **"Inspect"**, and you will see the screen below at the bottom of your browser.
[](https://i.stack.imgur.com/WrkD9.png)
**Step 3:** Clicking **"Remote devices"** will show the list of connected real devices and emulators.
From that device list, click the **"Inspect"** button to the right of the device name (*see the screenshot*); this will open a new window with a mirror of your device. Now the console is yours; play around with the debugger.
[](https://i.stack.imgur.com/Astum.png)
[](https://i.stack.imgur.com/zXGp4.png)
Hope this will help you to debug your app.
Upvotes: 5 [selected_answer]<issue_comment>username_2: When I test websites on mobile using Ionic, I usually prefer not to use the `Remote devices` window, as I need to choose the mobile, open tons of tabs in my mobile browser, then `inspect`, and after a refresh on mobile it disconnects...
For real debugging purposes, I prefer running Ionic with `-c`, like so; then I see the console in the terminal without anything being disconnected.
```
my-server$ ionic serve -c --no-open --address 192.168.1.112
[INFO] Development server running!
Local: http://192.168.1.112:8100
Use Ctrl+C to quit this process
```
Browsing in my mobile device to: <http://192.168.1.112:8100>
logs will appear in terminal where I run Ionic cli:
```
[app-scripts] [09:49:42] console.log: Angular is running in the development mode. Call enableProdMode() to enable the production
[app-scripts] mode.
[app-scripts] [09:49:42] console.log: cookieEnabled: true
```
Upvotes: 2 <issue_comment>username_3: >
> If you want to get live console logs in the Ionic application (real
> device or emulator):
>
>
>
```
ionic cordova run android --livereload --consolelogs --serverlogs
```
Upvotes: 3 <issue_comment>username_4: You can use Chrome inspect to get console logs while debugging with an emulator.
Open Chrome and enter the following:
`chrome://inspect/#devices`
You will see the Ionic device listed in the devices; you can inspect it and check the `console.log` output as described [in this video](https://youtu.be/NBXjlIRw4ek)
Upvotes: 3 <issue_comment>username_5: ### Test on iOs
1. Run `ionic capacitor run ios --livereload`; this will show you a list of the iPhone emulator options. Choose the one that you want and it will be opened.
2. After the application opens in the emulator, open the Safari browser, then in the top menu go to `Develop -> Simulator -> localhost`; that will open a window which shows you the logs that you want.
### Test on Android
1. Run `ionic capacitor run android --livereload`; this will show you a list of the Android device emulators that you have created in Android Studio (be sure to have Android Studio and adb properly installed, updated, and configured). Choose the one that you want and it will be opened.
2. After the application opens in the emulator, open the Google Chrome browser, go to `chrome://inspect/#devices`, and find your emulated or physical device in the bottom list (sometimes it can take a few minutes to appear). Then click `inspect`; a console window will open and you will be able to see all your logs.
NOTE: This has been done and tested on Ionic 5
Upvotes: 2 <issue_comment>username_6: Please use web browser to see the console logs.
Run the `ionic serve` command by terminal to run the ionic in your browser.
If you couldn't see the console logs in your browser, simply use `console.warn`... it's a alternative solution to see the logs
Upvotes: 0
|
2018/03/16
| 257
| 976
|
<issue_start>username_0: I changed the images directly in the code of the home page in the panel,
and now the block is showing on the home page only. Before doing this it was showing on every product page. Any help will be appreciated.
Home Page - Screenshot
[](https://i.stack.imgur.com/0m40Q.png)
Product page - Screenshot
[](https://i.stack.imgur.com/YK0LK.png)<issue_comment>username_1: I think this is a CMS block which is called in different positions. First find the block ID for it, and from the admin panel disable that block if you want to remove it. Thanks
Upvotes: 0 <issue_comment>username_2: I think on the home page you have used the 2columns-left layout and for the product detail page the 2columns-right layout, so you have to check both layouts' phtml files and see if anything (e.g. a CMS block) is missing for the product detail page.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 1,145
| 4,164
|
<issue_start>username_0: I have a use-case where I need a temporary AWS STS token made available for each authenticated user (authenticated using the company IDP). These tokens will be used to push some data into AWS S3. I am able to get this flow by using the SAML assertion in the IDP response and integrating with AWS as SP (IDP-initiated sign-on), similar to the one shown here:
<https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html#CreatingSAML-configuring>
But as STS allows a token validity of at most 1 hour, I want to refresh those tokens before expiry so that I don't have to prompt the user for credentials again (bad user experience). Also, as these are company login credentials, I can't store them in the application.
I was looking at AWS IAM trust policy, and one way to do this is adding 'AssumeRole' entry to the existing SAML trust policy as shown below (second entry in the policy)
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::xxxxxxxxxxxx:saml-provider/myidp.com"
      },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": {
        "StringEquals": {
          "SAML:aud": "https://signin.aws.amazon.com/saml"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:sts::xxxxxxxxxxxx:assumed-role/testapp/testuser"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
So when testuser logs in for the first time and uses the AssumeRoleWithSAML API/CLI, he will get temporary credentials. Next, he can use the 'AssumeRole' API/CLI with those credentials, so that he can keep refreshing the tokens without requiring IDP credentials.
As can be seen, this works for refreshing tokens only for the STS user with ARN "arn:aws:sts::xxxxxxxxxxxx:assumed-role/testapp/testuser", as he/she can assume that role. But I need a generic way, where any logged-in user can generate STS tokens.
One way is to use wildcard characters in the trust policy Principal, but it looks like that is not supported. So I am stuck with asking for credentials every time the tokens expire. Is there a way to solve this?
thanks,
Rohan.<issue_comment>username_1: AWS STS supports longer role sessions (up to 12 hours) for the AssumeRole\* APIs. This was launched on 3/28/18; here is the AWS what's-new link: <https://aws.amazon.com/about-aws/whats-new/2018/03/longer-role-sessions/>. With that you need not do a refresh, as I assume a typical workday is < 12 hours :-)
Upvotes: 0 <issue_comment>username_2: I have been able to get this working by specifying a role instead of an assumed-role in the IAM trust policy. Now my users can indefinitely refresh their tokens if they have assumed the `testapp` role.
```
"Principal": {
"AWS": "arn:aws:sts::xxxxxxxxxxxx:role/testapp"
},
```
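With a role-based trust policy like this, a session can re-assume the same role before it expires. A rough sketch of what that refresh could look like in Python (`boto3` is assumed to be installed; the role ARN, session name, and 5-minute margin are all made up for illustration):

```python
from datetime import datetime, timedelta, timezone


def needs_refresh(expiration, margin_minutes=5):
    """True when the temporary credentials expire within the safety margin."""
    return datetime.now(timezone.utc) >= expiration - timedelta(minutes=margin_minutes)


def refresh(current_creds, role_arn):
    """Assume the same role again using the still-valid temporary credentials."""
    import boto3  # assumed available (pip install boto3)
    sts = boto3.client(
        "sts",
        aws_access_key_id=current_creds["AccessKeyId"],
        aws_secret_access_key=current_creds["SecretAccessKey"],
        aws_session_token=current_creds["SessionToken"],
    )
    resp = sts.assume_role(RoleArn=role_arn, RoleSessionName="token-refresh")
    # Credentials contains new AccessKeyId/SecretAccessKey/SessionToken/Expiration
    return resp["Credentials"]
```

The application would poll `needs_refresh` on the current credentials' `Expiration` and call `refresh` before the hour is up.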
Upvotes: 2 <issue_comment>username_3: Your question is one I was working on solving myself, we have a WPF Desktop Application that is attempting to log into AWS through Okta, then use the [AssumeRoleWithSaml](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithSAML.html) API to get the STS Token.
Using this flow invoked the [Role Chaining](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-role-chaining) rules and thus our token would expire every hour.
What I did to overcome this is to cache the initial SAMLResponse Data from Okta (after the user does MFA) and use that information to ask for a new Token every 55 minutes. I then use that new token for any future AWS resource calls.
Once 12 hours passes, I ask the user to authenticate with Okta again.
For those wondering about implementation for their own WPF apps, we use the [AWS Account Federation App](https://www.okta.com/integrations/aws-account-federation/#overview) in Okta.
The application uses 2 packages:
* [Okta .NET Authentication SDK](https://github.com/okta/okta-auth-dotnet)
* [AWS SDK for .NET](https://github.com/aws/aws-sdk-net/)
After setting up your AWS Account Federation App in Okta, use the AWS Embed Url and SAML Redirect Url in your application to get your SAMLResponse data.
Upvotes: 0
|
2018/03/16
| 387
| 1,307
|
<issue_start>username_0: We are using WP Bakery Page Builder on a client website. The plugin works fine, but sometimes the settings in Role Manager for which post types the composer should be enabled on just reset.
We are researching the possibility of setting this programmatically so that it defaults to On.
Just wanted to check if anyone else have noticed this issue.<issue_comment>username_1: Just found this on another thread, which recommended adding this to your theme's custom code:
```
$vc_list = array( 'page', 'post' );
vc_editor_set_post_types( $vc_list );
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: This is a bug in older versions. The developers posted a fix for it here:
<https://codecanyon.net/item/visual-composer-page-builder-for-wordpress/242431/comments?utf8=%E2%9C%93&term=add_custom_post_type_here&from_buyers_and_authors_only=0>
```
<?php
/*
You can set the post types for which the editor should be
available by adding the following code to functions.php:
*/
add_action( 'vc_before_init', 'Use_wpBakery' );
function Use_wpBakery() {
    $vc_list = array( 'page', 'capabilities', 'add_custom_post_type_here' );
    vc_set_default_editor_post_types( $vc_list );
    vc_editor_set_post_types( $vc_list );
}
```
Edit: Updated link to dev comment.
Upvotes: 3
|
2018/03/16
| 389
| 1,122
|
<issue_start>username_0: I'm trying to display my list(s) in an HTML table but it's printing the whole list instead of each item inside it. I'm using Jinja2 templates.
```
| Date: | Scheduled: |
| --- | --- |
{% for day in working_days %}
| {{working_days}} |
{% endfor %}
```
Right now it's displaying the whole list 7 times instead of each item from the list in a separate td.
>
> ['maandag 12 maart', 'dinsdag 13 maart', 'woensdag 14 maart',
> 'donderdag 15 maart', 'vrijdag 16 maart', 'zaterdag 17 maart', 'zondag
> 18 maart']
>
>
><issue_comment>username_1: This would solve the problem:
```
{% for day in working_days %}
| {{day}} | {# <- not working_days #}
{% endfor %}
```
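You can check the fix outside the app with the `jinja2` package directly (a minimal sketch; the shortened day list is taken from the question):

```python
from jinja2 import Template

working_days = ['maandag 12 maart', 'dinsdag 13 maart', 'woensdag 14 maart']

# {{ day }} is the loop variable, so each row gets one entry;
# {{ working_days }} would print the whole list in every row
template = Template("{% for day in working_days %}| {{ day }} |\n{% endfor %}")
print(template.render(working_days=working_days))
```

Each day now lands in its own table row instead of the whole list repeating.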
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
| Date: | Scheduled: |
| --- | --- |
{% for day in working_days %}
| {{ day }} |
{% endfor %}
```
Replace `{{working_days}}` with `{{day}}`.
Hope it'll help you.
Upvotes: 0 <issue_comment>username_3: You just need one more `for` loop:
```
{% for day in working_days%}
{% for d in day %}
| {{d}} |
{% endfor %}
{% endfor %}
```
Upvotes: 0
|
2018/03/16
| 1,103
| 3,918
|
<issue_start>username_0: How can I pass a category name to a new `WP_Query` when I click a specific button carrying a category name?
I've got this in my `functions.php`
```
<?php
add_action('wp_ajax_my_action', 'data_fetch');
add_action('wp_ajax_nopriv_my_action', 'data_fetch');
function data_fetch() {
    $the_query = new WP_Query(array('post_type' => 'wydarzenie', 'posts_per_page' => 2, 'category_name' => '2017'));
    if ($the_query->have_posts()):
        while ($the_query->have_posts()): $the_query->the_post(); ?>
            <?php the_title(); ?>
            <?php the_content(); ?>
        <?php endwhile;
        wp_reset_postdata();
    endif;
    die();
}
?>
```
and this on page with my default loop posts
```
function fetch(){
    $.post('/PRACA/FundacjaWP/wp-admin/admin-ajax.php', {'action': 'my_action'}, function(response){
        $("#pick-event").html(response);
    });
}

$(".show-specific-events").on("click", function(e){
    e.preventDefault();
    var category = $(this).text();
    fetch();
});
```
I want to load a new query with a new loop based on the category chosen when I click a button. Right now the category '2017' is hard-coded, but I want it to be dynamic.<issue_comment>username_1: Your code should look like this:
```
$args = array(
    'posts_per_page' => 50,
    'post_type' => 'my_custom_type',
    'cat' => $cat_id,
);
$wp_query = new WP_Query( $args );
```
and when you use jQuery, you need to pass the category ID along.
Upvotes: 0 <issue_comment>username_2: Here we will see how WordPress AJAX works, at a beginner level. We will pass a variable from JavaScript to the WordPress theme functions file and, after the necessary processing, pass the resulting content back to the JavaScript.
We are assuming that you already know how to enqueue JavaScript, etc.
**JavaScript:**
```
jQuery(document).ready(function($) {
    $(".show-specific-events").on("click", function(e){
        e.preventDefault();
        var category = $(this).text();
        // This does the ajax request
        $.ajax({
            url: codecanal_ajax_object.ajax_url,
            data: {
                'action': 'codecanal_ajax_request',
                'category_name': category
            },
            success: function(data) {
                // The output after successfully receiving content
                console.log(data);
            },
            error: function(errorThrown){
                console.log(errorThrown);
            }
        });
    });
});
```
**Implementation of the AJAX handler**
If you are doing custom coding in a theme, put the code below in the theme's `functions.php` file:
```
function codecanal_ajax_request() {
    // The $_REQUEST contains all the data sent via ajax
    if ( isset($_REQUEST) ) {
        // You can check what data is received in the function by debugging it
        // print_r($_REQUEST);
        $category_name = $_REQUEST['category_name'];
        $the_query = new WP_Query(array('post_type' => 'wydarzenie', 'posts_per_page' => 2, 'category_name' => $category_name));
        if ($the_query->have_posts()):
            while ($the_query->have_posts()): $the_query->the_post(); ?>
                <?php the_title(); ?>
                <?php the_content(); ?>
            <?php endwhile;
            wp_reset_postdata();
        endif;
        die();
    }
    // To return to the front page, always finish after echoing the desired content.
    die();
}
add_action( 'wp_ajax_codecanal_ajax_request', 'codecanal_ajax_request' );
// For allowing non-logged in users to use AJAX function
// add_action( 'wp_ajax_nopriv_codecanal_ajax_request', 'codecanal_ajax_request' );

/* We can define the AJAX url using wp_localize_script */
function codecanal_ajax_enqueue() {
    wp_localize_script( 'ajax-script', 'codecanal_ajax_object',
        array( 'ajax_url' => admin_url( 'admin-ajax.php' ) ) );
}
add_action( 'wp_enqueue_scripts', 'codecanal_ajax_enqueue' );
```
Upvotes: 1
|
2018/03/16
| 906
| 3,288
|
<issue_start>username_0: There is this great [ui guide](https://uxdesign.cc/good-to-great-ui-animation-tips-7850805c12e5) which shows some cool ui transitions.
I would like to know how to do this animation (the great one).
[](https://i.stack.imgur.com/0kUOZ.gif)
I know how to do the good one. I have also had some success with the great one (by taking a snapshot of the next `viewcontroller` to be presented and expanding it inside `animateTransition(using transitionContext: UIViewControllerContextTransitioning)`).
However, I do not know how the great one **pushes** its adjacent views out on both sides.
I would like to get an idea of how to do exactly that (the push out). I do not need code; just a general guide is fine.
**EDIT:**
I ended up implementing [Kubba](https://stackoverflow.com/a/49403159/3480088)'s idea.
[trungduc](https://stackoverflow.com/a/49427792/3480088)'s idea of animating the `tableview` cell height has some drawbacks: the cell position is not correct before and after the transition. Also, syncing the animation of the `viewcontroller` frame with the tableview cell height proved to be futile. Nevertheless it was a good solution, although perhaps for a slightly different problem.
[Project](https://github.com/rishabdutta/SharedElementTransition)<issue_comment>username_1: I'd try something like this:
* take a snapshot of the current controller
* extract two pieces from it: (1) from the top of the card to the top of the screen and (2) from the bottom of the card to the bottom of the screen
* place them where they should be
* move (1) to the top and (2) to the bottom along with your current transition
Another idea, which could be easier to achieve: you can just divide the snapshot into two pieces at the middle of the expanding card, but I'm not sure how the animation would behave
Upvotes: 3 [selected_answer]<issue_comment>username_2: My solution is quite simple without snapshots.
1. When tapping on cell, save `tableView.contentOffset`.
2. Increase height of selected cell to screen height.
3. Update cell height, move cell to top of screen with an animation and pushing transition.
4. When you want to come back, decrease height of selected cell, change `contentOffset` by cached value with animation and transition.
***Result.***

I use the below code to make animation for `tableView`, put it inside `didSelectRowAtIndexPath` method.
```
[UIView beginAnimations:@"animation" context:nil];
[UIView setAnimationDuration:4.f];
[CATransaction begin];
[tableView beginUpdates];
[tableView endUpdates];
[tableView scrollToRowAtIndexPath:indexPath atScrollPosition:UITableViewScrollPositionTop animated:NO];
[CATransaction commit];
[UIView commitAnimations];
```
Sorry I'm not good at Swift ;)
Upvotes: 2 <issue_comment>username_3: You can use the [Hero](https://github.com/lkzhao/Hero) `UIViewController`-based transition animation
library, which supports many types of animation.
Integration is very simple, to achieve beautiful animations.
You can see the sample screenshot below, which was created using Hero.
[](https://i.stack.imgur.com/oYrrZ.gif)
Upvotes: 1
|
2018/03/16
| 224
| 731
|
<issue_start>username_0: ```
Add Schedule Time +
........
Open Time
{{Opentime}}
Close Time
{{Closetime}}
```
.ts
```
AddNewSchedule(){
    for(var i = 0; i < 10; i++){
        this.OpenNewSchedule = true;
    }
}
```
I need the div *OpenNewSchedule* to be added repeatedly. Right now it's getting added only once, and each added div has to have a different value.
How can this be done?<issue_comment>username_1: Try this:
```
public OpenNewSchedule = [];

AddNewSchedule() {
    this.OpenNewSchedule.push('some value');
}
```
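To see why the array-driven approach repeats the div, here is a stand-alone sketch in plain TypeScript (no Angular imports; the class name and entry values are made up). The template side would be `<div *ngFor="let item of OpenNewSchedule">{{ item }}</div>`:

```typescript
class ScheduleComponent {
  OpenNewSchedule: string[] = [];

  AddNewSchedule(): void {
    // each click appends one entry with its own value,
    // so *ngFor renders one more div each time
    this.OpenNewSchedule.push(`Schedule ${this.OpenNewSchedule.length + 1}`);
  }
}

const c = new ScheduleComponent();
c.AddNewSchedule();
c.AddNewSchedule();
console.log(c.OpenNewSchedule); // two distinct entries
```

A boolean flag can only toggle a single div; an array gives the template one item per repetition.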
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try this one:
```
public OpenNewSchedule = [];
AddNewSchedule() {
this.OpenNewSchedule.push('Element');
}
Element to display : {{element}}
```
Upvotes: 1
|
2018/03/16
| 1,440
| 5,478
|
<issue_start>username_0: I'm learning `SpringBoot2.0` with `Java8`.
I followed a blog-making tutorial example.
The tutorial source code is:
```
@GetMapping("/{id}/edit")
public String edit(@PathVariable Long id, Model model) {
    model.addAttribute("categoryDto", categoryService.findOne(id));
    return "category/edit";
}
```
But this code is throwing an error here:
>
> categoryService.findOne(id)
>
>
>
I'm thinking about changing the JPA `findOne()` method to `Optional<S>`.
How can I solve that?
More info:
This is the categoryService method:
```
public Category findOne(Long id) {
    return categoryRepository.findOne(id);
}
```<issue_comment>username_1: Indeed, in the latest version of Spring Data, `findOne` returns an `Optional`. If you want to retrieve the object from the `Optional`, you can simply call `get()` on it. First of all, though, a repository should return the `Optional` to a service, which then handles the case in which the `Optional` is empty. Afterwards, the service should return the object to the controller.
Upvotes: 3 <issue_comment>username_2: `Optional` api provides methods for getting the values. You can check `isPresent()` for the presence of the value and then make a call to `get()` or you can make a call to `get()` chained with `orElse()` and provide a default value.
The last thing you can try doing is using `@Query()` over a custom method.
Upvotes: 2 <issue_comment>username_3: From at least, the `2.0` version, `Spring-Data-Jpa` modified `findOne()`.
Now, `findOne()` has neither the same signature nor the same behavior.
Previously, it was defined in the `CrudRepository` interface as:
```
T findOne(ID primaryKey);
```
Now, the single `findOne()` method that you will find in `CrudRepository` is the one defined in the `QueryByExampleExecutor` interface as:
```
<S extends T> Optional<S> findOne(Example<S> example);
```
That is implemented finally by `SimpleJpaRepository`, the default implementation of the `CrudRepository` interface.
In fact, the method with the same behavior is still there in the new API, but the method name has changed.
```
Optional<T> findById(ID id);
```
Now it returns an `Optional`, which is not so bad for preventing a `NullPointerException`.
**So, the actual method to invoke is now `Optional<T> findById(ID id)`.**
How to use that?
Learning [`Optional`](https://docs.oracle.com/javase/8/docs/api/java/util/Optional.html) usage.
Here's important information about its specification:
>
> A container object which may or may not contain a non-null value. If a
> value is present, isPresent() will return true and get() will return
> the value.
>
>
> Additional methods that depend on the presence or absence of a
> contained value are provided, such as orElse() (return a default value
> if value not present) and ifPresent() (execute a block of code if the
> value is present).
>
>
>
---
**Some hints on how to use `Optional` with `Optional findById(ID id)`.**
Generally, as you look for an entity by id, you want to return it or make a particular processing if that is not retrieved.
Here are three classical usage examples.
1. Suppose that if the entity is found you want to get it otherwise you want to get a default value.
You could write :
```
Foo foo = repository.findById(id)
.orElse(new Foo());
```
or get a `null` default value if it makes sense (same behavior as before the API change) :
```
Foo foo = repository.findById(id)
.orElse(null);
```
2. Suppose that if the entity is found you want to return it, else you want to throw an exception.
You could write :
```
return repository.findById(id)
.orElseThrow(() -> new EntityNotFoundException(id));
```
3. Suppose you want to apply a different processing according to if the entity is found or not (without necessarily throwing an exception).
You could write :
```
Optional<Foo> fooOptional = fooRepository.findById(id);

if (fooOptional.isPresent()) {
    Foo foo = fooOptional.get();
    // processing with foo ...
} else {
    // alternative processing....
}
```
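These three patterns can be tried out with a plain `Optional`, independent of any repository (a stand-alone sketch; the class name is made up):

```java
import java.util.Optional;

public class OptionalDemo {
    public static void main(String[] args) {
        Optional<String> present = Optional.of("foo");
        Optional<String> empty = Optional.empty();

        // orElse: fall back to a default when the value is absent
        System.out.println(present.orElse("default")); // foo
        System.out.println(empty.orElse("default"));   // default

        // map: transform the value only when it is present
        System.out.println(present.map(String::toUpperCase).orElse("-")); // FOO

        // orElseThrow: fail fast when the value is absent
        try {
            empty.orElseThrow(IllegalStateException::new);
        } catch (IllegalStateException e) {
            System.out.println("threw as expected");
        }
    }
}
```

The same three calls (`orElse`, `map`/`ifPresent`, `orElseThrow`) cover most of what code using `findById` needs.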
Upvotes: 9 [selected_answer]<issue_comment>username_4: The method has been renamed to `findById(…)` returning an `Optional` so that you have to handle absence yourself:
```
Optional result = repository.findById(…);
result.ifPresent(it -> …); // do something with the value if present
result.map(it -> …); // map the value if present
Foo foo = result.orElse(null); // if you want to continue just like before
```
Upvotes: 4 <issue_comment>username_5: I always write a default method **"findByIdOrError"** in widely used CrudRepository repos/interfaces.
```
@Repository
public interface RequestRepository extends CrudRepository<Request, Integer> {

    default Request findByIdOrError(Integer id) {
        return findById(id).orElseThrow(EntityNotFoundException::new);
    }
}
```
Upvotes: 2 <issue_comment>username_6: Consider an **`User`** entity and **`UserRepository`**. In service package code like below.
```
Optional<User> resultUser = userRepository.findById(userId); // returns Optional<User>
User createdUser = resultUser.get(); // returns User
```
Now you can access all the User entity attributes using getter.
```
createdUser.getId();
createdUser.getName();
```
like that.
Upvotes: 0 <issue_comment>username_7: The `findOne` method of the `CrudRepository interface` has been replaced by `findById` since version 2.0 of Spring Data Commons.
you replace `findOne(id)` by:
```
findById(id).orElse(null)
```
Upvotes: 2
|
2018/03/16
| 1,300
| 4,312
|
<issue_start>username_0: I have a file defined this way:
```
A R RCONVTXT
A TEXT 100A COLHDG('Text')
A TEXT2B 100G COLHDG('Text')
A CCSID(1200 *NOCONVERT)
```
I'm working with characters in polish language stored in the "TEXT" field.
If I use this code in my rpgle program:
```
exec sql
UPDATE CONVTXT set TEXT2B =
CAST(CAST(TEXT as char(100) CCSID 65535)
AS CHAR(100) CCSID 870);
```
all the text in first field "TEXT" is converted perfectly and updated in the other field in unicode.
But if the text that I want to convert is in a text field (100c), how can I convert it using SQL?<issue_comment>username_1: Are you asking how you use an RPG defined variable in embedded SQL?
If so, the answer is fairly simple. You simply need to put a leading ':' in front of the RPG variable in the SQL statement.
```
Dcl-S Text Char(100) Inz('blah');
Exec Sql
Update ConvTxt Set Text2B =
Cast(Cast(:Text As Char(100) CCSID 65535)
As Char(100) CCSID 870);
```
Also, you can set a character string to a particular CCSID using the following, which may be an even better solution to your problem.
```
Dcl-S Text Char(100) CCSID(870) Inz('blah');
Exec Sql
Update ConvTxt Set Text2B = :Text;
```
Upvotes: 0 <issue_comment>username_2: RPG will automatically convert between CCSIDs. This is all it takes:
```
**free
dcl-s ucs2str Ucs2(100) Inz('this is a test');
dcl-s charstr Varchar(100) Inz('');
charstr = ucs2str;
```
Here is a memory dump of `ucs2str`:
[](https://i.stack.imgur.com/Flmkd.png)
Here is a memory dump of `charstr` after the assignment:
[](https://i.stack.imgur.com/itr30.png)
---
Here is a little more info on this topic. TL;DR: The following code works. Just a side note: our system is set to CCSID 65535. That isn't necessarily a good choice, it is just the way things are.
```
exec sql
create table jmmlib/mytable
(charfld Char(100) ccsid 37,
ucs2fld NChar(100));
exec sql
insert into jmmlib/mytable
values('Constant Test', 'Constant Test'),
(:ucs2str, :ucs2str),
(:charstr, :ucs2str);
exec sql
declare c1 cursor for
select cast(ucs2fld as char(100) ccsid 37), charfld from jmmlib/mytable;
exec sql
open c1;
exec sql
fetch c1 into :ucs2str, :charstr; ((1))
exec sql
fetch c1 into :charstr, :ucs2str; ((2))
exec sql
fetch c1 into :ucs2str, :charstr; ((3))
exec sql
close c1;
```
So here things are a bit jumbled to help keep things separate. I wouldn't necessarily code it this way normally. The table columns are in the following order (UCS2, CHAR). The fetch columns are in the following order (CHAR, UCS2).
First look at the insert. I can insert constants in each of the fields, and the character sets are converted properly. I can insert a UCS2 string into either the UCS2 field or the CHAR field. But, I can only insert the CHAR field into the CHAR field. There appears to be some issue with converting between 65535 and UCS2. I believe that this is an issue for me because our box has the QCCSID system value set to 65535. This is true even though the default CCSID for our jobs is 37. I do not think this would be an issue if QCCSID was set some other way.
Next look at the declaration for cursor `C1`. I have cast UCS2FLD to CCSID 37. This is the only way I could get FETCH ((2)) to work. This was that conversion issue again. CCSID 37 can likely be put into a 65535 field because it is an EBCDIC CCSID, so the field is still EBCDIC, even though no conversion happens, and RPG is ok with that (or SQL since it was an SQL error message). But it can't put UCS2FLD into an EBCDIC field without converting it first, and it can't convert from UCS2 to CCSID 65535. Once again, I don't think this would be a problem if we weren't using CCSID 65535.
Upvotes: 2
|
2018/03/16
| 1,401
| 3,978
|
<issue_start>username_0: I have two lists and I am trying to combine them into one `dict`:
```
list1 = ['sys_time', 'sys_time', 'sys_time']
list2 = ['2018-03-16T11:00:00.000-07:00', '2018-03-12T00:00:00.000-07:00', '2018-03-14T00:00:00.000-07:00']
dict(zip(list1, list2))
```
**output :**
```
{'sys_time': '2018-03-14T00:00:00.000-07:00'}
```
how to I combining same keys in appand multipal values
like that :
```
{'sys_time': ['2018-03-16T11:00:00.000-07:00', 2018-03-12T00:00:00.000-07:00, '2018-03-14T00:00:00.000-07:00']}
```<issue_comment>username_1: Are you asking how you use an RPG defined variable in embedded SQL?
If so, the answer is fairly simple. You simply need to put a leading ':' in front of the RPG variable in the SQL statement.
```
Dcl-S Text Char(100) Inz('blah');
Exec Sql
Update ConvTxt Set Text2B =
Cast(Cast(:Text As Char(100) CCSID 65535)
As Char(100) CCSID 870);
```
Also, you can set a character string to a particular CCSID using the following which may be an even better solution to your problem.
```
Dcl-S Text Char(100) CCSID(870) Inz('blah');
Exec Sql
Update ConvTxt Set Text2B = :Text;
```
Upvotes: 0 <issue_comment>username_2: RPG will automatically convert between CCSID's. This is all it takes:
```
**free
dcl-s ucs2str Ucs2(100) Inz('this is a test');
dcl-s charstr Varchar(100) Inz('');
charstr = ucs2str;
```
Here is a memory dump of `ucs2str`:
[](https://i.stack.imgur.com/Flmkd.png)
Here is a memory dump of `charstr` after the assignment:
[](https://i.stack.imgur.com/itr30.png)
---
Here is a little more info on this topic. TL/DR The following code works, and just a side note, our system is set to CCSID 65535. That isn't necessarily a good choice, it is just the way things are.
```
exec sql
create table jmmlib/mytable
(charfld Char(100) ccsid 37,
ucs2fld NChar(100));
exec sql
insert into jmmlib/mytable
values('Constant Test', 'Constant Test'),
(:ucs2str, :ucs2str),
(:charstr, :ucs2str);
exec sql
declare c1 cursor for
select cast(ucs2fld as char(100) ccsid 37), charfld from jmmlib/mytable;
exec sql
open c1;
exec sql
fetch c1 into :ucs2str, :charstr; ((1))
exec sql
fetch c1 into :charstr, :ucs2str; ((2))
exec sql
fetch c1 into :ucs2str, :charstr; ((3))
exec sql
close c1;
```
So here things are a bit jumbled to help keep things seperate. I wouldn't necessarily code it this way normally. The table columns are in the following order (UCS2, CHAR). The fetch columns are in the following order (CHAR, UCS2).
First look at the insert. I can insert constants in each of the fields, and the character sets are converted properly. I can insert a UCS2 string into either the UCS2 field or the CHAR field. But, I can only insert the CHAR field into the CHAR field. There appears to be some issue with converting between 65535 and UCS2. I believe that this is an issue for me because our box has the QCCSID system value set to 65535. This is true even though the default CCSID for our jobs is 37. I do not think this would be an issue if QCCSID was set some other way.
Next look at the declaration for cursor `C1`. I have cast UCS2FLD to CCSID 37. This is the only way I could get FETCH ((2)) to work. This was that conversion issue again. CCSID 37 can likely be put into a 65535 field because it is an EBCDIC CCSID, so the field is still EBCDIC, even though no conversion happens, and RPG is ok with that (or SQL since it was an SQL error message). But it can't put UCS2FLD into an EBCDIC field without converting it first, and it can't convert from UCS2 to CCSID 65535. Once again, I don't think this would be a problem if we weren't using CCSID 65535.
Upvotes: 2
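One way to group the repeated keys, sketched with `collections.defaultdict` (variable names taken from the question):

```python
from collections import defaultdict

list1 = ['sys_time', 'sys_time', 'sys_time']
list2 = ['2018-03-16T11:00:00.000-07:00', '2018-03-12T00:00:00.000-07:00', '2018-03-14T00:00:00.000-07:00']

# dict(zip(list1, list2)) keeps only the last value per key;
# collecting the values into a list keeps all of them.
combined = defaultdict(list)
for key, value in zip(list1, list2):
    combined[key].append(value)

print(dict(combined))
```

This preserves the order of `list2` and works for any number of distinct keys.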
|
2018/03/16
| 859
| 2,872
|
<issue_start>username_0: Why did the pointer "precord" (in Code Sample 1) need an address-of operator to change the value of the structure member?
```
//Code Sample 1
#include <stdio.h>
struct student
{int a;};
void input(struct student *);
void main() {
    struct student age;
    printf("Enter Your Age\n");
    input(&age);
    printf("%d", age.a);
}
void input(struct student *precord) {
    scanf("%d", &precord->a);
}
```
While pointer successfully managed to change the value of another variable without address-of operator in Code Sample 2.
```
//Code Sample 2
#include <stdio.h>
void input(int *);
void main() {
    int age;
    printf("Enter Your Age\n");
    input(&age);
    printf("%d", age);
}
void input(int *precord) {
    scanf("%d", precord);
}
```<issue_comment>username_1: When I built your program, I get the following warnings, which cover what was said in the comments:
```
sc.c: In function ‘main’:
sc.c:15:3: warning: implicit declaration of function ‘input’ [-Wimplicit-function-declaration]
input(&record);
^
sc.c: At top level:
sc.c:19:6: warning: conflicting types for ‘input’
void input(struct student *precord)
^
sc.c:15:3: note: previous implicit declaration of ‘input’ was here
input(&record);
^
sc.c: In function ‘input’:
sc.c:22:10: warning: format ‘%d’ expects argument of type ‘int *’, but argument 3 has type ‘int’ [-Wformat=]
scanf("%[^\n] %d",precord->name,precord->age);
^
```
There is one more problem, namely that %[^\n] eats up the whole line, so that entering for example "<NAME>" makes "<NAME>" be the name, and then it waits for the age on the next line.
I recommend against ever using 'scanf'. Read lines into a string instead, and then use 'sscanf', and always check the result so you get the number of matched values that you expect.
Upvotes: 1 <issue_comment>username_2: You need to pass the address of the integer you want to read in to `scanf()`.
```
scanf("%[^\n] %d", precord->name, &(precord->age));
```
This will allow the user to type in the value for `name`, hit `RETURN` and then type in the value for `age` and hit `RETURN`.
If you want the user to type in both `name` and `age` on the same line, separated by a space, and `name` is not to include any spaces, you can do
```
scanf("%[^ \n] %d", precord->name, &(precord->age));
```
to have `scanf()` stop reading characters for `name` when it hits a space.
Upvotes: 2 <issue_comment>username_3: In the `scanf` function, we have to provide the address of the variable. In your case, for the **first argument** you are providing the base address of the array, but for the **second argument** you are dereferencing the structure member `age` through the pointer. You have to provide the address of the variable `age`. Update your `scanf` arguments as follows:
---
```
scanf("%[^\n] %d", precord->name, &precord->age);
```
It will print the inputs.
Upvotes: 1
|
2018/03/16
| 692
| 2,413
|
<issue_start>username_0: Is there a difference between different compilers, for example if I use node-sass or gulp-sass, and I've seen a few places with Postcss-nesting, postcss-variables and so on, is this the same thing? Is there a difference between them syntax-wise?
|
2018/03/16
| 786
| 2,675
|
<issue_start>username_0: Hi, I am new to exposing ML models as a Flask API. Below is my code:
```
import numpy as np
import pandas as pd
from nltk.corpus import wordnet
from nltk.stem.wordnet import WordNetLemmatizer
import re
from sklearn.externals import joblib
import warnings
warnings.filterwarnings('ignore')
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/glcoding", methods=['POST'])
def mylemmatize(token):
    lmtzr = WordNetLemmatizer()
    lemmas = {}
    lemma = None
    if not token in lemmas:
        lemma = wordnet.morphy(token)
        if not lemma:
            lemma = token
        if lemma == token:
            lemma = lmtzr.lemmatize(token)
        lemmas[token] = lemma
    return lemmas[token]

def cleanmytext(text):
    words = map(mylemmatize, text.lower().split())
    return ' '.join(words)

def glcoding():
    if request.method == 'POST':
        json_data = request.get_json()
        data = pd.read_json(json_data, orient='index')
        data['Invoice line item description'] = data['Invoice line item description'].apply(cleanmytext)
        return jsonify(data)

if __name__ == '__main__':
    app.run()
```
With the below code I am calling the API:
```
from flask import Flask, jsonify, request
import requests, json
BASE_URL = "http://127.0.0.1:5000"
data = '{"0":{"Vendor Number": "166587","Invoice line item description":"Petrol charges with electricity"}}'
response = requests.post("{}/glcoding".format(BASE_URL), json = data)
response.json()
```
I am getting a error as mentioned below:
```
Traceback (most recent call last):
TypeError: mylemmatize() takes exactly 1 argument (0 given)
127.0.0.1 - - [16/Mar/2018 14:31:51] "POST /glcoding HTTP/1.1" 500 -
```
The above code works fine when I am not exposing it as an API; it throws an error only when called through the API. Please help.<issue_comment>username_1: You defined your request handler `mylemmatize(token)` to take a variable called `token`, but your route is not aware of that and so does not pass your data to the request handler.
Change your route from :
`@app.route("/glcoding", methods=['POST'])`
to this instead:
`@app.route("/glcoding/", methods=['POST'])`
See the [doc on variable rule](http://flask.pocoo.org/docs/0.12/quickstart/#variable-rules) for more info.
Also if you do not need to pass the token as a variable, then you need
to remove it from your `mylemmatize` function definition.
Upvotes: 0 <issue_comment>username_2: You decorated the wrong method with the `app.route()` decorator. Just move the decorator above the `glcoding()` method and everything should be working.
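The failure can be reproduced without Flask: `app.route()` registers whatever function it decorates as the handler for the URL rule, and the handler for `/glcoding` is then called with no positional arguments because the rule declares no URL variables. A minimal, Flask-free sketch of that mechanism (the `routes` registry below is a hypothetical stand-in, not Flask's real internals):

```python
# Tiny stand-in for Flask routing: route() stores the decorated
# function as the handler for the given URL rule.
routes = {}

def route(rule):
    def decorator(func):
        routes[rule] = func
        return func
    return decorator

@route("/glcoding")          # decorator on the wrong function, as in the question
def mylemmatize(token):
    return token.lower()

def glcoding():
    return "ok"

# The dispatcher invokes the registered handler with no arguments,
# so the misplaced decorator reproduces the TypeError.
try:
    routes["/glcoding"]()
except TypeError as exc:
    print(exc)               # in Python 3: missing 1 required positional argument: 'token'

routes["/glcoding"] = glcoding   # equivalent to moving the decorator
print(routes["/glcoding"]())     # ok
```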
Upvotes: 3 [selected_answer]
|
2018/03/16
| 3,283
| 7,563
|
<issue_start>username_0: I have a dataframe with a date column; some of the data are missing for certain years and months. I have to display all the months for all the years in my dataset, and the corresponding columns should be filled with zeros.
My dataframe looks like this
```
Date Churn Churnrate customerID
2008,01 726.0 0.542398 2763
2008,02 345.0 0.257751 1351
2012,11 NaN NaN 6
2013,01 3.0 0.002241 24
2013,02 10.0 0.007471 34
2013,03 25.0 0.018678 73
2013,04 25.0 0.018678 75
2013,05 14.0 0.010459 61
2013,06 19.0 0.014195 69
2013,07 27.0 0.020172 103
2013,08 22.0 0.016436 79
2013,09 19.0 0.014195 70
2013,10 28.0 0.020919 83
2013,11 22.0 0.016436 78
2013,12 19.0 0.014195 75
2014,01 17.0 0.012701 63
2014,02 21.0 0.015689 55
2014,03 7.0 0.005230 66
2014,04 24.0 0.017931 86
2014,05 18.0 0.013448 90
2014,06 14.0 0.010459 50
```
For example, in the year 2008 I have only two month records, but I want to display all 12 months, with 0s in the corresponding columns.
My another dataframe looks like this
```
Months Retention_Rate Customer_Count
0 2008/01 0.145916 133
1 2008/02 0.924663 762
2 2008/03 0.074544 67
3 2014/07 0.058684 45
4 2014/08 0.069786 61
5 2014/09 0.076130 64
6 2014/10 0.061856 60
7 2014/11 0.082474 69
```
I have used the same answer which is given below
```
predicted_retention_rate = predicted_retention_rate.set_index('Months')
idx =(pd.MultiIndex.from_product(predicted_retention_rate.index.str.split('/', expand=True).levels)
.map('/'.join))
final_retention_rate_predicted = predicted_retention_rate.reindex(idx, fill_value=0).rename_axis('Months').reset_index()
print (final_retention_rate_predicted)
```
But some of the months are missing in this output
```
Months Retention_Rate Customer_Count
0 2008/01 0.145916 133
1 2008/02 0.924663 762
2 2008/03 0.074544 67
3 2008/07 0.000000 0
4 2008/08 0.000000 0
5 2008/09 0.000000 0
6 2008/10 0.000000 0
7 2008/11 0.000000 0
8 2014/01 0.000000 0
9 2014/02 0.000000 0
10 2014/03 0.000000 0
11 2014/07 0.058684 45
12 2014/08 0.069786 61
13 2014/09 0.076130 64
14 2014/10 0.061856 60
15 2014/11 0.082474 69
```
Look at the above dataframe: year 2008 contains 01, 02, 03 but not 04, 05, 06, and the same in 2014. May I know where I went wrong here?<issue_comment>username_1: I think you need [`reindex`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html) with a new `index` created by [`split`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html) of `Date` to a `MultiIndex`, then `map` with `join`:
```
df = df.set_index('Date')
idx =(pd.MultiIndex.from_product(df.index.str.split(',', expand=True).levels)
.map(','.join))
df = df.reindex(idx, fill_value=0).rename_axis('Date').reset_index()
print (df.head())
Date Churn Churnrate customerID
0 2008,01 726.0 0.542398 2763
1 2008,02 345.0 0.257751 1351
2 2008,03 0.0 0.000000 0
3 2008,04 0.0 0.000000 0
4 2008,05 0.0 0.000000 0
```
EDIT: Solution that defines all months with `range(1, 13)`:
```
df = df.set_index('Months')
years = df.index.str.split('/', expand=True).levels[0]
idx = (pd.MultiIndex.from_product([years,
pd.Series(range(1, 13)).astype(str).str.zfill(2)])
.map('/'.join))
df = df.reindex(idx, fill_value=0).rename_axis('Date').reset_index()
print (df)
Date Retention_Rate Customer_Count
0 2008/01 0.145916 133
1 2008/02 0.924663 762
2 2008/03 0.074544 67
3 2008/04 0.000000 0
4 2008/05 0.000000 0
5 2008/06 0.000000 0
6 2008/07 0.000000 0
7 2008/08 0.000000 0
8 2008/09 0.000000 0
9 2008/10 0.000000 0
10 2008/11 0.000000 0
11 2008/12 0.000000 0
12 2014/01 0.000000 0
13 2014/02 0.000000 0
14 2014/03 0.000000 0
15 2014/04 0.000000 0
16 2014/05 0.000000 0
17 2014/06 0.000000 0
18 2014/07 0.058684 45
19 2014/08 0.069786 61
20 2014/09 0.076130 64
21 2014/10 0.061856 60
22 2014/11 0.082474 69
23 2014/12 0.000000 0
```
If you need to replace the missing years and the corresponding columns with zeros:
```
print (df)
Year Churn_Count Churn_Rate Customer_Count
2008 1071.0 0.800149 4114
2012 0.0 0.000000 6
2013 233.0 0.174075 824
2014 101.0 0.075458 410
```
Then use:
```
df1 = (df.set_index('Year')
.reindex(range(2008, 2015), fill_value=0)
.reset_index())
print (df1)
Year Churn_Count Churn_Rate Customer_Count
0 2008 1071.0 0.800149 4114
1 2009 0.0 0.000000 0
2 2010 0.0 0.000000 0
3 2011 0.0 0.000000 0
4 2012 0.0 0.000000 6
5 2013 233.0 0.174075 824
6 2014 101.0 0.075458 410
```
More dynamic solution for `reindex` by min and max year:
```
df1 = df.set_index('Year')
df1 = (df1.reindex(range(df1.index.min(), df1.index.max() + 1), fill_value=0)
.reset_index())
print (df1)
Year Churn_Count Churn_Rate Customer_Count
0 2008 1071.0 0.800149 4114
1 2009 0.0 0.000000 0
2 2010 0.0 0.000000 0
3 2011 0.0 0.000000 0
4 2012 0.0 0.000000 6
5 2013 233.0 0.174075 824
6 2014 101.0 0.075458 410
```
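Why the original attempt dropped months 04, 05 and 06 entirely: `str.split('/', expand=True).levels` only contains month values that occur somewhere in the data, so `from_product` can never generate a month that no row ever had. A small runnable check with a hypothetical two-row frame:

```python
import pandas as pd

df = pd.DataFrame({'Customer_Count': [133, 45]},
                  index=pd.Index(['2008/01', '2014/07'], name='Months'))

# .levels holds only the values present in the data, so months
# 02-06 and 08-12 can never show up in the product:
levels = df.index.str.split('/', expand=True).levels
print([list(level) for level in levels])   # [['2008', '2014'], ['01', '07']]

# Building the month level explicitly fixes it:
months = pd.Series(range(1, 13)).astype(str).str.zfill(2)
idx = pd.MultiIndex.from_product([levels[0], months]).map('/'.join)
print(len(idx))   # 24: 12 months for each of the 2 years
```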
Upvotes: 3 [selected_answer]<issue_comment>username_2: I think that another simple approach could be this.
```
import pandas as pd
df = pd.DataFrame({"date":["2010-01", "2010-02", "2011-01"],
"a": [1, 2, 3],
"b":[0.2,-0.1,0.4]})
df["date"] = pd.to_datetime(df["date"])
all_dates = pd.DataFrame({"date":pd.date_range(start=df["date"].min(),
end=df["date"].max(),
freq="MS")})
df = pd.merge(all_dates, df, how="left", on="date").fillna(0)
```
If `date` is your index you can just play with `.reset_index()` and `.set_index()`. Then if you want to maintain the same date format just add `df["date"] = df["date"].dt.strftime("%Y-%m")`
Upvotes: 1
|
2018/03/16
| 2,272
| 6,022
|
<issue_start>username_0: Getting this error in a Xamarin.Forms Android project. I am referencing the Entity Framework library in the Android project. The error is as below:
>
> Severity Code Description Project File Line Suppression State Error
> Exception while loading assemblies: System.IO.FileNotFoundException:
> Could not load assembly 'EntityFramework, Version=6.0.0.0,
> Culture=neutral, PublicKeyToken=b77a5c561934e089'. Perhaps it doesn't
> exist in the Mono for Android profile? File name:
> 'EntityFramework.dll' at
> Java.Interop.Tools.Cecil.DirectoryAssemblyResolver.Resolve(AssemblyNameReference
> reference, ReaderParameters parameters) at
> Xamarin.Android.Tasks.ResolveAssemblies.AddAssemblyReferences(DirectoryAssemblyResolver
> resolver, ICollection`1 assemblies, AssemblyDefinition assembly,
> Boolean topLevel) at
> Xamarin.Android.Tasks.ResolveAssemblies.AddAssemblyReferences(DirectoryAssemblyResolver
> resolver, ICollection`1 assemblies, AssemblyDefinition assembly,
> Boolean topLevel) at
> Xamarin.Android.Tasks.ResolveAssemblies.AddAssemblyReferences(DirectoryAssemblyResolver
> resolver, ICollection`1 assemblies, AssemblyDefinition assembly,
> Boolean topLevel) at
> Xamarin.Android.Tasks.ResolveAssemblies.Execute(DirectoryAssemblyResolver
> resolver) RadLoc.Android C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Community\MSBuild\Xamarin\Android\Xamarin.Android.Common.targets
> 1500
>
>
>
I am stuck here, please help me out of this.
|
2018/03/16
| 2,172
| 5,499
|
<issue_start>username_0: I am trying to web scrape the data from the Flipkart site. The link for the webpage is as follows:
<https://www.flipkart.com/mi-a1-black-64-gb/product-reviews/itmexnsrtzhbbneg?aid=overall&pid=MOBEX9WXUSZVYHET>
I need to automate navigation to the NEXT page by clicking the NEXT button on the webpage. Below is the code I'm using:
```
nextButton <-remDr$findElement(value ='//div[@class="_2kUstJ"]')$clickElement()
```
Error
```
Selenium message:Element is not clickable at point
```
I even tried scrolling the webpage as suggested by many stackoverflow questions using the below code
```
remDr$executeScript("arguments[0].scrollIntoView(true);", nextButton)
```
But this code is also giving error as
```
Error in checkError(res) : Undefined error in httr call. httr output: No method for S4 class:webElement
```
Kindly suggest the solution. I'm using `firefox browser` and `selenium` to automate using `R` programming.
|
2018/03/16
| 662
| 2,383
|
<issue_start>username_0: With the low-level API, we can use
```
print(session.run(xx_tensor_after_xx_operation, feed_dict=feed_dict))
```
to get the real data for debugging. But in a custom estimator, how can I debug these tensors?
Here is my snippet for a vivid sample:
```
import tensorflow as tf

FLAGS = tf.app.flags.FLAGS

def yichu_dssm_model_fn(
        features,  # This is batch_features from input_fn
        labels,    # This is batch_labels from input_fn
        mode,      # An instance of tf.estimator.ModeKeys
        params):
    # word_id sequence in content
    content_input = tf.feature_column.input_layer(features, params['feature_columns'])
    content_embedding_matrix = tf.get_variable(name='content_embedding_matrix',
                                               shape=[FLAGS.max_vocab_size, FLAGS.word_vec_dim])
    content_embedding = tf.nn.embedding_lookup(content_embedding_matrix, content_input)
    content_embedding = tf.reshape(content_embedding, shape=[-1, FLAGS.max_text_len, FLAGS.word_vec_dim, 1])
    content_conv = tf.layers.Conv2D(filters=100, kernel_size=[3, FLAGS.word_vec_dim])
    content_conv_tensor = content_conv(content_embedding)
    """
    With the low-level API, we can use `print(session.run(content_conv_tensor))` to get the real data to debug.
    But in a custom estimator, how can I debug these tensors?
    """
```<issue_comment>username_1: You can use [tf.Print](https://www.tensorflow.org/api_docs/python/tf/Print). It adds an operation to the graph that prints the tensor's contents to standard error when executed.
```py
content_conv_tensor = tf.Print(content_conv_tensor, [content_conv_tensor], 'content_conv_tensor: ')
```
Upvotes: 2 <issue_comment>username_2:
```
sess = tf.InteractiveSession()
test = sess.run(features)
print('features:')
print(test)
```
Although this causes an error, it still prints out the tensor values. The error occurs right after the print, so you can only use it for checking the tensor values.
Upvotes: 0 <issue_comment>username_3: `tf.Print` is deprecated; use `tf.print`, but it's not easy to use.
The best option is a logging hook:
```
hook = \
tf.train.LoggingTensorHook({"var is:": var_to_print},
every_n_iter=10)
return tf.estimator.EstimatorSpec(mode, loss=loss,
train_op=train_op,
training_hooks=[hook])
```
Upvotes: 2
|
2018/03/16
| 501
| 1,949
|
<issue_start>username_0: I want to create an AsyncTask that will handle my communication with the server (the client is the Android app and the server is Python).
My main activity will need, depending on user interaction, to send data to the server.
How can I pass the string that changes all the time to the AsyncTask?
For example, I have this variable in my main activity:
```
String toSend = "Something"
```
The user pressed a button and now the string contains this data:
```
toSend = "After Button Pressed"
```
The question is how can I pass the always changing `toSend` string to the Async Task?
UPDATE:
**I know how to create an AsyncTask**. The AsyncTask will be started at the beginning of the activity. It is not a private class in the activity. The input to the AsyncTask is dynamically changing (based on user interaction). Is there a way to have a dynamically changing input to the task? Maybe pass it by reference?
|
2018/03/16
| 513
| 1,989
|
<issue_start>username_0: I get this error when I run create-stack for a cloudformation template that contains IAM policies.
```
aws cloudformation create-stack --stack-name iam-stack --template-body file://./iam.yml --capabilities CAPABILITY_IAM --profile dev
```
An error occurred (InsufficientCapabilitiesException) when calling the CreateStack operation: Requires capabilities : [CAPABILITY\_NAMED\_IAM]<issue_comment>username_1: Change `--capabilities` to `CAPABILITY_NAMED_IAM`
>
> If you have IAM resources with custom names, you must specify
> CAPABILITY\_NAMED\_IAM. If you don't specify this parameter, this action
> returns an InsufficientCapabilities error.
>
>
>
<https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html>
Upvotes: 7 [selected_answer]<issue_comment>username_2: In my case I needed both `CAPABILITY_IAM` and `CAPABILITY_NAMED_IAM` capabilities for a resource of type "AWS::IAM::Role".
<https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudFormation.html#createStack-property>
Upvotes: 0 <issue_comment>username_3: As per AWS docs,
**If you specify a role name in CloudFormation, you must specify the CAPABILITY\_NAMED\_IAM value to acknowledge your template's capabilities.** [Link](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html#cfn-iam-role-rolename)
So your command should be
```
aws cloudformation create-stack --stack-name iam-stack --template-body file://./iam.yml --capabilities CAPABILITY_NAMED_IAM --profile dev
```
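For reference, a minimal sketch of the kind of template fragment that triggers this error; the explicit `RoleName` (a hypothetical name here) is what makes CloudFormation demand `CAPABILITY_NAMED_IAM`, while a role without a custom name only needs `CAPABILITY_IAM`:

```
Resources:
  AppRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: my-app-role     # custom name, so CAPABILITY_NAMED_IAM is required
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
```

Removing the `RoleName` line lets CloudFormation generate a name, and plain `CAPABILITY_IAM` is then sufficient.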
Upvotes: 4 <issue_comment>username_4: If you are using AWS CodePipeline to deploy an EC2 using a CloudFormation stack, there is an option called "Capabilities" from which you can select CAPABILITY\_NAMED\_IAM.
Upvotes: 0 <issue_comment>username_5: You must pass capability as below if you're not letting CloudFormation name your IAM resources.
Change from `--capabilities CAPABILITY_IAM` to `--capabilities CAPABILITY_NAMED_IAM`.
Upvotes: 0
|
2018/03/16
| 1,278
| 4,608
|
<issue_start>username_0: I'm building an application using Spring Boot 2 with Kotlin.
Somehow I just can't get ConfigurationProperties to work.
As far as I know when Maven `compile` is run then a file `spring-configuration-metadata.json` should be created in `target/classes/META-INF`
My setup so far:
pom.xml
=======
```
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.brabantia.log</groupId>
    <artifactId>logdb</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>logdb</name>
    <description>Logging database Brabantia</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.0.0.RELEASE</version>
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
        <kotlin.version>1.2.21</kotlin.version>
        <spring.boot.version>2.0.0.RELEASE</spring.boot.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
            <version>${spring.boot.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
            <version>${spring.boot.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-security</artifactId>
            <version>${spring.boot.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
            <version>${spring.boot.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-configuration-processor</artifactId>
            <version>${spring.boot.version}</version>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <version>${spring.boot.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <version>${spring.boot.version}</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.flywaydb</groupId>
            <artifactId>flyway-core</artifactId>
        </dependency>
        <dependency>
            <groupId>org.jetbrains.kotlin</groupId>
            <artifactId>kotlin-stdlib-jdk8</artifactId>
            <version>1.2.30</version>
        </dependency>
        <dependency>
            <groupId>org.jetbrains.kotlin</groupId>
            <artifactId>kotlin-reflect</artifactId>
            <version>1.2.30</version>
        </dependency>
        <dependency>
            <groupId>com.microsoft.sqlserver</groupId>
            <artifactId>mssql-jdbc</artifactId>
            <scope>runtime</scope>
        </dependency>
    </dependencies>

    <build>
        <sourceDirectory>${project.basedir}/src/main/kotlin</sourceDirectory>
        <testSourceDirectory>${project.basedir}/src/test/kotlin</testSourceDirectory>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
            <plugin>
                <artifactId>kotlin-maven-plugin</artifactId>
                <groupId>org.jetbrains.kotlin</groupId>
                <configuration>
                    <args>
                        <arg>-Xjsr305=strict</arg>
                    </args>
                    <compilerPlugins>
                        <plugin>spring</plugin>
                    </compilerPlugins>
                </configuration>
                <dependencies>
                    <dependency>
                        <groupId>org.jetbrains.kotlin</groupId>
                        <artifactId>kotlin-maven-allopen</artifactId>
                        <version>${kotlin.version}</version>
                    </dependency>
                </dependencies>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.flywaydb</groupId>
                <artifactId>flyway-maven-plugin</artifactId>
                <version>5.0.7</version>
                <configuration>
                    <user>esb_logging_user</user>
                    <password>esb_logging_user</password>
                    <url>jdbc:sqlserver://localhost;database=esb_logging</url>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
```
LogDbProperties.kt
==================
```
import org.springframework.boot.context.properties.ConfigurationProperties
import org.springframework.context.annotation.Configuration
@Configuration
@ConfigurationProperties(prefix = "logdb")
class LogDbProperties {
var enabled: Boolean = false
}
```
LogDbApplication.kt
===================
```
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.CommandLineRunner
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
@SpringBootApplication
class LogdbApplication: CommandLineRunner {
override fun run(vararg args: String?) {
println(logDbProperties.enabled)
}
@Autowired
lateinit var logDbProperties: LogDbProperties
}
fun main(args: Array) {
runApplication(\*args)
}
```
How can I get this to work?
Update
======
It seems that the annotations are picked up by Spring, but IntelliJ just doesn't create the `spring-configuration-metadata.json` file, which means it's just autocompletions that's not working.
So how could I make IntelliJ create the `spring-configuration-metadata.json` file?<issue_comment>username_1: Please see [official Spring documentation](https://docs.spring.io/spring-boot/docs/current/reference/html/configuration-metadata.html#configuration-metadata-annotation-processor) regarding to configuration properties generation.
In short, you have to add this to your Maven build:
```
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-configuration-processor</artifactId>
    <optional>true</optional>
</dependency>
```
Important: IntelliJ IDEA cannot run kapt, which generates the metadata, so you have to ask Gradle/Maven to do a full build. As a result:
1. Your output folders will have `spring-configuration-metadata.json` generated.
2. Your output jar will have this file too
3. IntelliJ Idea will read this file and will show highlights.
Upvotes: -1 <issue_comment>username_2: Thanks to [this post on StackOverflow](https://stackoverflow.com/questions/37858833/spring-configuration-metadata-json-file-is-not-generated-in-intellij-idea-for-ko) I finally have the answer
Just to make the solution complete:
Add the following dependency to pom.xml
```
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-configuration-processor</artifactId>
    <optional>true</optional>
</dependency>
```
And (this is what all other answers are missing) add this execution to the `kotlin-maven-plugin`
```
<execution>
    <id>kapt</id>
    <goals>
        <goal>kapt</goal>
    </goals>
    <configuration>
        <sourceDirs>
            <sourceDir>src/main/kotlin</sourceDir>
        </sourceDirs>
        <annotationProcessorPaths>
            <annotationProcessorPath>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-configuration-processor</artifactId>
                <version>1.5.3.RELEASE</version>
            </annotationProcessorPath>
        </annotationProcessorPaths>
    </configuration>
</execution>
```
Now an example ConfigurationProperties class:
```
@ConfigurationProperties(prefix = "logdb")
class LogDbProperties {
var enabled: Boolean = false
}
```
Now run `mvn compile` or `mvn clean compile`
And presto: you have a file named `spring-configuration-metadata.json` in `target/classes/META-INF`.
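With the class above, the property can then be set in `application.yml` (or the equivalent `application.properties`); the key is just the `prefix` plus the field name:

```
logdb:
  enabled: true
```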
Upvotes: 2 [selected_answer]
|
2018/03/16
| 486
| 1,865
|
<issue_start>username_0: This question is for the .NET framework. Putting a try/catch block in the calling method is a solution for this problem, but I want to handle it at an upper level. Is there any solution for this?
|
2018/03/16
| 509
| 1,788
|
<issue_start>username_0: ```
function clickCounter() {
if(typeof(Storage) !== "undefined") {
if (localStorage.clickcount) {// <----- here!
localStorage.clickcount = Number(localStorage.clickcount)+1;
} else {
localStorage.clickcount = 1;
}
document.getElementById("result").innerHTML = "You have clicked the button " + localStorage.clickcount + " time(s).";
} else {
document.getElementById("result").innerHTML = "Sorry, your browser does not support web storage...";
}
}
Click me!
Click the button to see the counter increase.
Close the browser tab (or window), and try again, and the counter will continue to count (is not reset).
```
`localStorage.clickcount` <-- it returns boolean ???
I can't understand it.
because there is no expression like `==` in it at all.<issue_comment>username_1: ECMAScript defines rules for converting values to boolean:
>
> **9.2 ToBoolean**
>
>
> * Number :
> The result is **false** if the argument is **+0**, **-0**, or **NaN**; otherwise the result is **true**.
>
>
>
Then the statement is equivalent to an explicit truthiness check:
```
if (Boolean(localStorage.clickcount))
```
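One nuance worth adding: `localStorage` stores its values as strings, so for the stored counter it is the String rule (**false** only for the empty string) that ends up applying. A quick plain-JS sketch of the coercion, runnable in any console:

```js
// The classic falsy primitives; each one fails an if-test:
const falsy = [false, 0, -0, NaN, '', undefined, null];
console.log(falsy.every(v => !v)); // true

// Web Storage values are strings, and for strings only '' is falsy,
// so even the string '0' counts as true:
console.log(Boolean('0')); // true
console.log(Boolean(''));  // false
```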
Upvotes: 2 <issue_comment>username_2: A term that you may see often in this context is **truthy**. MDN describes it as follows:
>
> In JavaScript, a **truthy** value is a value that is considered **true** when evaluated in a Boolean context.
>
>
>
Source: **[Truthy](https://developer.mozilla.org/en-US/docs/Glossary/Truthy)**
The *if*-statement needs a boolean value, so it takes whatever it is actually fed and interprets it as either true (truthy) or false (falsy).
Upvotes: 1 <issue_comment>username_3: These values evaluate to false:
* the boolean `false`
* the numbers `0` and `NaN`
* the empty string `''`
* `undefined`
* `null`

Anything else evaluates to true.
Upvotes: 0
|
2018/03/16
| 789
| 3,046
|
<issue_start>username_0: I am creating a dashboard app that fetches data from an endpoint and uses the setState method to assign variables from the JSON returned by the endpoint to state variables. When I make a change to state, some components like 'react-svg-gauge' will update but 'react-chartjs-2' does not update.
The below code is an example of how my state changes in my actual app. This example will display the correct value of the state variables in the chrome developer console but does not update the DOM accordingly.
```
import React, { Component } from 'react';
import {Doughnut} from 'react-chartjs-2';
class DoughnutExample extends Component {
state = {
data: {
labels: [
'Red',
'Green',
'Yellow'
],
datasets: [{
data: [300, 50, 100],
backgroundColor: [
'#FF6384',
'#36A2EB',
'#FFCE56'
],
hoverBackgroundColor: [
'#FF6384',
'#36A2EB',
'#FFCE56'
]
}]
}
}
componentDidMount() {
this.timer = setInterval(
() => this.increment(),
1000
)
}
componentWillUnmount() {
clearInterval(this.timer)
}
increment() {
let datacopy = Object.assign({}, this.state.data)
datacopy.datasets[0].data[0] = datacopy.datasets[0].data[0] + 10
console.log(datacopy.datasets[0].data[0])
this.setState({data: datacopy})
}
render(){
return(
)
}
}
export default DoughnutExample;
```
Am I using the lifecycle methods correctly? This code updates the state variable for the pie chart, but the chart in the DOM does not re-render correctly.<issue_comment>username_1: Check this link. You can see that he binds the function to the component.
<https://codepen.io/gaearon/pen/xEmzGg?editors=0010>
```
// This binding is necessary to make `this` work in the callback
this.handleClick = this.handleClick.bind(this);
```
Upvotes: 1 <issue_comment>username_2: The potential issue I see is that you update a nested property by mutating it. If `Doughnut` passes parts of `data` down to other components, they will not be notified that the props have changed. So it is necessary to make a deep copy:
```
increment() {
const datasetsCopy = this.state.data.datasets.slice(0);
const dataCopy = datasetsCopy[0].data.slice(0);
dataCopy[0] = dataCopy[0] + 10;
datasetsCopy[0].data = dataCopy;
this.setState({
data: Object.assign({}, this.state.data, {
datasets: datasetsCopy
})
});
}
```
You may also need to bind the function, as the other answer suggests.
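To make the failure mode concrete, here is what a top-level copy does in isolation (plain JS, no React needed; the object shape mirrors the chart state from the question):

```js
// Object.assign copies only the top level, so nested arrays are still shared.
const state = { data: { datasets: [{ data: [300, 50, 100] }] } };

const shallowCopy = Object.assign({}, state.data);
shallowCopy.datasets[0].data[0] += 10; // mutates the ORIGINAL nested array too

console.log(state.data.datasets[0].data[0]);               // 310: the old state changed
console.log(shallowCopy.datasets === state.data.datasets); // true: same reference
```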
Upvotes: 5 [selected_answer]<issue_comment>username_3: You can also empty the state variable and populate it again, like this:
```
this.setState({
mainData: {}
});
this.setState({
mainData: md
});
```
where `md` has been initialized with the new value of `mainData`.
Upvotes: 0
|
2018/03/16
| 2,270
| 5,770
|
<issue_start>username_0: I need to build a bash command in a script depending on some quoted or normal parameters. For example:
```
BAYES)
class="weka.classifiers.bayes.BayesNet"
A="-D -Q weka.classifiers.bayes.net.search.local.K2 -- -P 1 -S BAYES -E"
B="weka.classifiers.bayes.net.estimate.SimpleEstimator -- -A 0.5" ;;
LOGISTIC)
class="weka.classifiers.functions.Logistic"
A="-R 1.0E-8 -M -1 -num-decimal-places 4" ;;
SIMPLELOG)
class="weka.classifiers.functions.SimpleLogistic"
A="-I 0 -M 500 -H 50 -W 0.0" ;;
SMO)
class="weka.classifiers.functions.SMO"
A="-C 1.0 -L 0.001 -P 1.0E-12 -N 0 -V -1 -W 1 -K"
A1="weka.classifiers.functions.supportVector.PolyKernel -C 250007 -E 1.0" ;;
IBK)
class="weka.classifiers.lazy.IBk"
A="-K 1 -W 0 -A "
A1="weka.core.neighboursearch.LinearNNSearch -A"
A2="weka.core.EuclideanDistance -R first-last" ;;
KSTAR)
class="weka.classifiers.lazy.KStar"
A="-B 20 -M a" ;;
...
java -Xmx"$mem"m -cp "$WEKA_INSTALL_DIR/weka.jar" $class -s $i -t "$file" $A "$A1" $B "$B1"
```
However, my problem is that when $A1 is empty, the "$A1" parameter is not valid. The same goes for "$B1". And the parameters can occur in any combination ($A1 with $B1, $A1 without $B1, ...).
I've also tried including $A1 in $A, as follows:
```
A="-C 1.0 -L 0.001 -P 1.0E-12 -N 0 -V -1 -W 1 -K \"weka.classifiers.functions.supportVector.PolyKernel -C 250007 -E 1.0\""
```
and execute:
```
java -Xmx"$mem"m -cp "$WEKA_INSTALL_DIR/weka.jar" $class -s $i -t "$file" $A
```
but this doesn't work.<issue_comment>username_1: From your question, I did:
* Initialized the variables
* Completed the case statement
* Removed some not required double quotes
* Defined some variables for which you did not provide values for
* backslash your double quotes if you must have them in the java command
If you need double quotes for certain variables, put them in the variables themselves. This way you will not have an empty "" in your java command when a variable is empty. I did this for A1 in case IBK.
This will get you started, modify as required:
```
#!/bin/bash
#
mem="512"
WEKA_INSTALL_DIR='/opt/weka'
class=""
i="value-of-i"
A=""
A1=""
B=""
B1=""
file="SOMEFILE"
case $1 in
'BAYES')
class="weka.classifiers.bayes.BayesNet"
A="-D -Q weka.classifiers.bayes.net.search.local.K2 -- -P 1 -S BAYES -E"
B="weka.classifiers.bayes.net.estimate.SimpleEstimator -- -A 0.5"
;;
'LOGISTIC')
class="weka.classifiers.functions.Logistic"
A="-R 1.0E-8 -M -1 -num-decimal-places 4"
;;
'SIMPLELOG')
class="weka.classifiers.functions.SimpleLogistic"
A="-I 0 -M 500 -H 50 -W 0.0"
;;
'SMO')
class="weka.classifiers.functions.SMO"
A="-C 1.0 -L 0.001 -P 1.0E-12 -N 0 -V -1 -W 1 -K"
A1="weka.classifiers.functions.supportVector.PolyKernel -C 250007 -E 1.0"
;;
'IBK')
class="weka.classifiers.lazy.IBk"
A="-K 1 -W 0 -A "
A1="\"weka.core.neighboursearch.LinearNNSearch -A\""
A2="weka.core.EuclideanDistance -R first-last"
;;
'KSTAR')
class="weka.classifiers.lazy.KStar"
A="-B 20 -M a"
;;
*)
# default options
;;
esac
echo java -Xmx${mem}m -cp $WEKA_INSTALL_DIR/weka.jar $class -s $i -t $file $A $A1 $B $B1
```
Example:
```
./test.bash LOGISTIC
java -Xmx512m -cp /opt/weka/weka.jar weka.classifiers.functions.Logistic -s value-of-i -t SOMEFILE -R 1.0E-8 -M -1 -num-decimal-places 4
./test.bash IBK
java -Xmx512m -cp /opt/weka/weka.jar weka.classifiers.lazy.IBk -s value-of-i -t SOMEFILE -K 1 -W 0 -A "weka.core.neighboursearch.LinearNNSearch -A"
```
Upvotes: -1 <issue_comment>username_2: You cannot safely and reliably store multiple arguments in a single string; you need to use arrays; this is their intended use case. Make sure to initialize any arrays that won't be used, so that they "disappear" when expanded.
```
# If A is undefined, "${A[@]}" is an empty string.
# But if A=(), then "${A[@]}" simply disappears from the command line.
A=()
B=()
A1=()
A2=()
case $something in
BAYES)
class="weka.classifiers.bayes.BayesNet"
A=(-D -Q weka.classifiers.bayes.net.search.local.K2 -- -P 1 -S BAYES -E)
B=(weka.classifiers.bayes.net.estimate.SimpleEstimator -- -A 0.5);;
LOGISTIC)
class="weka.classifiers.functions.Logistic"
A=(-R 1.0E-8 -M -1 -num-decimal-places 4);;
SIMPLELOG)
class="weka.classifiers.functions.SimpleLogistic"
A=(-I 0 -M 500 -H 50 -W 0.0) ;;
SMO)
class="weka.classifiers.functions.SMO"
A=(-C 1.0 -L 0.001 -P 1.0E-12 -N 0 -V -1 -W 1 -K)
A1=(weka.classifiers.functions.supportVector.PolyKernel -C 250007 -E 1.0) ;;
IBK)
class="weka.classifiers.lazy.IBk"
A=(-K 1 -W 0 -A)
A1=(weka.core.neighboursearch.LinearNNSearch -A)
A2=(weka.core.EuclideanDistance -R first-last);;
KSTAR)
class="weka.classifiers.lazy.KStar"
A=(-B 20 -M a) ;;
esac
```
and *always* quote parameter expansions.
```
java -Xmx"$mem"m -cp "$WEKA_INSTALL_DIR/weka.jar" \
"$class" -s "$i" -t "$file" "${A[@]}" "${A1[@]}" "${B[@]}" "${B1[@]}"
```
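The reason for initializing the unused variables as empty arrays can be shown directly; in this bash sketch, `set --` just resets the positional parameters so we can count what each expansion produces:

```
# An empty array expands to nothing; an empty string still yields one argument.
A1=()
set -- "${A1[@]}"
echo "$#"   # prints 0: "${A1[@]}" disappeared from the command line

B1=""
set -- "$B1"
echo "$#"   # prints 1: an empty-string argument was passed
```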
Upvotes: 2 [selected_answer]<issue_comment>username_3: SOLUTION:
I solved all my problems using only a parameter A like this:
```
BAYES)
class="weka.classifiers.bayes.BayesNet"
A=(-D -Q weka.classifiers.bayes.net.search.local.K2 -- -P 1 -S BAYES -E weka.classifiers.bayes.net.estimate.SimpleEstimator -- -A 0.5);;
SMO)
class="weka.classifiers.functions.SMO"
A=(-C 1.0 -L 0.001 -P 1.0E-12 -N 0 -V -1 -W 1 -K "weka.classifiers.functions.supportVector.PolyKernel -C 250007 -E 1.0");;
java -Xmx"$mem"m -cp "$WEKA_INSTALL_DIR/weka.jar" $class -s $i -t "$file" "${A[@]}"
```
Upvotes: 0
|
2018/03/16
| 1,595
| 4,785
|
<issue_start>username_0: I am making a tic-tac-toe game, where I need to create buttons and then add events to them. There should be three buttons in a row, three in the next row, and another three in the row after that. I have used the break tag as well as the \r\n notation, but it still does not work. Please help.
```
var table= [];
var blocks = 9;
var player,boardId;
winningCombinations = [[0,1,2],[3,4,5],[6,7,8],[0,3,6],[1,4,7],[2,5,8],[0,4,8],[2,4,6]];
$(document).ready(function(){
buttonId = 0;
for (var index = 0; index < blocks; index++) {
button1 = document.createElement("button");
if((index==2||index==5||index==8)&&(buttonId==3||buttonId==6||buttonId==9)){
button1.innerHTML="\r\n"+"<br>";
}
button1.innerHTML = " + " ;
button1.id = buttonId;
button1.setAttribute("value", buttonId);
button1.setAttribute("text", buttonId);
button1.style.fontFamily = "Times New Roman";
button1.style.backgroundSize = "50px";
button1.style.backgroundColor = "#C0C0C0";
button1.style.fontSize = "25px";
button1.style.marginBottom = "10px";
button1.style.marginLeft = "5px";
button1.style.marginRight = "5px";
document.body.appendChild(button1);
buttonId++;
}
});
```<issue_comment>username_1: Your condition for appending `br` is incorrect; I have added the modified code in this [js bin](https://jsbin.com/zaqacuxemi/edit?html,js,console,output)
```js
var table= [];
var blocks = 9;
var player,boardId;
winningCombinations = [[0,1,2],[3,4,5],[6,7,8],[0,3,6],[1,4,7],[2,5,8],[0,4,8],[2,4,6]];
$(document).ready(function(){
buttonId = 0;
for (var index = 0; index < blocks; index++) {
button1 = document.createElement("button");
button1.innerHTML = " + " ;
button1.id = buttonId;
button1.setAttribute("value", buttonId);
button1.setAttribute("text", buttonId);
button1.style.fontFamily = "Times New Roman";
button1.style.backgroundSize = "50px";
button1.style.backgroundColor = "#C0C0C0";
button1.style.fontSize = "25px";
button1.style.marginBottom = "10px";
button1.style.marginLeft = "5px";
button1.style.marginRight = "5px";
document.body.appendChild(button1);
if((index==2||index==5||index==8)){
document.body.appendChild(document.createElement('br'));
}
buttonId++;
}
});
```
```html
```
Upvotes: 0 <issue_comment>username_2: CSS Solution
============
You can do that by using CSS, display the buttons as blocks and float them to the left.
Clear every third button.
```js
var table = [];
var blocks = 9;
var player, boardId;
winningCombinations = [
[0, 1, 2],
[3, 4, 5],
[6, 7, 8],
[0, 3, 6],
[1, 4, 7],
[2, 5, 8],
[0, 4, 8],
[2, 4, 6]
];
$(document).ready(function() {
buttonId = 0;
for (var index = 0; index < blocks; index++) {
button1 = document.createElement("button");
if ((index == 2 || index == 5 || index == 8) && (buttonId == 3 || buttonId == 6 || buttonId == 9)) {
button1.innerHTML = "\r\n" + "<br>";
}
button1.innerHTML = " + ";
button1.id = buttonId;
button1.setAttribute("value", buttonId);
button1.setAttribute("text", buttonId);
button1.style.fontFamily = "Times New Roman";
button1.style.backgroundSize = "50px";
button1.style.backgroundColor = "#C0C0C0";
button1.style.fontSize = "25px";
button1.style.marginBottom = "10px";
button1.style.marginLeft = "5px";
button1.style.marginRight = "5px";
document.body.appendChild(button1);
buttonId++;
}
});
```
```css
button {
display: block;
float: left;
}
button:nth-of-type(3n+1) {
clear: both;
}
```
```html
```
JS Solution
===========
```js
var table = [];
var blocks = 9;
var player, boardId;
winningCombinations = [
[0, 1, 2],
[3, 4, 5],
[6, 7, 8],
[0, 3, 6],
[1, 4, 7],
[2, 5, 8],
[0, 4, 8],
[2, 4, 6]
];
$(document).ready(function() {
buttonId = 0;
for (var index = 0; index < blocks; index++) {
button1 = document.createElement("button");
button1.innerHTML = " + ";
button1.id = buttonId;
button1.setAttribute("value", buttonId);
button1.setAttribute("text", buttonId);
button1.style.fontFamily = "Times New Roman";
button1.style.backgroundSize = "50px";
button1.style.backgroundColor = "#C0C0C0";
button1.style.fontSize = "25px";
button1.style.marginBottom = "10px";
button1.style.marginLeft = "5px";
button1.style.marginRight = "5px";
document.body.appendChild(button1);
if( (index+1) % 3 == 0 ) {
document.body.appendChild( document.createElement("br") );
}
buttonId++;
}
});
```
```html
```
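The `(index + 1) % 3 == 0` row-break test from the JS solution can be checked on its own, without any DOM:

```js
// Collect the indices after which a <br> is inserted: every third button.
const breaksAfter = [];
for (let index = 0; index < 9; index++) {
  if ((index + 1) % 3 === 0) breaksAfter.push(index);
}
console.log(breaksAfter); // [2, 5, 8], i.e. three rows of three buttons
```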
Upvotes: 1
|
2018/03/16
| 1,155
| 3,836
|
<issue_start>username_0: I am experimenting with *Vue.js* (version 2.5.16) and its `v-for` directive which should be able to repeat an element according to some integer range, according to the [official Vue.js documentation](https://v2.vuejs.org/v2/guide/list.html#v-for-with-a-Range). Concretely, I am trying to write a component that draws a number of circular counters based on an integer-valued property.
The following snippet, containing the hard-coded literal value `10`, does indeed render precisely ten circles: ([jsfiddle](https://jsfiddle.net/StephenM/vvj23rae/17/))
```
```
However, hard-coding the value is of limited utility. I have added an integer-valued property to my component as follows: (*typescript*)
```
export default Vue.extend({
props: {
counter: Number
},
...
```
... and tried the following variants of the `v-for` directive:
* `v-for="n in counter" :key="n"` ([jsfiddle](https://jsfiddle.net/StephenM/vvj23rae/19/))
* `v-for="n in {counter}" :key="n"` ([jsfiddle](https://jsfiddle.net/StephenM/vvj23rae/20/))
But neither of them achieve a variable number of rendered circles. (*Note: I have employed the Vue developer tools to ensure that the `counter` property of the component does, in fact, hold the correct value.*)
This brings me to my question: **How do you use `v-for` with an integer range set by a property of your component?**
If this is not possible, then the integer-range support of `v-for` is indeed rather useless. How often does one want to use a hard-coded range?
However, I still want the behaviour. How would one implement it without `v-for`? I can think of several possible alternatives:
1. Write my own render function.
2. Use the `counter` property in a *computed* property that returns an array of the desired size and use `v-for` on that array.
3. Bind `v-for` to an array and hook into changes of the `counter` property to update that internal array using only array mutations that are listed on the [array change detection page](https://v2.vuejs.org/v2/guide/list.html#Mutation-Methods) so that *Vue* does not discard and rebuild the entire DOM substructure on every change.
Option 1 seems like a tonne of work for such a simple use-case. Option 2 is trivially easy but I fear that it will cause *Vue* to discard and regenerate ***all*** the repeated child elements on every change. Option 3 seems like it would perform the best, if it is possible, but I don't really know how to go about it. (As I said, I am investigating *Vue* for the first time.)
What to do?<issue_comment>username_1: As mentioned in the [`v-for` docs](https://v2.vuejs.org/v2/guide/list.html#v-for-with-a-Range), you can use `v-for` with a numeric range directly:
>
> `v-for` can also take an integer. In this case it will repeat the template that many times.
>
>
>
So you can just use `v-for="n in counter"`, as in the example below:
```js
new Vue({
el: '#app',
data() {
return {
counter: 10
}
}
});
```
```css
.counter {
height: 100px;
}
```
```html
### Select number of circles
```
Upvotes: 1 <issue_comment>username_2: You just have to bind the value to your `counter` property. Let's assume your component is called `circ`.
```
<circ :counter="10"></circ>
```
Demo: <http://jsbin.com/vebijiyini/edit?html,js,output>
Upvotes: 3 [selected_answer]<issue_comment>username_3: Seeing your [fiddle](https://jsfiddle.net/StephenM/vvj23rae/19/) , you are passing the prop named `value` as
```
<counter-component value="14" />
```
You are not dynamically binding the `value` prop using `v-bind` or `:` (shorthand)
So the number `14` you are passing as `value` is evaluated as a string
So bind the prop to consider it as a number
```
<counter-component v-bind:value="14" />
```
Or
```
<counter-component :value="14" />
```
Here is your [updated fiddle](https://jsfiddle.net/vvj23rae/35/)
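The string-vs-number distinction behind this fix can be seen without Vue at all; the two constants below stand in for what the prop receives in each case:

```js
const unbound = "14"; // what value="14" delivers: a literal string
const bound = 14;     // what :value="14" delivers: an evaluated expression
console.log(typeof unbound); // "string"
console.log(typeof bound);   // "number"
```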
Upvotes: 1
|
2018/03/16
| 239
| 857
|
<issue_start>username_0: I have a toast.js file which contains the following code:
```
$(document).ready(function() {
$(".tstInfo").on("click",function(){
$.toast({
heading: 'Welcome to my Elite admin',
text: 'Use the predefined ones, or specify a custom position object.',
position: 'top-right',
loaderBg:'#ff6849',
icon: 'info',
hideAfter: 3000,
stack: 6
});
});});
```
On my .aspx page I have this button:
```
```
Now I want to call the JS function when I click on the button. Can somebody please tell me how to do it?
Thanks in advance<issue_comment>username_1: In your code, replace `btnShow` with `tstInfo` and remove `OnClick`; it will work then.
Upvotes: 2 <issue_comment>username_2: You can use `OnClientClick` like this :
`OnClientClick="javascript:alert('Hello')"`
Upvotes: 1
|
2018/03/16
| 505
| 1,705
|
<issue_start>username_0: I get a "no database selected" error and I can't figure out why. It would be nice if someone could help me out; code below. There are no typos in there, and I'm using XAMPP/Apache as the server, so localhost should be right, I guess?
```
<?php
$servername = "localhost";
$dbname = "databank";
$conn = mysqli_connect($servername, $dbname);
if(!$conn)
{
die("Connection failed: " . mysqli_connect_error());
}
$Kundennummer = $_POST["id"];
$Vorname = $_POST["vorname"];
$Nachname = $_POST["nachname"];
$plz = $_POST["plz"];
$strasse = $_POST["strasse"];
$hausnummer = $_POST["hausnummer"];
$sql = "INSERT INTO kundendaten (Kundennummer, ProduktID, Vorname,Nachname, Hausnummer, Strasse, PLZ)
Values ('$Kundennummer', '0', '$Vorname', '$Nachname', '$hausnummer', '$strasse', '$plz')";
if(mysqli_query($conn, $sql))
{
echo "DONE";
}
else
{
echo "ERROR: " . $sql . "<br>" . mysqli_error($conn);
}
mysqli_close($conn);
?>
```<issue_comment>username_1: Learn **[mysqli\_connect()](https://www.w3schools.com/php/func_mysqli_connect.asp)** in php
The valid syntax is:
```
mysqli_connect(host,username,password,dbname,port,socket);
```
Upvotes: 2 <issue_comment>username_2: You forgot to add the username and password to mysqli_connect().
Please check below sample.
```
<?php
$con = mysqli_connect("localhost", "mysqli-user", "mysqli-password", "databank");
// Check connection
if (mysqli_connect_errno())
{
    echo "Failed to connect to MySQL: " . mysqli_connect_error();
}
?>
```
I hope this might be helpful for you to resolve your issue.
Upvotes: 1 [selected_answer]
|
2018/03/16
| 602
| 2,209
|
<issue_start>username_0: I have seen a few web posts and solutions, but they do not seem to work.
Am I missing the obvious?
This is what I have:
At the solution level (not where the packages reside) I have a folder like the pic below.
[](https://i.stack.imgur.com/K0G5k.png)
.tfignore contains the following
```
# Ignore NuGet Packages
*.nupkg
# Ignore the NuGet packages folder in the root of the repository. If needed, prefix 'packages'
# with additional folder names if it's not in the same folder as .tfignore.
packages
# Omit temporary files
project.lock.json
project.assets.json
*.nuget.props
```
nuget.config.xml contains
```
<?xml version="1.0" encoding="utf-8"?>
```
When I try to check in items in Visual Studio 2017, it still shows all the packages.
Can somebody help with what I am doing wrong?
thanks<issue_comment>username_1: 1. In my opinion .tfignore should be located in solution folder directly and the file name is .tfignore and not .tfignore.txt
2. The nuget config name is NuGet.config and not nuget.config.xml
3. If packages were installed before the .tfignore was created, you may have to "undo" the packages folder first in Source Control Explorer.
4. Personally I would ignore the entire nuget packages folder with exception of targets
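A minimal layout that matches the points above; the solution name is a hypothetical stand-in, and both files sit directly next to the `.sln` with no extra extension:

```
MySolutionFolder/
├── MySolution.sln
├── .tfignore        (not .tfignore.txt)
├── NuGet.config     (not nuget.config.xml)
└── packages/        (ignored via .tfignore)
```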
Upvotes: 1 <issue_comment>username_2: According to your screenshot, **you are not using a correctly named `.tfignore` file.** This file does **not have any suffix**. One way to create it is to rename a new .txt file to `".tfignore."`; it will automatically change to the right `.tfignore` file.
You can also use the **automatically generated .tfignore file**; follow the steps in [my answer](https://stackoverflow.com/a/36784750/5391065) here.
For more detailed info about the `.tfignore` file, please refer to this [tutorial](https://learn.microsoft.com/en-us/vsts/tfvc/add-files-server#tfignore-file-rules).
***Note:*** This `.tfignore` file will not affect files that are already in source control; you need to remove them from source control first. Also make sure your `.tfignore` file has been checked in to source control.
Upvotes: 4 [selected_answer]
|
2018/03/16
| 888
| 3,229
|
<issue_start>username_0: I have put one checkbox in the header to select the other checkboxes, but the click event is not working on the header.
Javascript code
```
$(document).ready(function () {
$('#example').on('click','.checkAll', function () {
alert("hi");
if ($(this).prop('checked')) {
$('.otherChechBox').prop('checked', true);
selected = [];
$(".otherChechBox:checked").each(function () {
selected.push($(this).val());
});
} else {
$('.otherChechBox').prop('checked', false);
selected = [];
}
});
});
```
HTML Code
```
| | # | Permission Parent | Permission Name | Slug | Action |
| --- | --- | --- | --- | --- | --- |
@foreach($permissions as $permission)
| | {{$permission->id}} | {{$permission->permission\_parent}} | {{$permission->permission\_name}} | {{$permission->slug}} |
|
@endforeach
```
i have also tried this
```
$('#example').on('click','.checkAll', function () {..});
```
The click event works perfectly on checkboxes in the table body, but not on the one in the table header.<issue_comment>username_1: Try to wrap the js code in a document.ready function.
```
$(document).ready(function(){
$('#example').on('click','.checkAll', function () {
alert("hi");
if ($(this).prop('checked')) {
$('.otherChechBox').prop('checked', true);
selected = [];
$(".otherChechBox:checked").each(function () {
selected.push($(this).val());
});
} else {
$('.otherChechBox').prop('checked', false);
selected = [];
}
});
});
```
Update
------
```
$(document).ready(function(){
$('#example').on('click','.checkAll', function () {
alert("hi");
});
});
```
Try above code first to check if the event is being triggered or not. If you get the alert there is another issue in your js.
Update 2
--------
Try using Onclick on input.
```
function checkboxfunction(){
alert("clicked");
}
```
Upvotes: 1 <issue_comment>username_2: So you want to check all other checkboxes in the table when clicking on the `.checkAll` checkbox?
See snippet below, let me know if this is what you wanted.
```js
$('#example').on('click', '.checkAll', function() {
let otherCheck = $('input[type="checkbox"]').not(this)
otherCheck.prop('checked', this.checked);
});
```
```html
| | # | Permission Parent | Permission Name | Slug | Action |
| --- | --- | --- | --- | --- | --- |
```
Upvotes: 0 <issue_comment>username_3: I have figured it out; it works with the snippet below.
```
$('#example thead th').on('click','.checkAll', function () {..});
```
Upvotes: 1 [selected_answer]<issue_comment>username_4: Similar issue.
Datatable was firing
```
$('#example').on('change','input[type="checkbox"]'....
```
for me when the checkbox was in the thead, UNTIL I scrolled the table and the heading became fixed at the top (using the extra fixedHeader.min.js setting). The `on('change')` handler then no longer fired if I ticked/unticked the checkbox in the heading.
Adding in an extra
```
$('#example thead th').on('click', '#myclickfield', function () {
```
fired every time (i.e. whether heading was visible as part of table, or only visible as part of the fixedheader redisplay of heading row).
Upvotes: 0
|
2018/03/16
| 997
| 2,822
|
<issue_start>username_0: Here is some HTML and CSS. I need to show an arrow on hover inside the link, but I can't get it to work. How can I fix it?
```css
.header-text-links a {
display: block;
width: 278px;
height: 55px;
padding: 0px 20px;
display: -webkit-box;
display: -ms-flexbox;
display: flex;
justify-content: center;
align-items: center;
border: 1px solid #fab608;
color: #fab608;
font-size: 18px;
font-family: "Futura Demi";
text-transform: uppercase;
}
.header-text-links a .svg-inline--fa {
display: none;
}
.header-text-links a:hover {
color: white;
background: #fab608;
text-decoration: none;
justify-content: space-around;
}
.header-text-links a:hover .header-text-links a .svg-inline--fa {
display: block;
}
```
```html
[Some text](#)
```
Also, I tried to select the icon via the classes `.fas` and `.fa-arrow-right`, and even tried targeting the `path` tag, but the result is the same.<issue_comment>username_1: You are using the wrong selector... try `.header-text-links a:hover .svg-inline--fa`
**Why?**
For better understanding remove `:hover` for just once, so it will look like
```
.header-text-links a .header-text-links a .svg-inline--fa
```
which means it will target `.svg-inline--fa` inside a `.header-text-links a` that is itself nested inside another `.header-text-links a`, an element that doesn't exist in your markup.
```css
.header-text-links a {
display: block;
width: 278px;
height: 55px;
padding: 0px 20px;
display: -webkit-box;
display: -ms-flexbox;
display: flex;
justify-content: center;
align-items: center;
border: 1px solid #fab608;
color: #fab608;
font-size: 18px;
font-family: "Futura Demi";
text-transform: uppercase;
}
.header-text-links a .svg-inline--fa {
display: none;
}
.header-text-links a:hover {
color: white;
background: #fab608;
text-decoration: none;
justify-content: space-around;
}
.header-text-links a:hover .svg-inline--fa {
display: block;
}
```
```html
[Some text](#)
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: **Best solution**
Only change in css
```
.header-text-links a:hover > .svg-inline--fa {
display: inline-block !important;
}
```
```css
.header-text-links a {
display: block;
width: 278px;
height: 55px;
padding: 0px 20px;
display: -webkit-box;
display: -ms-flexbox;
display: flex;
justify-content: center;
align-items: center;
border: 1px solid #fab608;
color: #fab608;
font-size: 18px;
font-family: "Futura Demi";
text-transform: uppercase;
}
.header-text-links a .svg-inline--fa {
display: none;
}
.header-text-links a:hover {
color: white;
background: #fab608;
text-decoration: none;
justify-content: space-around;
}
.header-text-links a:hover > .svg-inline--fa {
display: inline-block !important;
}
```
```html
[Some text](#)
```
Upvotes: 1
|
2018/03/16
| 1,275
| 2,822
|
<issue_start>username_0: I have this zookeeper config:
```
autopurge.snapRetainCount=10
autopurge.purgeInterval=1
snapCount=3000000
```
And my `/opt/zookeeper-3.4.11/data` dir keeps growing; no `autopurge` happens.
I tried to clean up this mess with `zkCleanup.sh`, but it does nothing.
```
sysadmin@clickhouse-node1:/opt/zookeeper-3.4.11/bin$ ls /opt/zookeeper-3.4.11/data/version-2/ | wc -l
18
sysadmin@clickhouse-node1:/opt/zookeeper-3.4.11/bin$ ./zkCleanup.sh -n 10
sysadmin@clickhouse-node1:/opt/zookeeper-3.4.11/bin$ ls /opt/zookeeper-3.4.11/data/version-2/ | wc -l
18
sysadmin@clickhouse-node1:/opt/zookeeper-3.4.11$ ls data/version-2/
log.9028ed00e log.902eafb66 log.903362dcb log.90374bde5 log.903b5f685 log.903f8e16a log.b000000a8 log.b004d4eac log.b0083c3e1
log.902b9c065 log.9030ece30 log.903590e4a log.90395a935 log.903d9b0f0 log.90421e5d6 log.b002462e2 log.b0068bba3 log.b00a38f08
```
My zkCleanup.sh <https://pastebin.com/Q9XSpSfz>
UPD: log from new zoo cleanup script:
`sysadmin@clickhouse-node1:/opt/zookeeper-3.4.11/bin$ ./zoo_clean.sh -n 10
/opt/zookeeper-3.4.11/data
/opt/zookeeper-3.4.11/logs
/usr/lib/jvm/java-8-oracle/bin/java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /opt/zookeeper-3.4.11/bin/../build/classes:/opt/zookeeper-3.4.11/bin/../build/lib/*.jar:/opt/zookeeper-3.4.11/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper-3.4.11/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper-3.4.11/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper-3.4.11/bin/../lib/log4j-1.2.16.jar:/opt/zookeeper-3.4.11/bin/../lib/jline-0.9.94.jar:/opt/zookeeper-3.4.11/bin/../lib/audience-annotations-0.5.0.jar:/opt/zookeeper-3.4.11/bin/../zookeeper-3.4.11.jar:/opt/zookeeper-3.4.11/bin/../src/java/lib/*.jar:/opt/zookeeper-3.4.11/bin/../conf: org.apache.zookeeper.server.PurgeTxnLog /opt/zookeeper-3.4.11/logs /opt/zookeeper-3.4.11/data -n 10`
Again, nothing happens.
Any idea how to fix this?<issue_comment>username_1: <https://clickhouse.yandex/docs/en/operations/tips/#zookeeper>
ClickHouse requires ZooKeeper 3.5+.
Upvotes: 0 <issue_comment>username_2: **`dataDir` and `dataLogDir` in your `conf/zoo.cfg` are relative paths**; the following instructions may help:
```
cd /opt/zookeeper-3.4.11
./bin/zkCleanup.sh -n 10
```
then you can see the output like:
```
Removing file: Mar 15, 2018 1:57:16 PM data/log/version-2/log.9028ed00e
Removing file: Mar 12, 2018 5:33:11 PM data/log/version-2/log.902eafb66
```
Upvotes: 1 <issue_comment>username_3: It is a bug in zkCleanup.sh in 3.4.11 docker image <https://github.com/apache/zookeeper/pull/475>
The simple fix is to edit zkCleanup.sh, changing

```
org.apache.zookeeper.server.PurgeTxnLog "$ZOODATALOGDIR" "$ZOODATADIR" $*
```

to

```
org.apache.zookeeper.server.PurgeTxnLog "$ZOODATADIR" "$ZOODATALOGDIR" $*
```
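If you'd rather script the swap than edit the file by hand, here is a sketch. It runs against a stand-in copy so it is safe to try; point `sed` at your real `bin/zkCleanup.sh` instead:

```shell
# Create a stand-in copy containing the buggy line (GNU sed assumed for -i)
cat > zkCleanup.demo <<'EOF'
org.apache.zookeeper.server.PurgeTxnLog "$ZOODATALOGDIR" "$ZOODATADIR" $*
EOF

# Swap the two directory arguments so the data dir comes first
sed -i 's|"$ZOODATALOGDIR" "$ZOODATADIR"|"$ZOODATADIR" "$ZOODATALOGDIR"|' zkCleanup.demo

cat zkCleanup.demo
# the line now has "$ZOODATADIR" before "$ZOODATALOGDIR"
```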
Upvotes: 2 [selected_answer]
|
2018/03/16
| 338
| 1,381
|
<issue_start>username_0: I am working with an overseas developer who is developing one of my apps.
The app uses an auto-renewable membership, and he just pointed out to me that he needs the App-Specific Shared Secret, since his current role and permissions (developer & marketer) do not give him access to it.
I am not sure how I should feel about this or if there wouldn't be a better way for me to proceed, in order for him to finish off the in-app purchases part?<issue_comment>username_1: If you don't want to share your master shared secret you can generate an app-specific shared secret. The developer needs access to a shared secret to securely implement subscriptions. Shared secrets can also be regenerated.
This is what apple recommends:
>
> The app-specific shared secret is a unique code to receive receipts
> for only this app’s auto-renewable subscriptions. You may want to use
> an app-specific shared secret if you’re transferring this app to
> another developer, or if you want to keep your master shared secret
> private.
>
>
>
Upvotes: 0 <issue_comment>username_2: Just send this "App-Specific Shared Secret" if he's working on the in-app purchase part. Since this secret can always be regenerated, it's safe to send it to him. You can still regenerate it after he finishes his part, and you can replace it in his code if you have any worries.
Upvotes: 1
|
2018/03/16
| 1,209
| 4,262
|
<issue_start>username_0: I decided to use the Arduino IDE for ESP8266 to program my ESP8266, aiming to read data (a byte array) from a TTL camera. The ESP8266-01 (8 Mbit flash ROM) has 8 pins, so I decided to use GPIO16 (TXD) and GPIO2 (RXD) as SoftwareSerial pins to achieve this. But the ESP8266 printed an exception and I am not sure what happened.
[](https://i.stack.imgur.com/9Nxd2.png)
So I have a few question about this crash.
1. I want to know whether I can run SoftwareSerial on the ESP8266 (the ESP8266 has 2 UARTs: Serial0 is used for serial printing, while Serial1 is transmit-only), so I decided to use SoftwareSerial.
2. I don't know what the exception info means, as I can't understand assembly language.
3. I read the documents on GitHub for the Arduino IDE for ESP8266, but I don't understand the pin definitions on the ESP8266 well when programming with the Arduino IDE. For example, when using GPIO16 (TXD) and GPIO2 (RXD) for SoftwareSerial, we might use the constructor `SoftwareSerial Camera(int Pin1, int Pin2)`. I want to know what the corresponding Pin1 and Pin2 values for GPIO2 and GPIO16 are. The document really confused me.
Here is my key code.
```
#include
#include "camera_VC0706.h"
#include

HTTPClient httpClient;

// define camera and byte array; I am not sure whether the pins are correct or not
int rxPin = 2;
int txPin = 16;
SoftwareSerial cameraSerial(rxPin, txPin); // RX, TX
camera_VC0706 Camera = camera_VC0706(&cameraSerial);

// take photo
bool takePhoto() {
    byte time = 0;
    writeRawSerial(F("WAITING"), true);
    while (!Camera.begin(115200)) {
        if (++time <= 5) { // try 5 times
            writeRawSerial(F("."), false);
        } else {
            writeRawSerial("", true);
            writeSerial(F("NO_CAMERA"));
            return false;
        }
    }
    writeRawSerial(F("CAMERA_ON"), false);
    writeRawSerial(F(""), false);
    writeRawSerial(Camera.getVersion(), false);
    writeSerial(F(""));
    Camera.setImageSize(VC0706_320x240);
    if (!Camera.takePicture()) {
        writeSerial(F("TAKE_PHOTO_FAIL"));
        return false;
    } else {
        byte imgSize = Camera.frameLength();
        writeSerial(F("TAKE_PHOTO_SUCCESS"));
        writeRawSerial(F(""), false);
        writeRawSerial(String(imgSize, DEC), false);
        writeSerial(F(""));
        freeHeap(); // defined elsewhere; not a key function, only shows the free heap of the esp8266
        imgBuffer = Camera.readPicture(imgSize);
        freeHeap();
        Camera.resumeVideo();
        writeSerial(F("SAVE_PHOTO_SUCCESS"));
        return true;
    }
}
```
Thank you for reading my questions.<issue_comment>username_1: So, the stack trace shown is pretty standard for any system, and it gives you pretty much everything you need to track it down - It is not assembly code, it is hexadecimal addresses of your binary.
First, you have an Exception 28 - If we look at the [ESP8266 Reference](https://www.espressif.com/sites/default/files/documentation/esp8266_reset_causes_and_common_fatal_exception_causes_en.pdf), you can see 28 means you either have a bad pointer, or you're trying to access non-cached data from an interrupt/cache is turned off.
Next, you have an address, epc1. This is where the code crashes, which is a hexadecimal address to the binary you uploaded. Also, ctx: cont is helpful, as it indicates the program crashed in your code and not the system code.
You also have excvaddr, which is the address you tried to access, 0x10.
Given that, you almost certainly have a NULL pointer somewhere in your code that you are dereferencing. You should use the ESP Exception Decoder to determine what line has the offending crash and bad pointer usage.
In regards to using the software serial - You'd just use the pin numbers on the datasheet. GPIO 14 would be number 14 in the constructor. I found it worked, but caused very bizarre crashes in the production products I worked with after 1+ days of usage, so do not recommend it at all. We ended up using Serial1 for debug printing, and freed up Serial0 for general UART communications - Way more reliable.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You need to add the standard Arduino functions of setup() and loop() or it won't know where to start.
If you have a look at the example sketches you should be able to get something running, and then you can start adding your takePhoto code.
Upvotes: 2
|
2018/03/16
| 529
| 1,934
|
<issue_start>username_0: Where do we include any external fonts in Angular storybook ?
I'm using materialize css and some google fonts.
HTML :
```
```
CSS:
Within angular-cli.json I have this
```
"styles": [
"styles.css",
"../node_modules/materialize-css/dist/css/materialize.css"
]
```<issue_comment>username_1: I solved it with a hack: import the `loadGlobalStyles` function in storybook's config:
```
const links = [
{
rel: 'stylesheet',
href: 'https://fonts.googleapis.com/icon?family=Material+Icons',
},
{
rel: 'stylesheet',
href: 'https://fonts.googleapis.com/css?family=Montserrat:400',
},
];
// to prevent duplication with HMR
function isLinkExist(option) {
return !!document.querySelector(`link[href='${option.href}']`);
}
function insertLink(options) {
const link = document.createElement('link');
Object.assign(link, options);
document.head.insertBefore(link, document.head.firstElementChild);
}
export function loadGlobalStyles() {
links.forEach(link => isLinkExist(link) ? null : insertLink(link));
}
```
Upvotes: 0 <issue_comment>username_2: About the link tags for loading Google fonts, [Custom Head Tags](https://storybook.js.org/docs/configurations/add-custom-head-tags/) is designed for such situation:
Add a file named `preview-head.html` under the `.storybook` directory (by default), containing the HTML tags that you want to insert:
```
```
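For example, to pull in the Google Fonts mentioned elsewhere in this thread, a `.storybook/preview-head.html` could contain (treat the exact font families as placeholders for your own):

```html
<link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons">
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Montserrat:400">
```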
To load the styles listed in the `angular.json` file (`angular-cli.json` was replaced in Angular 6+), make sure that you have installed `@storybook/angular` version 5+, which provides a default webpack configuration internally that reads from `angular.json`. See the official example: <https://github.com/storybookjs/storybook/blob/next/examples/angular-cli/package.json>
I've been using the features above with Angular 8, and they work as of version 5.3.
Upvotes: 4
|
2018/03/16
| 839
| 2,746
|
<issue_start>username_0: So there are a lot of posts around this subject, but none of them seems to help.
I have an application running on a wildfly server inside a docker container.
And for some reason I cannot connect my remote debugger to it.
So, it is a wildfly 11 server that has been started with this command:
```
/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 -c standalone.xml --debug 9999;
```
And in my standalone.xml I have this:
```
```
The console output seems promising:
`Listening for transport dt_socket at address: 9999`
I can even access the admin console with the credentials admin:admin on `localhost:9990/console`
However, IntelliJ refuses to connect... I've created a remote JBoss Server configuration that, in the server tab, points to localhost with management port 9990.
And in the startup/connection tab I've entered 9999 as remote socket port.
The docker image has exposed the ports 9999 and 9990, and the docker-compose file binds those ports as is.
Even with all of this IntelliJ throws this message when trying to connect:
```
Error running 'remote':
Unable to open debugger port (localhost:9999): java.io.IOException "handshake failed - connection prematurally closed"
```
followed by
```
Error running 'remote':
Unable to connect to the localhost:9990, reason:
com.intellij.javaee.process.common.WrappedException: java.io.IOException: java.net.ConnectException: WFLYPRT0053: Could not connect to remote+http://localhost:9990. The connection failed
```
I'm completely lost as to what the issue might be...
An interesting addition: after IntelliJ fails, if I invalidate caches and restart, then wildfly reprints the message saying that it is listening on port 9999<issue_comment>username_1: Not sure if this can be seen as an answer, since it goes around the problem.
But the way I solved this was by adding a "pure" remote configuration in IntelliJ instead of a JBoss remote one. This means that it won't automagically deploy, but I'm fine with that.
Upvotes: 2 [selected_answer]<issue_comment>username_2: In case someone else in the future comes to this thread with the same issue, I found this solution here:
<https://github.com/jboss-dockerfiles/wildfly/issues/91#issuecomment-450192272>
Basically, apart from the `--debug` parameter, you also need to pass `*:8787`.
Dockerfile:
```
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "--debug", "*:8787"]
```
docker-compose:
```
ports:
- "8080:8080"
- "8787:8787"
- "9990:9990"
command: /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 --debug *:8787
```
I have not tested the docker-compose solution, as my solution was on dockerfile.
Upvotes: 2
|
2018/03/16
| 623
| 2,021
|
<issue_start>username_0: I'm new to UFT, and I'm trying to select an element from the auto-suggest list using UFT's Record and Play option.
This is my recorded script for the auto-suggest list:
```
Browser("Log in to iCare").Page("iCare_3").WebEdit("Start typing your medication").Set "AMLODIPINE TAB"
Browser("Log in to iCare").Page("iCare_3").WebList("AMLODIPINE TAB").Click
```
But when I run it, UFT just sets the AMLODIPINE TAB option in the text box and does not select the option from the list, so I'm not redirected to the next modal view.
I have also tried SendKeys, but it is still not working:
```
Set WshShell = CreateObject("WScript.Shell")
WshShell.SendKeys "AMLODIPINE TAB"
wait 3
WshShell.SendKeys "{DOWN}"
wait 3
WshShell.SendKeys "{ENTER}"
wait 3
Set WshShell = Nothing
```
Please see the attached screenshot.
[](https://i.stack.imgur.com/lUabN.png)
Waiting for reply..thank you<issue_comment>username_1: It may be that UFT's `WebEdit.Set` command doesn't simulate the specific events the application is looking for. In such cases using device replay usually solves this problem. See [this answer for more details](https://stackoverflow.com/a/17920339/3848).
**tl;dr;** Try this code:
```vbs
origReplayType = Setting.WebPackage("ReplayType")
Setting.WebPackage("ReplayType") = 2 ' Changes to device mode
Browser("Log in to iCare").Page("iCare_3").WebEdit("Start typing your medication").Set "AMLODIPINE TAB"
Setting.WebPackage("ReplayType") = origReplayType ' return to previous mode
Browser("Log in to iCare").Page("iCare_3").WebList("AMLODIPINE TAB").Click
```
Upvotes: 1 <issue_comment>username_2: You can use
```
Browser("Log in to iCare").Page("iCare_3").WebElement("Html Tag:=li","innertext:=AMLODIPINE TAB").Click
```
instead of
```
Browser("Log in to iCare").Page("iCare_3").WebList("AMLODIPINE TAB").Click
```
This might work, and if not, then try it with
```
Setting.WebPackage("ReplayType") = 2
```
Upvotes: 0
|
2018/03/16
| 656
| 2,487
|
<issue_start>username_0: This may be a silly question but I've searched around and found nothing.
I have a code:
```
class A {}
let className = 'A';
```
I need to check if `className` corresponds to an existing class.
This is all not in the global scope and is running in node.js so I can't use `window`
I know that the following would work:
```
eval(`typeof ${className} === 'function'`);
```
However I am a bit reluctant to use `eval` (and also the linter complains about it)
In addition I need to also instantiate the class as a variable which again I can do with eval like so:
```
let ctor = eval(className);
let object = new ctor();
```
But again this uses eval.
Is there any alternative way to achieve these?<issue_comment>username_1: Use `Function` constructor
```
function ac(){} //new class
var f = new Function( "return typeof ac == 'function'" ) //define function to verify
f(); //returns true if class exists in the current scope
```
Make it more generic
```
function ac(){} //new class
var f = function( className ){
return new Function( "return typeof " + className + " == 'function'" )(); //define function to verify and invoke the same
}
f( "ac" ); //returns true if class exists in the current scope
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: This may indicate a wrong design decision and a possible XY problem. The need for `eval` usually indicates this. It's the developer's responsibility to keep track of the classes in use.
If functions are exported, `module.exports` can be checked. If they are not exported, they likely should be.
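A sketch of the exported case (the `exported` object here stands in for a real module's `module.exports`, so the names are hypothetical):

```javascript
// In a real project this object would come from require('./classes')
const exported = { A: class A {} };

// A name corresponds to an existing exported class iff it maps to a function
function hasClass(name) {
  return typeof exported[name] === 'function';
}

console.log(hasClass('A')); // true
console.log(hasClass('B')); // false
```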
If classes should be tracked across more than a single module, a container can be used, and the classes should be registered explicitly:
```
const globalClassContainer = new Map;
function registerClass(cls) {
if (!globalClassContainer.has(cls.name))
globalClassContainer.set(cls.name, new Set);
globalClassContainer.get(cls.name).add(cls);
}
class Foo {};
registerClass(Foo);
```
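Once classes are registered like this, the lookup-and-instantiate part of the question needs no `eval` at all. A simplified sketch (using a plain name-to-class `Map` rather than the name-to-`Set` container above):

```javascript
const registry = new Map();
const registerClass = cls => registry.set(cls.name, cls);

class A {}
registerClass(A);

const className = 'A';
const ctor = registry.get(className);     // undefined when never registered
console.log(typeof ctor === 'function');  // true
const object = ctor ? new ctor() : null;  // instantiation without eval
console.log(object instanceof A);         // true
```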
Functions are not supposed to be unequivocally identified by their names; names are supposed to be used only for debugging purposes. There can be more than one function with the same `name` (even within the current scope), and there can be functions that don't match their `name`. A function's `name` tends to be safer in Node.js, but there's no guarantee of safety:
```
class Foo {}
const Bar = Foo; // Bar.name === 'Foo';
const bazFactory = () => class {};
const Baz = bazFactory(); // Baz.name === '';
```
Upvotes: 1
|
2018/03/16
| 2,072
| 8,867
|
<issue_start>username_0: I'm working on an app and I have a menu with a NavigationDrawer to navigate between fragments. In one of the fragments I make a call to the backend and then save the results in a list. When I navigate to another fragment and back, the results are gone, but I'd like to save the contents of the list temporarily. **I wanted to use `onSaveInstanceState()`, but that method doesn't seem to get called ever.** I also looked if the data is still in the fields when I return to the fragment, but that also wasn't the case. I think I'm doing something wrong with the FragmentManager, but I'm not sure about it.
This is the method used for the transactions for the Fragments:
```
private void openFragment(Class fragmentClass) {
Fragment fragment;
try {
fragment = (Fragment) fragmentClass.newInstance();
} catch (InstantiationException | IllegalAccessException e) {
e.printStackTrace();
return;
}
contentFrame.removeAllViews();
FragmentManager fragmentManager = getSupportFragmentManager();
fragmentManager.beginTransaction().replace(R.id.contentFrame,fragment).commit();
}
```
I use a switch case to determine the Fragment's class and send that to this method.
I could probably figure out a hacky-snappy way to fix this, but I'd like to fix this without too much hacky-snappy code.
I hope someone has an idea on how to fix this. Thanks in advance.
**EDIT:**
Here is my fragment class:
```
public class LGSFragment extends Fragment {
@BindView(R.id.rvLGS)
RecyclerView rvLGS;
private List lgsList;
private LGSAdapter adapter;
@Override
public View onCreateView(@NonNull LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) {
//I debugged here and all fields were null at this point
View view = inflater.inflate(R.layout.fragment_lgs, container, false);
ButterKnife.bind(this, view);
lgsList = new ArrayList<>();
LinearLayoutManager manager = new LinearLayoutManager(getContext());
rvLGS.setLayoutManager(manager);
adapter = new LGSAdapter(lgsList);
rvLGS.setAdapter(adapter);
getDatabaseLGSs();
return view;
}
/**
* Method to load in the LGSs from the database
*/
private void getDatabaseLGSs() {
String collection = getString(R.string.db_lgs);
FireStoreUtils.getAllDocumentsConverted(collection, LGS.class, new OperationCompletedListener() {
@Override
public void onOperationComplete(Result result, Object... data) {
if (result == Result.SUCCESS) {
lgsList.clear();
List newLGSs = (List) data[0];
List ids = (List) data[1];
int i = 0;
for (LGS lgs : newLGSs) {
lgs.setId(ids.get(i));
lgsList.add(lgs);
i++;
}
adapter.notifyDataSetChanged();
}
}
});
}
@Override
public void onSaveInstanceState(@NonNull Bundle outState) {
super.onSaveInstanceState(outState);
}
}
```<issue_comment>username_1: `onSaveInstanceState` is not called because there is **no reason to** call it: when you navigate between fragments, the older fragments don't get destroyed until the OS needs the space they use (low memory).
First of all, create a back stack to keep the fragments, or just call `addToBackStack` at the end of the `fragmentTransaction`, and then move the list initialization and data request to `onCreate` so they are only called when the fragment is created:
```
lgsList = new ArrayList<>();
getDatabaseLGSs();
```
After that, every time you come back to the fragment, the view is recreated with the available data.
**Update:**
Instead of keeping a reference on your own, you can add the fragment to the `backstack` and then retrieve it using the corresponding `tag`. This lets the fragmentManager manage the caching by itself, and the second time you access a fragment, it doesn't get recreated:
```
@Override
public boolean onNavigationDrawerItemSelected(@NonNull MenuItem item) {
if (item.isChecked())
return true;
item.setChecked(true);
setTitle(item.getTitle());
FragmentManager fragmentManager = getSupportFragmentManager();
FragmentTransaction transaction = fragmentManager.beginTransaction();
Fragment currentlyShown = fragmentManager.findFragmentByTag(currentlyShownTag);
Fragment dest;
switch (item.getItemId()){
case R.id.nav_lgs:
dest = fragmentManager.findFragmentByTag(LGSFragment.class.getName());
if (dest == null) {
Log.d("TRANSACTION", "instanciating new fragment");
dest = new LGSFragment();
currentlyShownTag = LGSFragment.class.getName();
transaction.add(R.id.contentFrame, dest, LGSFragment.class.getName());
}
break;
...
}
if(currentlyShown != null)
transaction.hide(currentlyShown);
transaction.show(dest);
transaction.commit();
drawerLayout.closeDrawers();
return true;
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: **EDIT:**
Although this solution works fine, it uses some bad practices; I recommend using the accepted solution instead.
I've solved the problem with the help of [username_1](https://stackoverflow.com/users/6600000/keivan-esbati) and [denvercoder9](https://stackoverflow.com/users/2235972) (Thanks for that!)
Since I only have 4 fragments, I keep an instance of each of them in the MainActivity, and I also have a variable to track the current Fragment. Every time I open a fragment, I hide the current fragment using the FragmentManager by calling `.hide()` in the transaction. Then, if the Fragment is a new Fragment, I call `.add()` in the transaction; otherwise I call `.show()`.
The code for the `onNavigationItemSelected()` method (which triggers when a user selects an item in the menu):
```
public boolean onNavigationItemSelected(@NonNull MenuItem item) {
if (!item.isChecked()) {
item.setChecked(true);
setTitle(item.getTitle());
switch (item.getItemId()) {
case R.id.nav_lgs: {
if (lgsFragment == null) {
lgsFragment = new LGSFragment();
openFragment(lgsFragment, FragmentTag.LGS.toString());
} else {
openFragment(lgsFragment, "");
}
currentFragmentTag = FragmentTag.LGS;
break;
}
case R.id.nav_users: {
if (userFragment == null) {
userFragment = new UserFragment();
openFragment(userFragment, FragmentTag.USERS.toString());
} else {
openFragment(userFragment, "");
}
currentFragmentTag = FragmentTag.USERS;
break;
}
case R.id.nav_profile: {
if (profileFragment == null) {
profileFragment = new ProfileFragment();
openFragment(profileFragment, FragmentTag.PROFILE.toString());
} else {
openFragment(profileFragment, "");
}
currentFragmentTag = FragmentTag.PROFILE;
break;
}
case R.id.nav_my_lgs: {
if (myLGSFragment == null) {
myLGSFragment = new MyLGSFragment();
openFragment(myLGSFragment, FragmentTag.MY_LGS.toString());
} else {
openFragment(myLGSFragment, "");
}
currentFragmentTag = FragmentTag.MY_LGS;
break;
}
default: {
if (lgsFragment == null) {
lgsFragment = new LGSFragment();
openFragment(lgsFragment, FragmentTag.LGS.toString());
} else {
openFragment(lgsFragment, "");
}
currentFragmentTag = FragmentTag.LGS;
break;
}
}
}
drawerLayout.closeDrawers();
return true;
}
```
The `openFragment()` method used above:
```
private void openFragment(Fragment fragment, String tag) {
FragmentManager fragmentManager = getSupportFragmentManager();
if (currentFragmentTag != FragmentTag.NO_FRAGMENT) {
fragmentManager.beginTransaction().hide(fragmentManager.findFragmentByTag(currentFragmentTag.toString())).commit();
}
if (!tag.equals("")) {
fragmentManager.beginTransaction().add(R.id.contentFrame,fragment,tag).commit();
} else {
fragmentManager.beginTransaction().show(fragment).commit();
}
}
```
Set up in `onCreate()`:
```
currentFragmentTag = FragmentTag.NO_FRAGMENT;
if (lgsFragment == null) {
lgsFragment = new LGSFragment();
openFragment(lgsFragment, FragmentTag.LGS.toString());
} else {
openFragment(lgsFragment, "");
}
currentFragmentTag = FragmentTag.LGS;
```
Upvotes: 2
|
2018/03/16
| 939
| 2,680
|
<issue_start>username_0: I'm trying to show the number of pages on PDF file.
So in the header I put this css:
```
.page-number:after {
counter-increment: pages;
content: counter(page) " of " counter(pages);
}
```
Html:
```
Page
```
But it returns me `Page 1 of 1` ... `Page 2 of 2`.
The first counter works fine but the total is wrong.
How can I solve that?<issue_comment>username_1: ```css
.page{
counter-reset: page;
}
.page .page-number{
display:block;
}
.page .page-number:after{
counter-increment: page;
content:counter(page);
}
.page:after{
display: block;
content: "Number of pages: " counter(page);
}
```
```html
Page
Page
Page
Page
```
Upvotes: -1 <issue_comment>username_2: There is no way to get a counter total with CSS counters so the only way I can think of getting the output you require is to duplicate the HTML (which may not be a big problem if the content is dynamically generated). Output the HTML once to get the total number of pages then again to get the current page.
```css
#pageCounter {
counter-reset: pageTotal;
}
#pageCounter span {
counter-increment: pageTotal;
}
#pageNumbers {
counter-reset: currentPage;
}
#pageNumbers div:before {
counter-increment: currentPage;
content: "Page " counter(currentPage) " of ";
}
#pageNumbers div:after {
content: counter(pageTotal);
}
```
```html
```
Upvotes: 3 <issue_comment>username_3: It is possible in the Edge (Chromium) browser. I am using only `ul` and `li` for this.
I have done:
```
*
*
*
```
with css
```
@page {
size: A4;
margin: 0;
padding: 0;
}
footerfix
{
width: 210mm;
height: 40mm;
bottom: 0;
z-index: 100;
}
html, body {
width: 210mm;
height: 297mm;
margin: 0;
padding: 0;
}
.footer_wrapper {
position: fixed;
width: 210mm;
height: 297mm;
margin: 0;
padding: 0;
left: 0mm;
background-color: transparent;
}
footer {
width: 210mm;
height: 297mm;
margin: 0;
padding: 0;
position: relative;
white-space: nowrap;
text-align: center;
z-index: 3000;
page-break-after: always;
background-color: transparent;
.page_001 {
top: 0mm;
z-index: 3001;
}
.page_002 {
top: 297mm;
z-index: 3002;
}
.page_003 {
top: calc(2*297mm);
z-index: 3003;
}
ul {
list-style-type: none;
padding: 0;
margin: 0;
counter-reset: seite seiten;
}
li::before {
counter-increment: seite seiten;
content: "page "counter(seite) " / ";
}
li::after {
content: counter(seiten);
}
.page {
background-color: white; /*otherwise page numbers on top of each other*/
}
}
```
Upvotes: 0
|
2018/03/16
| 624
| 2,213
|
<issue_start>username_0: I'm new to nginx lua, and got a setup from previous developer. Trying to go through the docs to understand the scope but I'm pretty unsure.
It's like this right now
```
init_by_lua_block {
my_module = require 'my_module'
my_module.load_data()
}
location / {
content_by_lua_block {
my_module.use_data()
}
}
```
And in `my_module`:
```
local _M = {}
local content = {}
function _M.use_data()
-- access content variable
end
function _M.load_data()
-- code to load json data into content variable
end
return _M
```
So my understanding is that `content` is a local variable, so its lifetime is within each request. However, it's being initialized in `init_by_lua_block` and is being used by other local functions, which confuses me. Is this good practice? And what's the actual lifetime of this `content` variable?
Thanks a lot for reading.<issue_comment>username_1: `init_by_lua[_block]` runs during the nginx config-loading phase, before the worker processes are forked,
so the `content` variable is effectively global: it is the same in every request handled by that worker.
<https://github.com/openresty/lua-nginx-module/#init_by_lua>
Upvotes: 0 <issue_comment>username_2: Found this: <https://github.com/openresty/lua-nginx-module#data-sharing-within-an-nginx-worker>
To globally share data among all the requests handled by the same nginx worker process, encapsulate the shared data into a Lua module, use the Lua require builtin to import the module, and then manipulate the shared data in Lua. This works because required Lua modules are loaded only once and all coroutines will share the same copy of the module (both its code and data). Note however that Lua global variables (note, not module-level variables) WILL NOT persist between requests because of the one-coroutine-per-request isolation design.
Here is a complete small example:
```
-- mydata.lua
local _M = {}
local data = {
dog = 3,
cat = 4,
pig = 5,
}
function _M.get_age(name)
return data[name]
end
return _M
```
and then accessing it from nginx.conf:
```
location /lua {
content_by_lua_block {
local mydata = require "mydata"
ngx.say(mydata.get_age("dog"))
}
}
```
Upvotes: 3
|
2018/03/16
| 539
| 1,276
|
<issue_start>username_0: I have a base list `[1,4,10]` which needs to be converted to a list having consecutive elements of each element in the base list in an efficient way
Examples:
* If I need 2 consecutive numbers then `[1,4,10]` will be `[1,2,4,5,10,11]`.
* If 3 consecutive numbers then `[1,4,10]` will be `[1,2,3,4,5,6,10,11,12]`.<issue_comment>username_1: Here is one way. [`itertools.chain`](https://docs.python.org/3/library/itertools.html#itertools.chain.from_iterable) removes the need for explicit nested loops.
```
from itertools import chain
def consecutiver(lst, n=3):
return list(chain.from_iterable(range(i, i+n) for i in lst))
res = consecutiver([1, 4, 10], 2)
# [1, 2, 4, 5, 10, 11]
res2 = consecutiver([1, 4, 10], 3)
# [1, 2, 3, 4, 5, 6, 10, 11, 12]
```
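A lazy variant of the same idea that returns an iterator instead of a list (the function name here is mine, not from the question):

```python
from itertools import chain

def consecutive(lst, n=3):
    # Lazily yield n consecutive values starting at each element of lst
    return chain.from_iterable(range(i, i + n) for i in lst)

assert list(consecutive([1, 4, 10], 2)) == [1, 2, 4, 5, 10, 11]
assert list(consecutive([1, 4, 10], 3)) == [1, 2, 3, 4, 5, 6, 10, 11, 12]
```

This avoids building the intermediate list when you only need to iterate once.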
Upvotes: 0 <issue_comment>username_2: Here's a one liner, assuming the list is x and the number of 'consecutives' is c:
```
# Python 2 as written; in Python 3, import reduce and materialize each range:
from functools import reduce
reduce(lambda a, b: a + b, map(lambda i: list(range(i, i + c)), x))
```
Upvotes: 0 <issue_comment>username_3: ```
a = [1, 4, 10]
k = 3  # number of consecutive values
x = [range(b, b + k) for b in a]
output = [m for d in x for m in d]  # [1, 2, 3, 4, 5, 6, 10, 11, 12]
```
Upvotes: 0 <issue_comment>username_4: ```
arr=[1,4,10]
con=3
[r + i for r in arr for i in range(con)]
# [1, 2, 3, 4, 5, 6, 10, 11, 12]
```
Upvotes: 1
|
2018/03/16
| 385
| 1,551
|
<issue_start>username_0: I have a `Model` and a `Property` class with the following signatures:
```
public class Property {
public String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
public class Model {
private List<Property> properties = new ArrayList<>();
public List<Property> getProperties() {
return properties;
}
}
```
I want a `Map<String, Set<Model>>` from a `List<Model>` where the key would be the name from the `Property` class. How can I use Java 8 streams to group that list by its `Property` names? All `Property`es are unique by name.
Is it possible to solve this in a single stream, or should I split it somehow or go for the classical solution?
yourModels.stream()
.flatMap(model -> model.getProperties().stream()
.map(property -> new AbstractMap.SimpleEntry<>(model, property.getName())))
.collect(Collectors.groupingBy(
Entry::getValue,
Collectors.mapping(
Entry::getKey,
Collectors.toSet())));
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Why not use `forEach` ?
Here is concise solution using `forEach`
```
Map<String, Set<Model>> resultMap = new HashMap<>();
listOfModels.forEach(currentModel ->
    currentModel.getProperties().forEach(prop -> {
        Set<Model> setOfModels = resultMap.getOrDefault(prop.getName(), new HashSet<>());
        setOfModels.add(currentModel);
        resultMap.put(prop.getName(), setOfModels);
    })
);
```
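A variant of the same loop using `Map.computeIfAbsent`, a common Java 8 idiom; the `Model` and `Property` classes below are minimal stand-ins for the ones in the question:

```java
import java.util.*;

public class GroupByPropertyName {
    static class Property {
        private final String name;
        Property(String name) { this.name = name; }
        String getName() { return name; }
    }

    static class Model {
        private final List<Property> properties = new ArrayList<>();
        Model(String... names) {
            for (String n : names) properties.add(new Property(n));
        }
        List<Property> getProperties() { return properties; }
    }

    static Map<String, Set<Model>> byPropertyName(List<Model> models) {
        Map<String, Set<Model>> result = new HashMap<>();
        for (Model m : models) {
            for (Property p : m.getProperties()) {
                // computeIfAbsent creates the set on first use, then the model is added
                result.computeIfAbsent(p.getName(), k -> new HashSet<>()).add(m);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Set<Model>> map =
                byPropertyName(Arrays.asList(new Model("x", "y"), new Model("y")));
        if (map.get("x").size() != 1 || map.get("y").size() != 2) {
            throw new AssertionError("unexpected grouping");
        }
        System.out.println("ok");
    }
}
```

`computeIfAbsent` replaces the `getOrDefault`/`put` pair with a single call.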
Upvotes: 2
|
2018/03/16
| 381
| 1,595
|
<issue_start>username_0: I would like to know how applications such as WhatsApp are still able to navigate the user to the right message screen even when the app is closed/terminated.
Does this imply that WhatsApp is not at all terminated but stays running in the background?
I need to implement the same feature in a react-native application. The push notification is working but when my app is terminated, it only opens the app to the main screen upon receipt of a push notification.
I would like to know:
1) Is there a strategy to actually make it work like WhatsApp, receiving push notifications even when the app is terminated?
2) Other technologies - e.g. OneSignal or Pusher - that offers this.
Please note that I am from an iOS background.
Thanks,
Avinash
|
2018/03/16
| 977
| 3,941
|
<issue_start>username_0: I have a simple implementation of Room in my app. It has two entities.
It works fine while app is running. But I see no data once app is restarted.
This is my database creation code -
```
@Database(entities = {Person.class, College.class}, version = 1, exportSchema = true)
public abstract class AppDatabase extends RoomDatabase {
private static AppDatabase INSTANCE;
public abstract PersonDao personDao();
public abstract CollegeDao collgeDao();
public static AppDatabase getInstance(Context context) {
if (INSTANCE == null) {
INSTANCE = Room.databaseBuilder(context.getApplicationContext(), AppDatabase.class, "my-sample-app.db")
.build();
}
return INSTANCE;
}
public static void destroyInstance() {
if (INSTANCE != null) INSTANCE.close();
INSTANCE = null;
}
}
```
In any running session, I can add and view data but nothing is available after app restart. What am I missing here?
Thanks<issue_comment>username_1: I didn't check my code, but maybe it will work for you. Try this:
```
@Database(entities = {Person.class, College.class}, version = 1)
public abstract class AppDatabase extends RoomDatabase {
private static AppDatabase INSTANCE;
public abstract PersonDao personDao();
public abstract CollegeDao collgeDao();
public static AppDatabase getInstance(final Context context) {
if (INSTANCE == null) {
synchronized (AppDatabase.class) {
INSTANCE = Room.databaseBuilder(context.getApplicationContext(), AppDatabase.class, "my_sample_app")
.build();
}
}
return INSTANCE;
}
public static void destroyInstance() {
if (INSTANCE != null) INSTANCE.close();
INSTANCE = null;
}
}
```
Upvotes: 0 <issue_comment>username_2: Use the code below and I think your issue will be solved.
First, make the database class:
```
@Database(entities = {MyTable.class}, version = 1)
public abstract class AppDatabase extends RoomDatabase {
    public abstract MyTableDao getTableDao();
}
```
Then make the DAO interface:
```
@Dao
public interface MyTableDao {
    @Insert
    void insertData(MyTable myTable);
}
```
And make your POJO (entity) class:
```
@Entity
public class MyTable {
    @PrimaryKey(autoGenerate = true)
    private int id;

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    private double ItemPrice;

    public double getItemPrice() {
        return ItemPrice;
    }

    public void setItemPrice(double itemPrice) {
        ItemPrice = itemPrice;
    }
}
```
Then make an Application class that builds the database once, at application level:
```
public class AppActivity extends Application {

    static AppDatabase db;

    @Override
    public void onCreate() {
        super.onCreate();
        db = Room.databaseBuilder(getApplicationContext(), AppDatabase.class, "database-name").build();
    }

    public static AppDatabase getDatabase() {
        return db;
    }
}
```
Declare this class in the manifest file's application tag:
```
android:name=".AppActivity"
```
Then insert from your main activity like this:
```
AppActivity.getDatabase().getTableDao().insertData(yourObject);
```
Upvotes: 0 <issue_comment>username_3: It was a silly mistake:
I wasn't using the ***getInstance()*** method posted in the question; I was using the code below to create the database instead -
```
mDb = Room.inMemoryDatabaseBuilder(mContext, AppDatabase.class)
```
My code works as intended with the original code posted in the question.
Thanks
Upvotes: 2 [selected_answer]<issue_comment>username_4: The reason you had an empty database every time you restarted the app is that your database class used `Room.inMemoryDatabaseBuilder(...)`, which is intended only for testing; instead you should use `Room.databaseBuilder(...)`.
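A minimal sketch of the corrected call, reusing the class and database name from the question (the on-disk database survives app restarts):

```
// Persistent on-disk database instead of the in-memory one
INSTANCE = Room.databaseBuilder(
        context.getApplicationContext(),
        AppDatabase.class,
        "my-sample-app.db")
    .build();
```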
Upvotes: 2
|
2018/03/16
| 658
| 2,431
|
<issue_start>username_0: I'm studying Java, I want to know what to do.
How do I fix it? What do I need to study?
I do not understand how to use methods. Two types:
<issue_comment>username_1: Try this instead:
`System.out.println(setName("tine") + setLastName("linux"));`
Upvotes: -1 <issue_comment>username_2: You should really learn how functions work in general. Also how OOP (Object Oriented Programming) works. [Wikipedia on OOP](https://en.wikipedia.org/wiki/Object-oriented_programming)
Upvotes: 0 <issue_comment>username_3: In this example your mistake is calling both methods at the same time.
```
System.out.print(setName("tine") + " " + setLastname("Linux"));
```
As a future reference I don't think you should use set functions to return value. Those should be void and then you can use get methods to return.
```
public static String getName(){
return name;
}
```
Upvotes: 1 <issue_comment>username_4: There is so much wrong here.
* Setter methods, as `setName` and `setLastname` should not just return the parameter. Instead, as the name suggets, set something.
* Your setters methods are static, but your members `name` and `lastName` are not. This is not possible, your setter methods should not be static.
* At `System.out.println("tine",setLastName("linux"));` you call the `setName` method with two parameters (a String "tine" and the result of `setLastName("linux")` which is just "linux"), but it expects only one parameter.
* Your ClassB class should have getters, if it has setters.
* Don't name your method `getPrint` if it doesn't get something.
Your code should look like this:
```
public class ClassA extends ClassB
{
public void print()
{
setName("tine");
setLastname("linux");
System.out.println(getName() + " " + getLastname());
}
}
```
```
public class ClassB
{
String name;
String lastname;
public String getName()
{
return name;
}
public void setName(String name)
{
this.name = name;
}
public String getLastname()
{
return lastname;
}
public void setLastname(String lastname)
{
this.lastname = lastname;
}
}
```
Now you can call `new ClassA().print();` somewhere in your code and it will print: `tine linux`.
Upvotes: 1
|
2018/03/16
| 894
| 2,957
|
<issue_start>username_0: I want to crop the phone screen shown in the image. I tried this code but the result is not what I wanted.
**[Phone Screen](https://i.stack.imgur.com/cTE70.jpg)**

```
import cv2
import numpy as np
img = cv2.imread('phone_test.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
_,thresh = cv2.threshold(gray,1,255,cv2.THRESH_BINARY)
_, contours, _= cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
x,y,w,h = cv2.boundingRect(contours[0])
crop = img[y:y+h,x:x+w]
cv2.imwrite('phone_test_crop.jpg',crop)
cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.imshow('image',img) #show the image
cv2.waitKey(0)
```
The result would be just this
**[Crop Result](https://i.stack.imgur.com/S2V3P.jpg)**

any solutions?
|
2018/03/16
| 460
| 1,941
|
<issue_start>username_0: I have an Azure App Service.
The App Service relies on databases held on my Azure SQL Server.
I am attempting to disable 'Allow access to Azure Services' on the Azure SQL server to prevent malicious users in other Azure subscriptions connecting to my database, however when I do this the App Service can no longer connect to the Azure SQL server / database.
I have attempted to connect the app service to a VNET that has been granted access to the SQL Server however still cannot get a connection.
Point to Site and service endpoints have been enabled.
Is it possible to have App Services talk to Azure SQL when 'Allow access to Azure Services' is disabled? Or would I have to host my App Service in an App Service Environment? I've seen mention of configuring a proxy server, but don't know how to set that up.
Thanks
David<issue_comment>username_1: If you are looking for more security, consider using virtual network rules. This way you can remove 'Allow access to Azure Services' from your Azure SQL servers and replace it with a VNet firewall rule. However, this affects some Azure SQL Database features such as the Query Editor, the Import/Export service, table auditing and SQL Data Sync. For more information, please read [this](https://learn.microsoft.com/en-us/azure/sql-database/sql-database-vnet-service-endpoint-rule-overview) documentation.
Upvotes: 0 <issue_comment>username_2: You can lock it down just to outbound IP addresses used by your web app. The IP addresses are listed in the portal under "Properties" if you navigate to your app. An example is shown at <https://blogs.msdn.microsoft.com/waws/2017/02/01/how-do-i-determine-the-outbound-ip-addresses-of-my-azure-app-service/> .
This solution is not perfect though as outbound IP addresses are shared with some other apps hosted in the same region by App Service, but it still greatly reduces the space of IPs which can access your database.
Upvotes: 2
|
2018/03/16
| 1,106
| 4,141
|
<issue_start>username_0: I have a file with some package-level functions in Kotlin.
```
//Logger.kt
fun info(tag : String, message : String){
...
}
fun error{....}
```
I'm testing functions of a class that call functions of this Kotlin file, and I would like to mock them. I know that package-level functions are like static methods in Java, so I've been thinking of using PowerMock.
```
//MyClass: Class that calls Logger.kt functions
class MyClass {
fun myFunction(){
info("TAG", "Hello world!")
}
}
```
Any ideas?<issue_comment>username_1: You can use PowerMock for this. As you already pointed out, Kotlin generates a static Java class for your top-level functions in the file `Logger.kt`, named `LoggerKt.java`. You can change it by annotating the file with `@file:JvmName("...")`, if you like. Therefore you can do it like this:
```
@RunWith(PowerMockRunner.class)
@PrepareForTest(LoggerKt.class)
public class MyClassTest {
@Test
public void testDoIt() {
PowerMockito.mockStatic(LoggerKt.class);
MyClass sut = new MyClass();
sut.myFunction(); //the call to info(...) is mocked.
}
}
```
I tried to make it work in Kotlin, but I didn't find a way to make the Java class generated from Kotlin (`LoggerKt`) available as a class literal for the `@PrepareForTest` annotation, although it is possible [to reference the generated Java class](https://stackoverflow.com/questions/34951894/is-there-a-way-to-reference-the-java-class-for-a-kotlin-top-level-function) in Kotlin.
Upvotes: 2 <issue_comment>username_2: There is a workaround you can use to mock top-level function from Kotlin.
**Explanation**
The `@PrepareForTest` annotation has 2 params to provide the context (classes) where you want to mock something or where your mocked stuff is going to be used.
The first param is `value` of type `Class[]`: here you can provide an array of classes. E.g.:
```
@PrepareForTest(Class1::class, Class2::class, Class3::class, QueryFactory::class)
```
The second param `fullyQualifiedNames` of type `String[]`: here you can provide an array with the fully qualified name of the classes. E.g:
```
@PrepareForTest(Class1::class, fullyQualifiedNames = arrayOf("x.y.z.ClassName"))
```
Let's say that we have a Kotlin file named "MyUtils.kt" which contains only top-level functions. As you know, you cannot reference the MyUtilsKt class from Kotlin files, but from Java you can. It means that there is a static class generated (I do not have enough knowledge yet to provide more details about this) and it has a fully qualified name.
**Solution**
This solution is not perfect. I implemented it on our code-base and it seems to work. For sure it can be improved.
1. I created a Kotlin file called `TopLevelFunctionClass.kt` where I added the fully qualified names of the "classes" which contain only top-level functions.
`internal const val MyUtilsKt = "com.x.y.z.MyUtilsKt"`
Unfortunately, I had to hardcode the name since an annotation argument must be a compile-time constant.
2. I updated the `@PrepareForTest` annotation of my test class as following:
```
@RunWith(PowerMockRunner::class)
@PrepareForTest(Class1::class, Class2::class, Class4::class,
fullyQualifiedNames = [MyUtilsKt]) // the string constant declared in TopLevelFunctionClass.kt
```
3. I updated the test method as following:
Top-level function in `MyUtils.kt`:
```
internal fun testMock(): Int {
return 4
}
```
The test method:
```
@Test
fun myTestMethod() {
...
mockStatic(Class.forName(MyUtilsKt)) // the string constant declared in TopLevelFunctionClass.kt
`when`(testMock()).thenReturn(10)
assertEquals(10, testMock()) // the test will successfully pass.
}
```
Side effect: in case you rename the Kotlin file which contains the top-level functions, you also have to change the constant defined in `TopLevelFunctionClass.kt`. A possible solution to avoid the renaming problem is to add `@file:JvmName("The name you want for this file")`. In case you have 2 files with the same name you'll get a duplicate JVM class name error.
Upvotes: 3
|
2018/03/16
| 1,128
| 4,277
|
<issue_start>username_0: The problem i'm facing is that i have a list of let's say 5 jobs. Each job had it's own page and at the bottom of that page i have a button to apply for this job.
Now when you click the button a webform opens on a new page that you need to fill in in order to apply for that job.
The problem here is that when we receive mails, we don't know for what job they are applying.
A possible sollution would be to just pass the node title to a field in the webform or just in the mail they receive. But i can't seem to get this working.
I'm using the webforms module and the site is made in Drupal 8. suggestions are welcome in order to solve this issue.<issue_comment>username_1: You can use PowerMock for this. As you already pointed out, Kotlin generates a static Java class for your top level functions in the file `Logger.kt`, named `LoggerKt.java`. You can change it by annotating the file with `@file:JvmName(“...“)`, if you like. Therefore you can do it like this:
```
@RunWith(PowerMockRunner.class)
@PrepareForTest(LoggerKt.class)
public class MyClassTest {
@Test
public void testDoIt() {
PowerMockito.mockStatic(LoggerKt.class);
MyClass sut = new MyClass();
sut.myFunction(); //the call to info(...) is mocked.
}
}
```
I tried to make it work in Kotlin, but I didn't find a way to make the from Kotlin generated `Logger` Java class available as a class literal to be able to use it for the `@PrepareForTest` annotation. Although it is possible [to reference the generated Java class](https://stackoverflow.com/questions/34951894/is-there-a-way-to-reference-the-java-class-for-a-kotlin-top-level-function) in Kotlin.
Upvotes: 2 <issue_comment>username_2: There is a workaround you can use to mock top-level function from Kotlin.
**Explanation**
`@PrepareForTest` annotation does have 2 params to provide the context (classes) where you want to mock something or where you mocking stuff in going to be used.
The first param is `value` if type `Class []`: here you can provide an array of classes. E.g:
```
@PrepareForTest(Class1::class, Class2::class, Class3::class, QueryFactory::class)
```
The second param `fullyQualifiedNames` of type `String[]`: here you can provide an array with the fully qualified name of the classes. E.g:
```
@PrepareForTest(Class1::class, fullyQualifiedNames = arrayOf("x.y.z.ClassName"))
```
Let's say that we have a Kotlin file named "MyUtils.kt" which contains only top-level functions. As you know you cannot reference the MyUtilsKt class from Kotlin files, but from Java you can. It means that there is static class generated (I do not have enough knowledge yet to provide you with more details about this) and it has a fully qualified name.
**Solution**
This solution is not perfect. I implemented it on our code-base and it seems to work. For sure it can be improved.
1. I created a Kotlin file called `TopLevelFunctionClass.kt` where I added the fully qualified names of the "classes" which contain only top-level functions.
`internal const val MyUtilsKt = "com.x.y.z.MyUtilsKt"`
Unfortunately, I had to hardcode the name since an annotation argument must be a compile-time constant.
2. I updated the `@PrepareForTest` annotation of my test class as following:
```
@RunWith(PowerMockRunner::class)
@PrepareForTest(Class1::class, Class2::class, Class4::class,
fullyQualifiedNames = [MyUtilsKt]) // the string constant declared in TopLevelFunctionClass.kt
```
3. I updated the test method as following:
Top-level function in `MyUtils.kt`:
```
internal fun testMock(): Int {
return 4
}
```
The test method:
```
@Test
fun myTestMethod() {
...
mockStatic(Class.forName(MyUtilsKt)) // the string constant declared in TopLevelFunctionClass.kt
`when`(testMock()).thenReturn(10)
assertEquals(10, testMock()) // the test will successfully pass.
}
```
Side effect: In case you rename the kotlin file which contains the top-level functions you have to change also the constant defined in `TopLevelFunctionClass.kt`. A possible solution to avoid the renaming problem is to add: `@file:JvmName("The name you want for this file")`. In case you'll have 2 files with the same name you'll get a duplication JVM class name error.
Upvotes: 3
|
2018/03/16
| 709
| 3,057
|
<issue_start>username_0: I'm currently using the Cucumber framework with Selenium Java. I wish to upgrade my normal Cucumber reports to Extent Reports.
I tried Extent Reports 3.0.2. I was able to generate Extent reports, but I was not able to embed screenshots of failed test cases into the report.
Can anyone please tell me whether it is possible to embed screenshots of failed cases in a Cucumber Selenium Java framework? If yes, please share any related link or code.
I would also like to know whether there are other custom reports that are better than Extent Reports and easier to configure.
Also, please help me understand which sort of parallel execution is faster:
1. Cucumber JVM parallel plugin
2. Maven Surefire plugin
3. TestNG parallel classes
It would also be a great help if anyone could share how to configure parallel execution.
Thanks for the help in advance.
Have a nice day<issue_comment>username_1: Use something similar to this in order to have Extent Reports take a screenshot whenever a failure occurs. Good luck.
```
public void onTestFailure(ITestResult testResult) {
System.out.println("Method " + getTestMethodName(testResult) + " failed");
watch.reset();
try {
String path = TakeScreenshot.takeScreenshot(eDriver, testResult.getMethod().getMethodName());
test.addScreenCaptureFromPath(path).fail("**"
+ testResult.getMethod().getMethodName().replaceAll("(\\p{Ll})(\\p{Lu})", "$1 $2").toUpperCase()
+ "**" + " failed\n" + testResult.getThrowable());
} catch (IOException e) {
e.printStackTrace();
}
}
```
Upvotes: 1 <issue_comment>username_2: I use the following NPM module which I have to manually run after each test, because I too am running in java...unless there is java code that can run a node module programmatically, that I don't know about -
[cucumber-html-reporter](https://www.npmjs.com/package/cucumber-html-reporter)
Upvotes: 0 <issue_comment>username_3: If you can add the plugin in your runner class
```
plugin = { "com.cucumber.listener.ExtentCucumberFormatter:Report/Report.html"},
```
Try this one to avoid errors while taking the screenshot. It adds screenshots of failed test cases to the Cucumber Extent report.
```
public void forAfterScen(Scenario scenario) throws IOException
{
if (scenario.isFailed()) {
String screenshotName = scenario.getName().replaceAll(" ", "_");
try {
TakesScreenshot ts = (TakesScreenshot) driver;
File sourcePath = ts.getScreenshotAs(OutputType.FILE);
File destinationPath = new File(System.getProperty("user.dir") + "\\Report\\" + screenshotName+".png");
Files.copy(sourcePath, destinationPath);
Reporter.addScreenCaptureFromPath(destinationPath.toString());
Reporter.addScenarioLog(screenshotName);
} catch (IOException e) {
}
}
}
```
Upvotes: 1
|
2018/03/16
| 1,698
| 7,090
|
<issue_start>username_0: In ASP.NET core 2.1, I cannot access session variables.
While debugging I have noticed that in every request the session ID changes
(HttpContex.Session.Id)
Did I make mistake in session configuration?
Startup.cs
```
public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.Configure<CookiePolicyOptions>(options =>
{
// This lambda determines whether user consent for non-essential cookies is needed for a given request.
options.CheckConsentNeeded = context => true;
options.MinimumSameSitePolicy = SameSiteMode.None;
});
services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
// Adds a default in-memory implementation of IDistributedCache.
services.AddDistributedMemoryCache();
services.AddSession(options =>
{
// Set a short timeout for easy testing.
options.IdleTimeout = TimeSpan.FromSeconds(1000);
options.Cookie.HttpOnly = true;
});
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
app.UseSession();
if (env.IsDevelopment())
{
app.UseBrowserLink();
app.UseDeveloperExceptionPage();
}
else
{
app.UseExceptionHandler("/Home/Error");
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseCookiePolicy();
app.UseMvc(routes =>
{
routes.MapRoute(
name: "default",
template: "{controller=Home}/{action=Index}/{id?}");
});
}
}
```
Program.cs
```
public class Program
{
public static void Main(string[] args)
{
CreateWebHostBuilder(args).Build().Run();
}
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args)
.UseStartup<Startup>();
}
```
While debugging I have noticed that the session ID changes on every request
(`HttpContext.Session.Id`)
```
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using ucms6.Models;
namespace ucms6.Controllers
{
public class HomeController : Controller
{
const string SessionKeyName = "_Name";
const string SessionKeyYearsMember = "_YearsMember";
const string SessionKeyDate = "_Date";
public IActionResult Index()
{
// Requires using Microsoft.AspNetCore.Http;
HttpContext.Session.SetString(SessionKeyName, "Rick");
HttpContext.Session.SetInt32(SessionKeyYearsMember, 3);
return RedirectToAction("SessionNameYears");
// return View();
}
public IActionResult SessionNameYears()
{
var name = HttpContext.Session.GetString(SessionKeyName);
var yearsMember = HttpContext.Session.GetInt32(SessionKeyYearsMember);
return Content($"Name: \"{name}\", Membership years: \"{yearsMember}\"");
}
public IActionResult About()
{
ViewData["Message"] = "Your application description page.";
return View();
}
```<issue_comment>username_1: The default distributed cache store in ASP.NET Core is in-memory. Since sessions use distributed cache, that means your session store is also in-memory. Things stored in memory are process-bound, so if the process terminates, everything stored in memory goes along with it. Finally, when you stop debugging, the application process is terminated. That then means every time you start and stop debugging, you have an entirely new session store.
There's a couple of routes you can take. First, if you just want to run the site, without debugging it, you can use `CTRL`+`F5`. This will kick off IIS Express and load your web app, without starting all the debugging machinery along with it. You can then proceed to make as many requests as you like, and it will all hit the same process (meaning your session store will be intact). This is great for doing the frontend side of development, as you can modify your Razor views, CSS, JS, etc. and see those changes without have to stop and start debugging again. However, if you make any C# code changes (class, controller, etc.), Visual Studio will kick off a build, which will terminate the application and then restart it. Your site keeps running as if nothing happened, but anything stored in-memory, including your sessions will be gone. It's at least better than debugging constantly, though.
Second, you can simply use a persistent store in development, as well (you should *already* be setup to use a persistent store in production, so fix that ASAP, if not). You can use something like SQL Server or Redis in development, just like you would in production. The SQL store can be added to your existing development database, so you don't actually need to install SQL Server. You can also install a local copy of Redis and just run it off of localhost, if you prefer that route. With either approach, your distributed cache, and your sessions along with it, will be stored in something external to the application, so starting and stopping your application will have no effect on what's stored there.
Upvotes: 2 <issue_comment>username_2: In the `ConfigureServices()` method of the `Startup` class, set `options.CheckConsentNeeded = context => false;` as follows:
```
services.Configure<CookiePolicyOptions>(options =>
{
// This lambda determines whether user consent for non-essential cookies is needed for a given request.
options.CheckConsentNeeded = context => false; // Default is true, make it false
options.MinimumSameSitePolicy = SameSiteMode.None;
});
```
Upvotes: 5 <issue_comment>username_3: The solution is to mark the session cookie as essential.
```
public void ConfigureServices(IServiceCollection services)
{
//...
services.AddSession(opt =>
{
opt.Cookie.IsEssential = true;
});
//...
}
```
The documentation about the flag states:
>
> Indicates if this cookie is essential for the application to function correctly. If true then consent policy checks may be bypassed. The default value is false.
>
>
>
This will keep the cookie policy options intact and the session is still working as expected because `CookiePolicyOptions.CheckConsentNeeded` only affects non-essential cookies.
Upvotes: 4 <issue_comment>username_4: I spent an hour trying to solve this, so posting another solution just in case.
Make sure you increase the default timeout which is 10 seconds. When a user is filling a long form that takes a lot of time - it might reset itself if it takes longer than 10 seconds.
```
services.AddSession(options =>
{
options.Cookie.IsEssential = true;
options.IdleTimeout = TimeSpan.FromMinutes(10);
});
```
And by "default" I mean that every .NET programmer who uses session probably copy-pastes code from [this page](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/app-state?view=aspnetcore-5.0) and it's set 10 seconds. That's what I did anyway. ;)
Upvotes: 0
|
2018/03/16
| 1,070
| 4,469
|
<issue_start>username_0: **I'm making a chat system like Facebook or Twitter**.
Everything is okay but I don't know how to select the last message that the user has sent to or received from each other user. I mean, like when you open the messages in Facebook, you can see the last message of each conversation, whether you've seen it or not.
There is img to explain more below.
I have a table named message having columns like below
`(id,userTo,userFrom,message,date)`
**for example :-**
[](https://i.stack.imgur.com/u28zj.png)
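The question above asks for the newest message per conversation partner — a classic greatest-n-per-group query. Here is a sketch of one common approach, using SQLite so the example is self-contained; the table and column names follow the question, while the sample rows and `me = 1` are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE message (
    id INTEGER PRIMARY KEY,
    userTo INTEGER,
    userFrom INTEGER,
    message TEXT,
    date TEXT
);
INSERT INTO message (userTo, userFrom, message, date) VALUES
 (2, 1, 'hi',         '2018-03-16 10:00'),
 (1, 2, 'hello',      '2018-03-16 10:01'),
 (3, 1, 'yo',         '2018-03-16 10:02'),
 (1, 3, 'latest msg', '2018-03-16 10:03');
""")

me = 1  # the logged-in user

# For every conversation partner, keep only the row with the highest id,
# i.e. the most recent message sent or received.
rows = conn.execute("""
    SELECT m.userTo, m.userFrom, m.message
    FROM message m
    JOIN (
        SELECT MIN(userTo, userFrom) AS a,   -- normalize the pair so that
               MAX(userTo, userFrom) AS b,   -- (1,2) and (2,1) group together
               MAX(id) AS last_id            -- newest message of the pair
        FROM message
        WHERE userTo = ? OR userFrom = ?
        GROUP BY a, b
    ) last ON m.id = last.last_id
    ORDER BY m.id DESC
""", (me, me)).fetchall()

print(rows)  # → [(1, 3, 'latest msg'), (1, 2, 'hello')]
```

The question does not say which database is used; in MySQL the two-argument scalar `MIN`/`MAX` calls would be written as `LEAST`/`GREATEST`, but the grouping idea is the same.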
|
2018/03/16
| 1,199
| 4,989
|
<issue_start>username_0: I have a cucumber+Java project where everything works perfectly fine if I use the JUnit Runner to execute cucumber scenarios written in a Feature file, but a problem arises when I try to use the build.gradle file to run them.
```
@Scenario1
Given I have URL
When When I login
Then I can see Homescreen
@Scenario2
Given I am logged in
When I make payment
Then I can see payment receipt
```
---
I have created a gradle task-
```
task Cucumber()<<{
println 'Running Test'
javaexec {
main = "cucumber.api.cli.Main"
classpath = configurations.cucumberRuntime + sourceSets.main.output + sourceSets.test.output
args =['--format','pretty','--format',
'html:'+System.getProperty("port")+System.getProperty("tag"),
'--format',
'json:'+System.getProperty("port")+'/cucumber.json',
'src/test/resources'
,'--glue','classpath:stepDefinition',
'--tags', System.getProperty("tag")]
}
}
```
Scenario2 steps are read by Gradle task but at the same time Scenario1 steps are not found.
What could be the issue?
|
2018/03/16
| 1,807
| 7,002
|
<issue_start>username_0: I'm a bit confused. If you search about the usefulness of (smart) pointers you get very different opinions.
Almost everybody agrees that one should use smart pointers over normal pointers, but there are also many opinions that you should not use pointers at all in modern C++.
Lets consider this abstract situation (Which is similar to my problem):
You have one class "Clothes" which has a member of type "Hat"
```
class Hat {
public:
enum class HatType{
sombrero,
sun_hat,
helmet,
beanie,
cowboy_hat
};
// some functions
private:
HatType type;
// some members
};
class Clothes {
public:
// some functions
private:
Hat currentHat;
// some other members of other types
};
```
Is there any difference in run time if I change `Hat` to `Hat*` (or a `unique_ptr`)?
(In many functions of `Clothes` you would need to call something from `Hat`)
The reason I ask is because there are many different types of hats.
Like sombreros, sun hats, helmets, beanies and cowboy hats. Right now my class hat has an enumerator which stores the hat type.
The hat type is only relevant in one specific function of `Hat` but this is the most used function.
Right now I use a simple `switch case` in this specific function, and based on the hat type the function is evaluated a bit differently. This worked fine, but I think it would be smarter to make a separate class for every hat type, each inheriting from a single main `Hat` class and overriding the one function.
To do so I think I would have to change the member `currentHat` in `Clothes` to some type of pointer. I researched whether this would have any negative effects on performance (I thought the locations of my `currentHat` object and my `Clothes` object in memory could end up far apart, but I have no idea if this would happen, or whether it would have any negative effects with modern compilers).
During my research I often read that one should avoid pointers, this made me thinking... some of the information I found was also very old and I do not know if this is outdated. Is there any better way to do this?
Has anyone any experience with this kind of problem? Would be good to get some feedback before I spend a lot of time changing my whole project...
Side note: I took `Clothes` and `Hats` only as an examples. In my real application I have different types and I am creating many million of objects from type `Clothes` and I have to call functions of `Clothes` which will call functions of `Hat` many many million times, that is why I'm concerned with run time.
Edit: Maybe it is also worth noting that I have entirely avoided pointers in my application so far, mostly because I read in some books that one should try to avoid pointers :-)
EDIT2: Thank you for all your answers. Please note that part of my question was not how to do it (although the answers were very useful in that depart) but rather if the performance will suffer from this.<issue_comment>username_1: Ok, let's break down the pointer debacle:
Don't use explicit `new`/`delete` in C++
----------------------------------------
There really isn't any exception to this rule. Except if you are writing a library/framework. Or some fancy stuff that requires placement new. But in user code `new`/`delete` should be 100% absent.
This brings us to my next rule:
Don't use raw pointers to denote ownership.
-------------------------------------------
This you will find often on the net as the "Don't use raw pointers" advice/rule. But in my view the problem is not with raw pointers, but with raw pointers that own an object.
For ownership use smart pointers (`unique_ptr`, `shared_ptr`). If a pointer is not an owner of the object then it's ok if you use raw pointers. For instance in a tree like structure you can have `unique_ptr` to the children and a raw pointer to the parent.
You can (arguably) also use a pointer to denote an *optional* value via `nullptr`.
You need to use pointers or references for dynamic polymorphism
---------------------------------------------------------------
(Well... there are other ways, like type erasure but I won't go there in my post) If you want polymorphism then you can't use a value. If you can use reference then do, but most of the time you can't. Which often means you have to use `unique_ptr`.
Upvotes: 4 [selected_answer]<issue_comment>username_2: As you need *different* Heads, you have basically two options:
1) Using a (smart) pointer, this is well known and easy
2) Using std::variant
This is a very different approach! There is no longer any need for a base class and (pure) virtual methods. Instead, use `std::visit` to access the current object in the *tagged union*. But this comes with a cost: dispatching in `std::visit` can be expensive if the compiler cannot optimize the call table inside. But if it can, it simply takes the tag from the variant and uses it as an index into the overloaded function from your visit call, here given as a generic lambda. That is not as fast as an indirect call via a vtable pointer, but only a few instructions more (if it is optimized!).
As always: how you want to go is a matter of taste.
Example:
```
#include <iostream>
#include <variant>

class Head1
{
public:
void Print() { std::cout << "Head1" << std::endl; }
};
class Head2
{
public:
void Print() { std::cout << "Head2" << std::endl; }
};
class Clothes
{
std::variant<Head1, Head2> currentHead;
public:
Clothes()
{
currentHead = Head1();
}
void Do() { std::visit( [](auto& head){ head.Print();}, currentHead); }
void SetHead( int id )
{
switch( id )
{
case 0:
currentHead= Head1();
break;
case 1:
currentHead= Head2();
break;
default:
std::cerr << "Wrong id for SetHead" << std::endl;
}
}
};
int main()
{
Clothes cl;
cl.Do();
cl.SetHead(1);
cl.Do();
cl.SetHead(0);
cl.Do();
}
```
Upvotes: 1 <issue_comment>username_3: Here's a breakdown of your three suggested implementations, and some more:
### Plain member: `Hat currentHat;`
* `Clothes` uniquely owns the `Hat`
* No dynamic allocation
* No polymorphism
### Smart pointer: `std::unique_ptr<Hat> currentHat;`
* `Clothes` still uniquely owns the `Hat`
* Dynamic allocation required
* Polymorphism available, i.e. can hold instances derived from `Hat`
### Raw pointer: `Hat *currentHat;`
* `Clothes` does not own the `Hat`
* Unspecified allocation, need to ensure that the `Hat` outlives the `Clothes`
* Polymorphism still available
* Perfectly fine if these are your requirements. Fight me :)
---
Unlisted contenders
-------------------
### Smart pointer: `std::shared_ptr<Hat> currentHat;`
* `Clothes` shares ownership of the `Hat` with zero or more other `Clothes`
* Same as `std::unique_ptr` otherwise
### Reference: `Hat &currentHat;`
* Similar to a raw pointer
* Is non-rebindable, thus makes the object non-assignable -- hence I don't personally like member references in the general case.
Upvotes: 2
|
2018/03/16
| 1,186
| 4,466
|
<issue_start>username_0: Following is my sample HTTP server. I need to remove the 'Content-length:' header generated in the response. I have tried many approaches and not succeeded. Is there any way to remove the content-length from the server response?
```
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class SimpleHttpServer {
public static void main(String[] args) throws Exception {
HttpServer server = HttpServer.create(new InetSocketAddress(9000), 0);
server.createContext("/test", new TestHandler());
server.setExecutor(null); // creates a default executor
server.start();
}
static class TestHandler implements HttpHandler {
public void handle(HttpExchange t) throws IOException {
byte[] response = "Welcome to Test Server..!!\n".getBytes();
t.sendResponseHeaders(200, response.length);
OutputStream os = t.getResponseBody();
os.write(response);
os.close();
}
}
}
```<issue_comment>username_1: You have to send 0 in the response length, as specified in the [javadoc](https://docs.oracle.com/javase/7/docs/jre/api/net/httpserver/spec/com/sun/net/httpserver/HttpExchange.html#sendResponseHeaders(int,%20long)) for the `sendResponseHeaders`:
>
> responseLength - if > 0, specifies a fixed response body length and that exact number of bytes must be written to the stream acquired from getResponseBody(), or else if equal to 0, then chunked encoding is used, and an arbitrary number of bytes may be written. if <= -1, then no response body length is specified and no response body may be written.
>
>
>
```
t.sendResponseHeaders(200, 0);
```
This means it does not send the length of the response to the browser and does not send the Content-Length header; instead it sends the response using [chunked encoding](https://en.wikipedia.org/wiki/Chunked_transfer_encoding), which, since you indicate this is for a test, should be fine.
>
> Chunked transfer encoding is a streaming data transfer mechanism available in version 1.1 of the Hypertext Transfer Protocol (HTTP). In chunked transfer encoding, the data stream is divided into a series of non-overlapping "chunks". The chunks are sent out and received independently of one another. No knowledge of the data stream outside the currently-being-processed chunk is necessary for both the sender and the receiver at any given time.
>
>
>
Upvotes: 0 <issue_comment>username_2: A workaround could be:
```
t.sendResponseHeaders(200, 0);
```
Note that
>
> If the response length parameter is `0`, then chunked transfer encoding is used and *an arbitrary amount of data* may be sent.
>
>
>
Upvotes: 1 <issue_comment>username_3: ```
Content-Length header is always set, unless it's 0 or -1;
```
If you check the source of [`HttpExchange sendResponseHeaders()`](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/sun/net/httpserver/ExchangeImpl.java#ExchangeImpl.0rspHdrs) you will find this snippet, which contains the relevant logic:
As you can see, when `contentLen == 0` and `!http10`, the `"Transfer-encoding", "chunked"` header is set.
You can use [`getResponseHeaders()`](https://docs.oracle.com/javase/8/docs/api/javax/xml/ws/spi/http/HttpExchange.html#getResponseHeaders--), which returns a mutable map of headers, to set any response headers, **except** `"Date"` and `"Transfer-encoding"` - read the linked source code to see why.
```
if (contentLen == 0) {
    if (http10) {
        o.setWrappedStream (new UndefLengthOutputStream (this, ros));
        close = true;
    } else {
        rspHdrs.set ("Transfer-encoding", "chunked");
        o.setWrappedStream (new ChunkedOutputStream (this, ros));
    }
} else {
    if (contentLen == -1) {
        noContentToSend = true;
        contentLen = 0;
    }
    /* content len might already be set, eg to implement HEAD resp */
    if (rspHdrs.getFirst ("Content-length") == null) {
        rspHdrs.set ("Content-length", Long.toString(contentLen));
    }
    o.setWrappedStream (new FixedLengthOutputStream (this, ros, contentLen));
}
```
If you need more flexibility, you need to use other constructs rather than plain `HttpExchange`. Classes come with constraints and default behavior, and are built in a certain way.
Upvotes: 0
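Putting the advice above together, here is a self-contained sketch that starts the server from the question with `sendResponseHeaders(200, 0)` and then inspects the response headers from a client. Port `0` (an ephemeral port) and the class name are choices made for this example; the printed values are what the JDK's built-in server is expected to produce.

```java
import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class NoContentLengthDemo {

    // Starts the test server; responseLength 0 selects chunked encoding,
    // so no Content-Length header is generated.
    static HttpServer startServer() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/test", exchange -> {
            byte[] response = "Welcome to Test Server..!!\n".getBytes();
            exchange.sendResponseHeaders(200, 0); // 0, not response.length
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(response);
            }
        });
        server.start();
        return server;
    }

    // Fetches a single response header from the running server.
    static String header(int port, String name) throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:" + port + "/test").openConnection();
        try {
            return conn.getHeaderField(name);
        } finally {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = startServer();
        int port = server.getAddress().getPort();
        System.out.println("Content-Length: " + header(port, "Content-Length"));
        System.out.println("Transfer-Encoding: " + header(port, "Transfer-Encoding"));
        server.stop(0);
    }
}
```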
|
2018/03/16
| 132
| 446
|
<issue_start>username_0: Basic HTML 5 form, trying to fire HTML 5 email validation which should not allow `"aaa@aaa"` type of emails, meaning those emails which don't have `"."` near the end.
so emails like below are correct
```
<EMAIL>
<EMAIL>
```
and email like below are incorrect
```
aaa@aaa
```
```html
Email Validation
----------------
```<issue_comment>username_2: try this format
```html
E-mail:
```
Upvotes: -1
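Both code snippets in this thread lost their markup during extraction, so here is a hedged reconstruction of the idea: `type="email"` alone accepts `aaa@aaa`, so a `pattern` attribute is usually added to demand a dot in the domain part. The exact regex below is an assumption for illustration, not the original answerer's; it is checked here in plain JavaScript, which uses the same regex syntax as the `pattern` attribute.

```javascript
// Hypothetical pattern for <input type="email" pattern="[^@\s]+@[^@\s]+\.[^@\s]+">
// The `pattern` attribute is implicitly anchored; in plain JS we anchor
// explicitly with ^ and $.
const emailPattern = /^[^@\s]+@[^@\s]+\.[^@\s]+$/;

// Addresses with a dot in the domain pass; "aaa@aaa" is rejected.
const accepted = ["aaa@aaa.aaa", "aaa@aaa.aaa.aaa"].every(e => emailPattern.test(e));
const rejected = !emailPattern.test("aaa@aaa");

console.log(accepted, rejected); // true true
```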
|
2018/03/16
| 578
| 1,574
|
<issue_start>username_0: I want to return the key of the object whose `ContractID` value equals `10`. So in this example I want to return `0`.
```
{
0 : {ContractID: 10, Name: "dog"}
1 : {ContractID: 20, Name: "bar"}
2 : {ContractID: 30, Name: "foo"}
}
```
I've tried using the `filter` method but it doesn't work how I'd have wanted.
```
var id = objname.filter(p => p.ContractID == 10);
```
This instead returns the array, not the key. How can I return the key?<issue_comment>username_1: Use find on the `Object.keys()`
```js
let obj = {
'0' : {ContractID: 10, Name: "dog"},
'1' : {ContractID: 20, Name: "bar"},
'2' : {ContractID: 30, Name: "foo"}
}
let res = Object.keys(obj).find(e => obj[e].ContractID === 10);
console.log(res);
```
However, your "object" looks more like it should be an array. Either create it directly correct as an array, or convert it to one first. Then use `findIndex()`
```js
let obj = {
'0' : {ContractID: 10, Name: "dog"},
'1' : {ContractID: 20, Name: "bar"},
'2' : {ContractID: 30, Name: "foo"}
};
obj.length = Object.keys(obj).length;
let arr = Array.from(obj);
let key = arr.findIndex(e => e.ContractID === 10);
console.log(key);
```
Upvotes: 2 <issue_comment>username_2: You could simply use a `for in` loop :
```js
var o = {
0 : {ContractID: 10, Name: "dog"},
1 : {ContractID: 20, Name: "bar"},
2 : {ContractID: 30, Name: "foo"}
};
var k;
for(var key in o){
if(o[key].ContractID == 10){
k = key;
break;
}
}
console.log(k);
```
Upvotes: 0
|
2018/03/16
| 634
| 1,846
|
<issue_start>username_0: I want to create a function that accepts a 'dynamic array struct' and uses it to map data from the database \*mgodb
```
type Cats struct {
Meow string
}
func getCatsPagination() {
mapStructResult("Animality","Cat_Col", Cats)
}
type Dogs struct {
Bark string
}
func getDogsPagination() {
mapStructResult("Animality","Dog_Col", Dogs)
}
func mapStructResult(db string, collection string, model interface{}) {
result := []model{} //gets an error here
err := con.Find(param).Limit(int(limit)).Skip(int(offset)).All(&result) // map any database result to 'any' struct provided
if err != nil {
log.Fatal(err)
}
}
```
and gets an error as "model is not a type", why is it?
any answer will be highly appreciated !<issue_comment>username_1: Use find on the `Object.keys()`
```js
let obj = {
'0' : {ContractID: 10, Name: "dog"},
'1' : {ContractID: 20, Name: "bar"},
'2' : {ContractID: 30, Name: "foo"}
}
let res = Object.keys(obj).find(e => obj[e].ContractID === 10);
console.log(res);
```
However, your "object" looks more like it should be an array. Either create it directly correct as an array, or convert it to one first. Then use `findIndex()`
```js
let obj = {
'0' : {ContractID: 10, Name: "dog"},
'1' : {ContractID: 20, Name: "bar"},
'2' : {ContractID: 30, Name: "foo"}
};
obj.length = Object.keys(obj).length;
let arr = Array.from(obj);
let key = arr.findIndex(e => e.ContractID === 10);
console.log(key);
```
Upvotes: 2 <issue_comment>username_2: You could simply use a `for in` loop :
```js
var o = {
0 : {ContractID: 10, Name: "dog"},
1 : {ContractID: 20, Name: "bar"},
2 : {ContractID: 30, Name: "foo"}
};
var k;
for(key in o){
if(o[key].ContractID == 10){
k = key;
break;
}
}
console.log(k);
```
Upvotes: 0
|
2018/03/16
| 648
| 1,898
|
<issue_start>username_0: I am getting this type of json in my $scope of angularjs:
```
$scope.someStuff = {
"id": 2,
"service": "bike",
"min": "22",
"per": "100",
"tax": "1",
"categoryservices": [
{
"id": 32,
"category": {
"id": 1,
"name": "software"
}
},
{
"id": 33,
"category": {
"id": 2,
"name": "hardware"
}
},
{
"id": 34,
"category": {
"id": 3,
"name": "waterwash"
}
}
]
}
```
I want to use an angularjs forEach loop and get only the category names.
My expected output:
```
[{"name":"software"}, {"name":"hardware"}, {"name":"waterwash"}]
```
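Here is a sketch of the transformation the question asks for. `angular.forEach` works the same way as the plain loop below, so this runs without Angular loaded (the commented line shows the `angular.forEach` form):

```javascript
const someStuff = {
  id: 2,
  service: "bike",
  min: "22",
  per: "100",
  tax: "1",
  categoryservices: [
    { id: 32, category: { id: 1, name: "software" } },
    { id: 33, category: { id: 2, name: "hardware" } },
    { id: 34, category: { id: 3, name: "waterwash" } }
  ]
};

const names = [];
// With Angular loaded this would be:
// angular.forEach($scope.someStuff.categoryservices,
//                 function (cs) { names.push({ name: cs.category.name }); });
someStuff.categoryservices.forEach(cs => names.push({ name: cs.category.name }));

console.log(JSON.stringify(names));
// [{"name":"software"},{"name":"hardware"},{"name":"waterwash"}]
```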
|
2018/03/16
| 1,319
| 4,461
|
<issue_start>username_0: I checked the 'allow unknown sources' option so the app can be installed.
java.lang.RuntimeException: Unable to get provider mono.MonoRuntimeProvider: java.lang.RuntimeException: Unable to find application Mono.Android.Platform.ApiLevel\_24 or Xamarin.Android.Platform!E/AndroidRuntime( 8048): at android.app.ActivityThread.installProvider(ActivityThread.java:5571)E/AndroidRuntime( 8048): at android.app.ActivityThread.installContentProviders(ActivityThread.java:5163)E/AndroidRuntime( 8048): at android.app.ActivityThread.handleBindApplication(ActivityThread.java:5103)E/AndroidRuntime( 8048): at android.app.ActivityThread.access$1600(ActivityThread.java:177)E/AndroidRuntime( 8048): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1510)E/AndroidRuntime( 8048): at android.os.Handler.dispatchMessage(Handler.java:102)E/AndroidRuntime( 8048): at android.os.Looper.loop(Looper.java:145)E/AndroidRuntime( 8048): at android.app.ActivityThread.main(ActivityThread.java:5951)E/AndroidRuntime( 8048): at java.lang.reflect.Method.invoke(Native Method)E/AndroidRuntime( 8048): at java.lang.reflect.Method.invoke(Method.java:372)E/AndroidRuntime( 8048): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1388)E/AndroidRuntime( 8048): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1183)E/AndroidRuntime( 8048): Caused by: java.lang.RuntimeException: Unable to find application Mono.Android.Platform.ApiLevel\_24 or Xamarin.Android.Platform!E/AndroidRuntime( 8048): at mono.MonoRuntimeProvider.attachInfo(MonoRuntimeProvider.java:38)E/AndroidRuntime( 8048): at android.app.ActivityThread.installProvider(ActivityThread.java:5568)E/AndroidRuntime( 8048): ... 11 moreE/AndroidRuntime( 8048): Caused by: android.content.pm.PackageManager$NameNotFoundException: Xamarin.Android.PlatformE/AndroidRuntime( 8048): at android.app.ApplicationPackageManager.getApplicationInfo(ApplicationPackageManager.java:305)E/AndroidRuntime( 8048): at mono.MonoRuntimeProvider.attachInfo(MonoRuntimeProvider.java:32)E/AndroidRuntime( 8048): ... 
12 moreV/ApplicationPolicy( 3137): isApplicationStateBlocked userId 0 pkgname com.mycomp.test1W/ActivityManager( 3137): Force finishing activity com.mycomp.test1/md5414c3d8510d5c9d2b651f345e03d9f02.SplashScreenActivityE/android.os.Debug( 3137): ro.product\_ship = true
my app installs on all my devices, including development and non-development devices
but it's not installing on my client's devices<issue_comment>username_1: >
> Unable to get provider mono.MonoRuntimeProvider
>
>
> Unable to find application Mono.Android.Platform.ApiLevel\_24
>
>
>
You are distributing an APK that does not contain the Mono runtime.
Typically **debug builds** are assigned to use the "Shared Mono Runtime" as the Visual Studio IDE installs the runtime separately when you are debugging, one of the benefits is smaller APK sizes to allow faster debug deployment cycles.
[](https://i.stack.imgur.com/O15PV.png)
When creating a **release build** (typically through the archive/deploy process), the "Shared Mono Runtime" option is turned off so the APK produced is totally self-contained.
[](https://i.stack.imgur.com/x8byw.png)
Note: When deploying from the VS IDE during debug sessions, the following APK packages are auto-installed when using the shared runtime:
* Mono.Android.DebugRuntime
* Mono.Android.Platform.ApiLevel\_XX
Upvotes: 2 <issue_comment>username_2: The application that you give didn't have **mono.runtime** attached as part of the app.
you were able to run the app on development machine because **mono.runtime** is installed in those devices.
if you go to **Application Manager** then you can find *mono.runtime* over there
just uninstall *mono.runtime* then restart the device then install the app and open it, it'd crash.
because both app and device doesn't have mono.runtime.
So to fix this issue, you have to uncheck **Use Shared Mono Runtime** so the app always bundles the *mono.runtime* in the app package, and you don't have to worry about whether the device has the mono.runtime installed or not
To do this in your visual studio or xamarin studio make the following selections
**Options -> General -> Android Build** -> Choose **Release** in the configuration -> uncheck **Use Shared Mono Runtime**
Upvotes: 2
|
2018/03/16
| 758
| 2,975
|
<issue_start>username_0: Why am I not able to give permission/authorization to a Google Apps Script that I also made using the same Google account?
It seems like Google doesnt trust myself to use my own Google Apps Script with my own Spreadsheet.
Here is the line of code that breaks everything. If this line doesnt exist, I'm not asked for permission.
```
var sheet = SpreadsheetApp.getActiveSheet();
```
So it's trying to access the spreadsheet that created this Google Apps Script, also made using my account but I cant grant permission.
When I run the line of code above, I am told I need to give permissions, so I do by selecting the account name I am already logged into. I am greeted by this error,
>
> This app isn't verified
>
>
>
which unfortunately does not provide competent documentation to troubleshoot.
[](https://i.stack.imgur.com/ILRlq.png)
Any feedback or help would be much appreciated! Thanks!
|
2018/03/16
| 3,468
| 13,753
|
<issue_start>username_0: The task is to automate OLAP pivot table data filtering. There are some items in pivot field named sPivotFieldName I need to exclude. The code below works pretty fine.
```
With Worksheets(sWorksheetName).PivotTables(sPivotTableName)
With .CubeFields(sCubeFieldName)
.Orientation = xlRowField
.IncludeNewItemsInFilter = True
End With
.PivotFields(sPivotFieldName).HiddenItemsList = vSomeItemsToExclude
End With
```
But the problem appears when I'm trying to change cube field ".Orientation" property's value to xlPageField. Run-time error 1004 fires each time. Here's an example:
```
With Worksheets(sWorksheetName).PivotTables(sPivotTableName)
With .CubeFields(sCubeFieldName)
.Orientation = xlPageField
.IncludeNewItemsInFilter = True
End With
.PivotFields(sPivotFieldName).HiddenItemsList = vSomeItemsToExclude
End With
```
The reason seems to be that items of the fields placed in pagefield aren's visible as they are when placed for example in the rowfield (one can see them as row captions). Or maybe there's something else. What am I missing?<issue_comment>username_1: This functionality obviously isn't available for PageFields. Seems to me a workaround is to use the .VisibleITemsList approach instead, but make sure it doesn't include the items you want to exclude.
To do this, you need to dump all the unfiltered items to a variant, loop the variant looking for the term you want to hide, and if you find it, just replace that element for some other element that you *don't* want to hide. (This saves you having to create a new array without that item in it).
The tricky thing is to get a list of all unfiltered items: .VisibleItemsList won't give it to you if the PivotTable doesn't have some kind of filter applied. So we need to get sneaky by making a copy of the PivotTable, making the PageField of interest a RowField, removing all other fields, and then hoovering up the complete list of items, so we know what should be visible after we remove the ones that should be hidden.
Here's a function that handles filtering no matter whether you're dealing with a RowField or a PageField and no matter whether you want to use the .VisibleItemsList to set the filter, or the .HiddenItemsList
In your particular case, you would call it like so:
FilterOLAP SomePivotField, vSomeItemsToExclude, False
```
Function FilterOLAP(pf As PivotField, vList As Variant, Optional bVisible As Boolean = True)
Dim vAll As Variant
Dim dic As Object
Dim sItem As String
Dim i As Long
Dim wsTemp As Worksheet
Dim ptTemp As PivotTable
Dim pfTemp As PivotField
Dim sPrefix As String
Set dic = CreateObject("Scripting.Dictionary")
With pf
If .Orientation = xlPageField Then
pf.CubeField.EnableMultiplePageItems = True
If Not pf.CubeField.EnableMultiplePageItems Then pf.CubeField.EnableMultiplePageItems = True
End If
If bVisible Then
If .CubeField.IncludeNewItemsInFilter Then .CubeField.IncludeNewItemsInFilter = False
.VisibleItemsList = vList
Else
If .Orientation = xlPageField Then
' Can't use pf.HiddenItemsList on PageFields
' We'll need to manipulate a copy of the PT to get a complete list of visible fields
Set wsTemp = ActiveWorkbook.Worksheets.Add
pf.Parent.TableRange2.Copy wsTemp.Range("A1")
Set ptTemp = wsTemp.Range("A1").PivotTable
With ptTemp
.ColumnGrand = False
.RowGrand = False
.ManualUpdate = True
For Each pfTemp In .VisibleFields
With pfTemp
If .Name <> pf.Name And .Name <> "Values" And .CubeField.Orientation <> xlDataField Then .CubeField.Orientation = xlHidden
End With
Next pfTemp
.ManualUpdate = False
End With
sPrefix = Left(pf.Name, InStrRev(pf.Name, ".")) & "&["
Set pfTemp = ptTemp.PivotFields(pf.Name)
pfTemp.CubeField.Orientation = xlRowField
pfTemp.ClearAllFilters
vAll = Application.Transpose(pfTemp.DataRange)
For i = 1 To UBound(vAll)
vAll(i) = sPrefix & vAll(i) & "]"
dic.Add vAll(i), i
Next i
'Find an item that we know is visible
For i = 1 To UBound(vList)
If Not dic.exists(vList(i)) Then
sItem = vList(i)
Exit For
End If
Next i
'Change any items that should be hidden to sItem
For i = 1 To UBound(vList)
If dic.exists(vList(i)) Then
vAll(dic.Item(vList(i))) = sItem
End If
Next i
.VisibleItemsList = vAll
Application.DisplayAlerts = False
wsTemp.Delete
Application.DisplayAlerts = True
Else
If Not .CubeField.IncludeNewItemsInFilter Then .CubeField.IncludeNewItemsInFilter = True
.HiddenItemsList = vList
End If
End If
End With
End Function
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Someone please show me an example of how this works((
```
Dim pt As PivotTable
Dim pf As PivotField
Set pt = ActiveSheet.PivotTables("Сводная таблица2")
Set pf = pt.PivotFields("[груп бай].[Название клиента].[Название клиента]")
wList = "[груп бай].[Название клиента].&[ООО ""Сеть автоматизированных пунктов выдачи""]"
FilterOLAP pf, wList, False
```
debugging here
```
> If .Name <> pf.Name And .Name <> "Values" And .CubeField.Orientation
> <> xlDataField Then .CubeField.Orientation = xlHidden
```
Upvotes: 0 <issue_comment>username_3: I hope it's not too late for me to contribute an answer, just for posterity's sake.
If you look at the `PivotTable.MDX` property on any OLAP PivotTable, you can see the MDX query which Excel is actually using behind the scenes to populate the data which shows up in the PivotTable. Inspired somewhat by this observation, I thought to myself: shouldn't it be possible to be *even sneakier*, by (a) creating an ADODB connection with the same connection string which the PivotCache uses, (b) putting together an appropriate MDX query ourselves, and (c) reading the result directly into an array in VBA, to which we can then assign the `PivotField.VisibleItemsList` property?
Benefits of this approach include...
* Avoiding the overhead and awkwardness of having to create & destroy a temporary PivotTable to get the full list of items;
* Handling OLAP PivotFields correctly which have more than 1,048,575 members--putting these on Rows with the temporary-PivotTable approach would cause an error, as the PivotTable would exceed the maximum number of rows on a Worksheet; and,
* Using an MDX query that's a bit faster and more efficient than the one Excel would most likely use by default.
Without further ado (or perhaps with further ADO? hehe), here's the VBA subroutine I came up with.
```
' Filter a PivotField in an OLAP PivotTable on either Visible or Hidden items.
Public Sub FilterOLAPPivotField(oPF As PivotField, vItems As Variant, _
Optional ByVal bVisible As Boolean = True)
Dim dictItems As Object
Dim i As Long
Dim sConn As String, sConnItems() As String
Dim sCatalog As String
Dim sQuery As String
Dim oConn As Object
Dim oRS As Object
Dim vRecordsetRows As Variant
Dim dictVisibleItems As Object
' In case something fails while we still have the ADODB Connection or Recordset
' open, this ensures the subroutine will "fail gracefully" and still close them.
' Feel free to add some more error handling if you like!
On Error GoTo Fail
' Turn on "checkbox mode" for selecting more than one filter item, for convenience.
oPF.CubeField.EnableMultiplePageItems = True
' If filtering on Visible items: then we just need to set the PivotField's
' VisibleItemsList property to the vItems array, and we can skip the rest.
If bVisible Then
oPF.VisibleItemsList = vItems
Exit Sub
End If
' All the rest of this subroutine is just for the case where we want our vItems
' to be the *Hidden* items, i.e. so everything *but* those items is visible.
' Read vItems into a Scripting.Dictionary. This is for convenience; we want to use
' its Exists method later. We only really care about the Keys; the Item:=True
' is just a dummy.
Set dictItems = CreateObject("Scripting.Dictionary")
For i = LBound(vItems) To UBound(vItems)
dictItems.Add Key:=vItems(i), Item:=True
Next i
' Get the connection string from the PivotCache of the PivotField's parent PivotTable
' (This assumes it is an OLEDB connection.)
' The connection string is needed to make a separate connection to the server
' with ADODB. It also contains the Initial Catalog, which we also need.
sConn = Replace$(oPF.Parent.PivotCache.Connection, "OLEDB;", vbNullString, Count:=1)
sConnItems = Split(sConn, ";")
For i = LBound(sConnItems) To UBound(sConnItems)
If sConnItems(i) Like "Initial Catalog=*" Then
sCatalog = "[" & Split(sConnItems(i), "=")(1) & "]"
Exit For
End If
Next i
' Construct an MDX query to send to the server, which just gets the UNIQUE_NAME of
' all the members in the hierarchy we're interested in.
sQuery = Join$(Array( _
"WITH MEMBER [Unique Name] AS", _
oPF.CubeField.Name & ".CURRENTMEMBER.UNIQUE_NAME", _
"SELECT [Unique Name] ON 0,", _
oPF.Name, "ON 1 FROM", _
sCatalog _
))
' Using ADODB, get the result of the query, and dump it into a Variant array.
Set oConn = CreateObject("ADODB.Connection")
Set oRS = CreateObject("ADODB.Recordset")
oConn.Open sConn
oRS.Open sQuery, oConn
vRecordsetRows = oRS.GetRows()
' The Recordset rows are a multidimensional array with 2 columns: column 0 contains
' the member captions, and column 1 (which is the one we want) contains the unique names.
' So we loop through the result, adding any member which was *not* in vItems to
' a new Scripting.Dictionary.
Set dictVisibleItems = CreateObject("Scripting.Dictionary")
For i = 0 To oRS.RecordCount - 1
If Not dictItems.Exists(vRecordsetRows(1, i)) Then
dictVisibleItems.Add Key:=vRecordsetRows(1, i), Item:=True
End If
Next i
' dictVisibleItems.Keys now contains all member which were *not* in vItems.
' All that remains is to set the PivotField's VisibleItemsList to this array!
oPF.VisibleItemsList = dictVisibleItems.Keys
Fail:
' Last but not least: don't forget to close the ADODB Connection and Recordset.
' If we got to this point normally, then (despite the 'Fail' label) we just close
' them uneventfully and end.
' If we jumped here because of an error, then we see a MsgBox at this point, but the
' subroutine will try to "fail gracefully" and still close the Connection & Recordset.
' Just in case we somehow ended up down here via an error raised *before* the
' Connection or Recordset was ever open, we also have "On Error Resume Next".
' Otherwise, the Close method itself might raise an error, sending us back to 'Fail'
' and trapping the subroutine in an infinite loop!
If Err Then
MsgBox "Something went horribly wrong", vbCritical, "Error"
Err.Clear
End If
On Error Resume Next
oRS.Close
oConn.Close
End Sub
```
If you're interested in using it in your own Workbook, then just copy it into a standard module, and call it with the relevant arguments.
For example: `FilterOLAPPivotField(ActiveCell.PivotField, Items, False)` would filter the PivotField under the active cell so that it contains all items *except* those in the `Items` array.
An oddity I observed while testing this on my machine: sometimes, `CubeField.EnableMultiplePageItems` seems to think it's a read-only property if I have just opened a Workbook with a PivotField I'm trying to manipulate. Because the subroutine writes to this property, this can cause it to fail. Clicking once in the UI to open the filter dropdown always seems to make the problem go away. Not sure exactly what's behind this... maybe the PivotCache is not loading until I actually interact with the PivotTable? If anyone else has some insight, I'd be quite interested to learn about what causes this.
---
One last side note: if you plan on doing some manual finagling of a bunch of PivotFields on an existing Excel Workbook, then one thing you might also consider would be to put a button on your Quick Access Toolbar which just inverts all the filters on the PivotField under the active cell, i.e. which includes everything that's currently filtered & filters everything that's currently included. Or, you might want to have a UserForm with a CommandButton which does something similar. You could use the above subroutine to create such a button, by having another sub which calls it, like so:
```
' Invert the filters on the OLAP PivotField under the active cell.
Public Sub btnInvertOLAPPivotFieldFilter_Click()
Dim oPF As PivotField
Set oPF = ActiveCell.PivotField
oPF.CubeField.EnableMultiplePageItems = True
FilterOLAPPivotField oPF, oPF.VisibleItemsList, False
End Sub
```
Upvotes: 1
|
2018/03/16
| 1,606
| 6,545
|
<issue_start>username_0: I'm trying to take a picture from device camera or pick it up from the gallery and upload it to the server via the volley
everything works fine but the image quality is so bad
```
private void dispatchTakePictureIntent()
{
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
startActivityForResult(intent , CAMERA_REQUEST_CODE);
}
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
switch (requestCode) {
case CAMERA_REQUEST_CODE:
if ( resultCode == RESULT_OK){
Bundle bundle = data.getExtras();
bitmap = (Bitmap) bundle.get("data");
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
}
break;
```
and getParams method :
```
byte[] a = convertBitmapToByteArrayUncompressed(bitmap);
params.put("img" , Base64.encodeToString(a , Base64.DEFAULT));
public static byte[] convertBitmapToByteArrayUncompressed(Bitmap bitmap){
ByteBuffer byteBuffer = ByteBuffer.allocate(bitmap.getByteCount());
bitmap.copyPixelsToBuffer(byteBuffer);
byteBuffer.rewind();
return byteBuffer.array();
}
```<issue_comment>username_1: From Nougat onward, taking a picture works differently.
Create an image file first:
```
String mCurrentPhotoPath;
private File createImageFile() throws IOException {
// Create an image file name
String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
String imageFileName = "JPEG_" + timeStamp + "_";
File storageDir = getExternalFilesDir(Environment.DIRECTORY_PICTURES);
File image = File.createTempFile(
imageFileName, /* prefix */
".jpg", /* suffix */
storageDir /* directory */
);
// Save a file: path for use with ACTION_VIEW intents
mCurrentPhotoPath = image.getAbsolutePath();
return image;
}
```
Then dispatch take picture intent
```
static final int REQUEST_TAKE_PHOTO = 1;
private void dispatchTakePictureIntent() {
Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
// Ensure that there's a camera activity to handle the intent
if (takePictureIntent.resolveActivity(getPackageManager()) != null) {
// Create the File where the photo should go
File photoFile = null;
try {
photoFile = createImageFile();
} catch (IOException ex) {
// Error occurred while creating the File
...
}
// Continue only if the File was successfully created
if (photoFile != null) {
Uri photoURI = FileProvider.getUriForFile(this,
"com.example.android.fileprovider",
photoFile);
takePictureIntent.putExtra(MediaStore.EXTRA_OUTPUT, photoURI);
startActivityForResult(takePictureIntent, REQUEST_TAKE_PHOTO);
}
}
}
```
In your onActivityResult, check for RESULT\_OK for a successful capture.
```
if (requestCode == REQUEST_TAKE_PHOTO && resultCode == Activity.RESULT_OK)
```
You already got the image path. Now use `mCurrentPhotoPath` for the upload process.
Also, you need to implement file provider.
In your Manifest add this:
```
...
...
```
Create an XML file in the xml resource directory and add this:
```
<?xml version="1.0" encoding="utf-8"?>
```
Now you will get a full-size image from a camera.
Source: <https://developer.android.com/training/camera/photobasics.html>
Upvotes: 4 [selected_answer]<issue_comment>username_2: Use `getParcelableExtra()` instead of `getExtras()` for small images.
```
Bitmap bitmap = (Bitmap) intent.getParcelableExtra("data");
```
If your images are too large, you have to compress them and send them to the other activity. Then you can get the compressed bitmap and uncompress it in the second activity. Try the code below.
1st Activity
```
Intent intent = new Intent(this, SecondActivity.class);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream);
byte[] bytes = stream.toByteArray();
intent.putExtra("bitmap",bytes);
```
2nd Activity
```
byte[] bytes = getIntent().getByteArrayExtra("bitmap");
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
```
Upvotes: 0 <issue_comment>username_3: You have to use a multipart entity to send the image without compression; using a multipart entity, your image quality will also be maintained. Please follow this to send an image using Volley:
```
public class MultipartReq extends JsonObjectRequest {
private static final String FILE_PART_NAME = "file";
private static final String STRING_PART_NAME = "text";
private final File mFilePart;
//private final String mStringPart;
MultipartEntityBuilder entityBuilder = MultipartEntityBuilder.create();
HttpEntity httpEntity;
Context context;
private Map<String, String> params;
public MultipartReq(Context context, int method, String url, JSONObject jsonRequest, Response.Listener<JSONObject> listener, Response.ErrorListener errorListener, File file, Map<String, String> params) {
super(method, url, jsonRequest, listener, errorListener);
this.context = context;
mFilePart = file;
entityBuilder.setMode(HttpMultipartMode.BROWSER_COMPATIBLE);
this.params = params;
buildMultipartEntity();
httpEntity = entityBuilder.build();
}
private void buildMultipartEntity() {
try {
if (mFilePart.exists()) {
entityBuilder.addBinaryBody(FILE_PART_NAME, mFilePart, ContentType.create(mimeType), mFilePart.getName());
try {
if(!params.isEmpty()){
for (String key: params.keySet()){
entityBuilder.addPart(key, new StringBody(params.get(key), ContentType.TEXT_PLAIN));
}
}
} catch (Exception e) {
VolleyLog.e("UnsupportedEncodingException");
}
} else {
ShowLog.e("no such file");
}
} catch (Exception e) {
ShowLog.e("UnsupportedEncodingException");
}
}
@Override
public Map<String, String> getHeaders() throws AuthFailureError {
HashMap<String, String> params = new HashMap<>();
return params;
}
@Override
public String getBodyContentType() {
return httpEntity.getContentType().getValue();
}
@Override
public byte[] getBody() {
ByteArrayOutputStream bos = new ByteArrayOutputStream();
try {
httpEntity.writeTo(bos);
} catch (IOException e) {
VolleyLog.e("IOException writing to ByteArrayOutputStream");
}
return bos.toByteArray();
}
@Override
protected void deliverResponse(JSONObject response) {
super.deliverResponse(response);
}
}
```
Upvotes: 2
|
2018/03/16
| 547
| 1,957
|
<issue_start>username_0: I'm trying to load a template in `print_table()` function. If I uncomment the include\_once code above, it will work. Why? the `get_template function` does exactly the same. As it is now it says $people is undefined.
```
function get_template( $template_name ) {
include_once __DIR__ . DIRECTORY_SEPARATOR . $template_name;
}
function print_table( $people ) {
// include_once __DIR__ . DIRECTORY_SEPARATOR . 'html/table.php';
get_template( 'html/table.php' );
}
```
in html/table.php
```
| Name | Author | Date |
| --- | --- | --- |
<?php dd($people); ?>
```<issue_comment>username_1: The included file is evaluated in the scope of the function including it. `print_table` has a variable `$people` in scope. `get_template` does not, since you're not passing the variable to `get_template`; it only has a `$template_name` variable in scope.
Also see <https://stackoverflow.com/a/16959577/476>.
Upvotes: 2 <issue_comment>username_2: `$people` is an argument of function `print_table()`, that's why it is available in the file included by `print_table()`.
But it is not available in the file included by the `get_template()` function because in the context of the `get_template()` function there is no variable named `$people` defined.
Read about [variable scope](http://php.net/manual/en/language.variables.scope.php).
Upvotes: 1 <issue_comment>username_3: That's because of variable scope. `$people` is not defined in your function `get_template()` (as well said in other answers).
To be **reusable**, you also could pass an associative array that contains all variables and use [`extract()`](https://php.net/extract) to use them as variable in your template:
```
function get_template($template_name, $data) {
extract($data);
include_once __DIR__ . DIRECTORY_SEPARATOR . $template_name;
}
function print_table($people) {
get_template('html/table.php', ['people'=>$people]);
}
```
Upvotes: 2 [selected_answer]
|
2018/03/16
| 594
| 2,026
|
<issue_start>username_0: I am using Python to parse an excel spreadsheet and write documentation to a word document. I want to highlight binary substrings e.g. '001' by having the numbers appear in dark red. I can find the substrings using re being any text which is a binary number sequence between single quotes, that is not my question. The question is how do I put the highlighting on just those characters within the paragraph? The format I would like to end up with is like the following:
[](https://i.stack.imgur.com/4Rn5I.png)
Any help would be appreciated.<issue_comment>username_1: ```
from docx import Document
from docx.shared import RGBColor
document = Document()
paragraph = document.add_paragraph()
binary_run = paragraph.add_run('your_binary_code')
binary_run.font.color.rgb = RGBColor(0xC0, 0x00, 0x00)  # dark red
cmplt_run = paragraph.add_run('rest of the text goes here')
```
This will change the font color of your binary code to the color you provide. Refer to the [python-docx documentation](http://googleweblight.com/i?u=http://python-docx.readthedocs.io/en/latest/user/text.html&hl=en-IN#a.font-color "Python-docx documentation for setting font color") to understand more.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Thanks to @username_1 for the inspiration, here is my working code:
```
def highlight_binary(par, text):
''' Add text to paragraph while highlighting binary in dark red
intputs:
par => the paragraph to add text to
text => the text with 0 or more binary strings in it
'''
BINARY_STR = re.compile("'[01]+'")
for occurance in re.findall(BINARY_STR, text):
pos = text.find(occurance)
par.add_run(text[:pos+1]) # +1 -> opening quote in normal text
text = text[pos+len(occurance)-1:] # -1 -> closing quote in normal text
hl = par.add_run(occurance[1:-1]) # remove quotes from highlighted part
hl.font.color.rgb = RGBColor(192,0,0)
if text: # any text left?
par.add_run(text)
```
Upvotes: 0
|
2018/03/16
| 842
| 3,449
|
<issue_start>username_0: How can I choose according value in AsyncStorage which screen should be displayed? I don't know why setting screen value 'Home' to InitialScreen variable doesn't work?
Once I log in on the **login.js** screen and close the app, after launching the app again I am navigated back to **login.js**. But now I want to go to the **home.js** screen.
Parent's file **routes.js**:
```
let InitialScreen
const RoutesNavigation = StackNavigator({
Login: { screen: Login },
Home: { screen: Home }
}, {
initialRouteName: InitialScreen,
navigationOptions: {
header: false,
}
});
export default class App extends Component {
constructor(props) {
super(props);
value = AsyncStorage.getItem('name');
if (value !== null) {
InitialScreen = 'Home'; //This doesn't change Initial screen!!!
console.log("JJJJJJJJJJJJJJJJJJ routes.js value !== null ");
}
}
render() {
return (
);
}
}
```
This is **login.js**, where I store value from received json:
```
export default class Login extends Component {
constructor(props) {
super(props);
this.state = {
username: '',
password: '',
}
}
render() {
return (
this.setState({ username })}
underlineColorAndroid='transparent'
/>
this.setState({ password })}
secureTextEntry={true}
underlineColorAndroid='transparent'
/>
Log in
);
}
login = () => {
var formData = new FormData();
formData.append('userName', this.state.username);
formData.append('password', <PASSWORD>);
fetch('http://....', {
method: 'POST',
body: formData
})
.then((response) => response.json())
.then((responseJson) => {
console.log("JJJJJJJJJJJJJJJJJJJJJJJJJ name: " + responseJson.name);
AsyncStorage.setItem('name', responseJson.name);
this.props.navigation.navigate('Home');
})
.catch(() => {
console.log("JJJJJJJJJJJJJJJJJJ Wrong connection");
alert('Wrong connection');
})
}
}
```
This is **home.js**:
```
export default class Home extends Component {
render() {
return (
Member area. You are logged in.
Log out
);
}
logout = () => {
AsyncStorage.removeItem('name');
this.props.navigation.navigate('Login');
console.log("JJJJJJJJJJJJJJJJJJ Logged out");
}
}
```<issue_comment>username_1: Create your navigator in here:
```
value = AsyncStorage.getItem('name');
if (value !== null) {
InitialScreen = 'Home';
const RoutesNavigation = StackNavigator({
Login: { screen: Login },
Home: { screen: Home }
},{
initialRouteName: InitialScreen,
navigationOptions: {
header: false,
}
});
}
```
Because you are creating your navigator at the top with an empty initial route, but you only change the value here, you must create the navigator here instead.
Hope it will work.
Upvotes: 1 <issue_comment>username_2: AsyncStorage is async. Because of the asynchronous nature of JS, the thread won't wait for the result of this:
```
AsyncStorage.getItem('name');
```
Use the callback with getItem:
```
AsyncStorage.getItem('name',(error,result) => {
if (result!== null) {
//do something
}
});
```
Upvotes: 0
|
2018/03/16
| 334
| 1,215
|
<issue_start>username_0: i have 2 routes with POST methods
```
Route::post('/payment/checkOrder','Finance\PaymentCallbackController@checkOrder');
Route::post('/payment/paymentAviso', 'Finance\PaymentCallbackController@paymentAviso');
```
how can i create legacy links for these routes?
```
/plat.php?paysystem=5&method=checkOrder
/plat.php?paysystem=5&method=paymentAviso
```
|
2018/03/16
| 595
| 2,377
|
<issue_start>username_0: I have this simple function:
```
bool foo(const std::string& str)
{
static const std::string ky[] = { "KeyWord1", "KeyWord2", "KeyWord3" };
static const std::set<std::string> kySet(ky, ky + sizeof(ky)/sizeof(ky[0]));
return kySet.find(str) != kySet.end();
}
```
It basically holds a set of pre-set keywords, and tests whether a given string is one of the keywords.
I use `static` because I want only one copy of the pre-set variables.
This will run in a multi-thread environment and on different architectures. However, I was told this is only thread-safe on Linux but will break on AIX and Solaris.
I couldn't understand why it would break?<issue_comment>username_1: That can only be true if the compiler does not implement the C++ standard. Otherwise thread-safe dynamic initialization of variables with static storage duration is guaranteed by the standard, see [[stmt.dcl]](http://eel.is/c++draft/stmt.dcl#4).
>
> Dynamic initialization of a block-scope variable with static storage duration or thread storage duration is performed the first time control passes through its declaration; such a variable is considered initialized upon the completion of its initialization. If the initialization exits by throwing an exception, the initialization is not complete, so it will be tried again the next time control enters the declaration. **If control enters the declaration concurrently while the variable is being initialized, the concurrent execution shall wait for completion of the initialization.** [...]
>
>
>
(Emphasis is mine)
Upvotes: 0 <issue_comment>username_2: Quoting from the 03 standard
<http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1905.pdf>
section 6.7
>
> An implementation is permitted to perform early initialization of
> other local objects with static storage duration under the same
> conditions that an implementation is permitted to statically
> initialize an object with static storage duration in namespace scope
> (3.6.2). Otherwise such an object is initialized the first time
> control passes through its declaration; such an object is considered
> initialized upon the completion of its initialization.
>
>
>
There is no mention of threads; and as such you should consider function statics not thread safe unless the function had been called while single threaded.
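If you can't rely on the compiler providing thread-safe function statics (C++11 "magic statics"), a portable workaround is to make the one-time initialization explicit. A hedged sketch of the same lookup, guarded with `std::call_once` (itself C++11 — on genuinely pre-C++11 toolchains, `pthread_once` or eager namespace-scope initialization plays the same role):

```cpp
#include <mutex>
#include <set>
#include <string>

namespace {
    // Built exactly once, guarded by the once_flag below, instead of
    // relying on compiler-provided thread-safe function statics.
    std::set<std::string> g_keywords;
    std::once_flag g_keywords_once;

    void init_keywords() {
        g_keywords.insert("KeyWord1");
        g_keywords.insert("KeyWord2");
        g_keywords.insert("KeyWord3");
    }
}

bool foo(const std::string& str)
{
    // call_once guarantees init_keywords runs exactly once, even if
    // several threads hit foo() concurrently for the first time.
    std::call_once(g_keywords_once, init_keywords);
    return g_keywords.find(str) != g_keywords.end();
}
```

Either way, the initialization happens exactly once no matter how many threads call `foo` concurrently.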
Upvotes: 3 [selected_answer]
|
2018/03/16
| 469
| 1,945
|
<issue_start>username_0: I am trying to add an action bar to my layout but it is giving me a multiple root tag error.
Here is my code.
please help me
EDIT:
Here is the full xml
```
<?xml version="1.0" encoding="utf-8"?>
```
|
2018/03/16
| 458
| 1,628
|
<issue_start>username_0: I was wondering what would be more performant: native functions of the language that imply 2 iterations, or a simple `for` loop.
The idea is to find the index, in an array of objects, of the element whose `filterId` property matches a concrete value.
The solution with a `for` would be this one, whose runtime is O(n):
```
for (i = 0; i < entries.length; i++) {
if (entries[i].filterId === filterId) {
return i;
}
}
```
or this other solution, which internally must use 2 loops: one for the `map` and another one for the `indexOf`. However, these are JS functions which are optimized internally by the JS engine. Runtime: O(2n).
```
entries.map(item=>item.filterId).indexOf(filterId);
```
some enlightenment about this?<issue_comment>username_1: You could use [Array.prototype.findIndex](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/findIndex).
```
entries.findIndex(item => item.filterId === filterId)
```
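A runnable sketch with made-up sample data (the `filterId` values here are just placeholders):

```javascript
const entries = [
  { filterId: 'a' },
  { filterId: 'b' },
  { filterId: 'c' },
];
const filterId = 'b';

// One pass that short-circuits at the first match; -1 if nothing matches
const index = entries.findIndex(item => item.filterId === filterId);
console.log(index); // 1
```

Unlike the `map(...).indexOf(...)` pair, this makes a single pass and never builds an intermediate array.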
Upvotes: 1 <issue_comment>username_2: you can make it using widely available array capabilities and still have a complexity of O(n).
For instance you can use `array.some`.
Advantage of `array.some`: it is available in almost all browsers (in IE since IE9).
See <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/some> for documentation
```js
const data = [{id:1},{id:2},{id:3},{id:4}]
let foundIndex = null;
data.some((value, index) => {
if(value.id === 3) {
foundIndex = index;
return true;
}
return false;
});
console.log(foundIndex);
```
Upvotes: 0
|
2018/03/16
| 628
| 2,395
|
<issue_start>username_0: I've got a table which holds email addresses. But over the years of use this table got messed up and some of the email addresses got saved in the following format.
`<NAME>`
Probably because some user copy-pasted the email address from another application. In an effort to sanitize this column I'm searching for a single SQL query to perform this sanitizing.
I'm a little bit stuck on how to approach this problem in a pure MySQL way. That is, how to transform the above-mentioned format (and only that format) to only the email address between the < >.
Filtering those 'wrong' emails is fairly simple:
`SELECT * FROM table WHERE email like '%<%>%'`
But then....
```
UPDATE table
SET email = ???
WHERE email like '%<%>%'
```<issue_comment>username_1: If I did understand your needs, that's your solution:
```
UPDATE table
SET email = REPLACE(REPLACE(SUBSTRING(email,instr(email,'<')),'<',''),'>','')
WHERE email like '%<%>%'
```
or
```
UPDATE table
SET email = REPLACE(SUBSTRING(email,instr(email,'<')+1),'>','')
WHERE email like '%<%>%'
```
Tried [here](http://sqlfiddle.com/#!9/d81017/5).
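To sanity-check the extraction logic outside MySQL, the same transformation can be sketched in Python (a hypothetical helper, not part of the query; the sample address is made up):

```python
def sanitize(email):
    # Keep only the part between '<' and '>', mirroring
    # SUBSTRING(email, INSTR(email, '<') + 1) with the '>' stripped.
    start = email.find('<')
    end = email.find('>')
    if start != -1 and end > start:
        return email[start + 1:end]
    return email  # already clean, leave untouched

print(sanitize("John Doe <john@doe.com>"))  # john@doe.com
print(sanitize("plain@example.com"))        # plain@example.com
```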
Upvotes: 2 [selected_answer]<issue_comment>username_2: I modified [@username_1's answer](https://stackoverflow.com/a/49319003/2730233) so that it also sanitizes emails that have `<` but not `>`, and `>` but not `<`. I ran into that scenario (clients' databases not properly sanitized).
The following is an example via a select so you can see the result that you'll get, but you can easily adapt it as an update:
```
SELECT
CASE
WHEN
instr(email,'<') > 0 AND instr(email,'>') > 0
THEN
REPLACE(
REPLACE(
SUBSTRING(
email,
instr(email,'<')
),'<',''
),'>',''
)
WHEN
instr(email,'<') > 0 AND instr(email,'>') = 0
THEN
SUBSTRING(
email,
instr(email,'<') + 1,
CHAR_LENGTH(email)
)
WHEN
instr(email,'<') = 0 AND instr(email,'>') > 0
THEN
SUBSTRING(
email,
1,
instr(email,'>') - 1
)
ELSE email
END AS email
FROM table
WHERE email LIKE '%<%' OR email LIKE '%>%';
```
Upvotes: 0
|
2018/03/16
| 822
| 2,617
|
<issue_start>username_0: I have this code,
I use it to transpose a list of numbers and then separate them with a comma,
It works well for every list, if numbers are between 0 and 99.
However, if the list contains numbers with more than 2 characters, the code will give the expected result only if at least one number in the list has 2 characters or less.
If every number in the list has more than 2 characters, it will give me the transposed numbers without the comma separator.
Could you please advise
```
Option Explicit
Sub colonne_a_ligne()
'On compte le nombre de Trades
Dim Number_of_Trade As Integer
Number_of_Trade = Worksheets("Feuil1").Range("A:A").Cells.SpecialCells(xlCellTypeConstants).Count
Dim m As Integer
m = Number_of_Trade + 1
'On transpose les trades en ligne
Dim sourceRange As Range
Dim targetRange As Range
Set sourceRange = ActiveSheet.Range(Cells(1, 1), Cells(Number_of_Trade, 1))
Set targetRange = ActiveSheet.Cells(2, 2)
sourceRange.Copy
targetRange.PasteSpecial Paste:=xlPasteValues, Operation:=xlNone, SkipBlanks:=False, Transpose:=True
'On met en forme dans le format
Dim sisk As String
Dim row As Long
Dim col As Long
For row = 2 To 2
sisk = vbNullString
For col = 2 To m
If VBA.Len(sisk) Then sisk = sisk & ","
sisk = sisk & Cells(row, col)
Next col
Worksheets("Feuil1").Cells(3, 2) = sisk
Next row
End Sub
```
[](https://i.stack.imgur.com/NFCa0.png)
[](https://i.stack.imgur.com/lGoNH.png)
[](https://i.stack.imgur.com/8DrPg.png)<issue_comment>username_1: This is a number formatting issue; specifically with the thousands separator. The second image is showing a true number with a space as the thousands separator but rounded off to 15 digit precision.
This should resolve the attempted auto-conversion to a number.
```
Dim sisk As String
Dim row As Long
Dim col As Long
For row = 2 To 2
sisk = vbNullString
For col = 2 To m
sisk = sisk & "," & Cells(row, col).TEXT
Next col
Worksheets("Feuil1").Cells(3, 2).NUMBERFORMAT = "@"
Worksheets("Feuil1").Cells(3, 2) = MID(sisk, 2)
Next row
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Based on the data you've shown (not tested):
```
Sub colonne_a_ligne()
With Worksheets("Feuil1")
.Range("A2").Value = Join(Application.Transpose(.Range("A1", .Cells(.Rows.Count, 1).End(xlUp)).Value), ",")
End With
End Sub
```
Upvotes: 2
|
2018/03/16
| 606
| 2,173
|
<issue_start>username_0: I have a products table which has a column named quantity. I also have a pivot table named member\_product which has a quantity column as well.
I want to be able to subtract the quantity in the member\_product table from the quantity in the products table.
Similar to updating the quantity of an inventory, with member\_product as the cart. I can't seem to figure it out.
```
Tables
products
id | name | quantity
member_product
product_id | member_id | quantity
```
view
```
| Name | Qty |
| --- | --- |
@foreach($members->products as $p)
| {{$p->name}} | {{$p->pivot->qty}} |
@endforeach
```
controller
```
public function updateProduct(Request $request)
{
$productid = $request['productid'];
$qty = $request['qty'];
$products = DB::table('products')
->whereIn('id', $productid)
->update(['qty' => DB::raw('qty - 1')]);
//->update(['qty' => DB::raw('qty - $qty]); (i want to execute something like this)
}
```
With my current code, I get the needed ids and can also run the `DB::raw('qty - 1')`.
Is it possible to insert a value inside `DB::raw` so it will use the qty of the ids specified by my whereIn, which is from my member\_product table? Or am I approaching this the wrong way?
I'm having a hard time dealing with arrays so please bear with me.<issue_comment>username_1: Yes, it is possible. Try to do this in this way
```
public function updateProduct(Request $request)
{
$productid = $request['productid'];
$qty = $request['qty'];
$products = DB::table('products')
->join('member_product','member_product.product_id','=','products.id')
->whereIn('products.id', $productid)
->update(['products.quantity' => DB::raw("products.quantity - member_product.quantity")]);
}
```
You need to use table names before field names to avoid ambiguous column names in query.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The better way would be to define a belongsToMany on the member and then update as follows...
```
Member::find(1)->products()->updateExistingPivot($productId, [
'qty' => 1
]);
```
You will of course need to update your product qty also.
Upvotes: 0
|
2018/03/16
| 442
| 1,492
|
<issue_start>username_0: Please help me to install librdkafka on windows xampp for php development.
PHP : 7.1.12, x86, Thread Safe, MSVC14
I downloaded compatible package from <https://pecl.php.net/package/rdkafka/3.0.5/windows>
Copied `php_rdkafka.dll` to `ext` folder of my php xampp and `librdkafka.dll` to `System32` folder (also to `ext` folder).
But the extension is not working. I am getting following error:
>
> PHP Warning: PHP Startup: Unable to load dynamic library
> 'C:\xampp\php\ext\php\_rdkafka.dll' - The specified module could not be
> found. in Unknown on line 0
>
>
>
I suspect that `librdkafka` is not properly installed.
|
2018/03/16
| 289
| 1,229
|
<issue_start>username_0: I am coming from the web world where DB connection opening is on app server start and closing on its close. Currently I need to create a Swing app with db connection. Most probably I will do this without connection pool, but it's a consideration for me and it's not related to my problem. I will use SQLite db.
I can open a connection in the main method where I am creating the main `JFrame`, but where should I close it? In my opinion the best would be if it's closed on frame close - but how?
How do I properly close the DB connection exactly when the main window (the program) is closed?<issue_comment>username_1: Make the title bar read-only so the user can not close the JFrame by clicking the upper-right cross button. Add a JButton to close the JFrame. In its actionPerformed method, on click of the button, close the DB connection and call System.exit() to exit the application properly.
Upvotes: -1 <issue_comment>username_2: You can do this.
```
JFrame frame = new JFrame();
frame.addWindowListener(new WindowAdapter()
{
@Override
public void windowClosing(WindowEvent e)
{
super.windowClosing(e);
// Do your disconnect from the DB here.
}
});
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 226
| 882
|
<issue_start>username_0: I have a Gradle project which uses the Lombok jar. I have added the below dependency in Gradle (Gradle version: 4.5.1):
```
compileOnly group:'org.projectlombok',name:'lombok', version: '1.16.20'
```
gradle build in the command prompt is not working
|
2018/03/16
| 251
| 979
|
<issue_start>username_0: How can I change `java.io.tmpdir` folder for my Hadoop 3 Cluster running on YARN?
By default it gets something like `/tmp/***`, but my `/tmp` filesystem is too small for everything the YARN job will write there.
Is there a way to change it?
I have also set `hadoop.tmp.dir` in `core-site.xml`, but it looks like it is not really used.
|
2018/03/16
| 736
| 2,314
|
<issue_start>username_0: [Image to show the problem](https://i.stack.imgur.com/gQ0ry.png)Here is the code to illustrate the problem:
```
# -*- coding:utf-8 -*-
text = u"严"
print text
```
If I run the code above in VSCode debug, it prints "涓" instead of "严", which is the result of the first 2 bytes (\xe4\xb8) of u"严" in UTF-8 (\xe4\xb8\xa5) being decoded in the gbk codec. \xe4\xb8 in gbk is "涓".
However, if I run the same code in PyCharm it prints "严" exactly as I expected. And it is the same if I run the code in PowerShell.
It is weird that the VSCode python debugger behaves differently from the python interpreter. How can I get the print result correct? I do not think adding a decode("gbk") at the end of every text would be a good idea.
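For reference, the byte-level mix-up described above can be reproduced directly (Python 3 syntax, purely for illustration):

```python
text = "严"
utf8_bytes = text.encode("utf-8")
print(utf8_bytes)  # b'\xe4\xb8\xa5'

# A console decoding GBK consumes two bytes per Chinese character, so
# the first two UTF-8 bytes are misread as one different character
# (the question reports it as "涓").
garbled = utf8_bytes[:2].decode("gbk")
print(len(garbled), garbled != text)
```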
My Environment data
-------------------
* VS Code version: 1.21
* VSCode Python Extension version : 2018.2.1
* OS and version: Windows 10
* Python version : 2.7.14
* Type of virtual environment used : No<issue_comment>username_1: If you open your python file in VS2017 you can do the following:
Go to:
1. File ->
2. Save selected item as ->
3. click on the down-arrow next to the "Save" button
4. click "Save With Encoding..."
5. select the type of encoding you need...
6. if the .py is already saved then overwrite the file > select "yes"

> select for example: `"Chinese Simplified (GB18030) - Codepage 54936"`
Also, add the following on line 2 of your .py file:
`# -*- coding: gb18030 -*-` or `# -*- coding: gb2312 -*-`
Those encodings accept your 严 character.
Nice link to an encoder/decoder tester [here](http://string-functions.com/encodedecode.aspx).
Upvotes: -1 <issue_comment>username_2: 1. For Windows users, in your System Variables, add a `PYTHONIOENCODING` variable, change its value to `UTF-8`, then restart vscode; this worked on my pc.
2. Modify `task.json` file in vscode, I am not sure if it will still work on version 2.0.
You can find it here:[Changing the encoding for a task output](https://code.visualstudio.com/docs/editor/tasks#_changing-the-encoding-for-a-task-output)
or here in github:
[Tasks should support specifying the output encoding](https://github.com/Microsoft/vscode/issues/3550)
3. add this before you start a py script:
```
import io
import sys
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf8')
```
Upvotes: 2
|
2018/03/16
| 672
| 2,295
|
<issue_start>username_0: In my application, a new log file will be generated every time my application starts. However, this is not what I want, because sometimes I would end up with 10 log files due to opening and closing my application 10 times at different times.
I want my log file to be named `Log_LOGFILECREATEDDATE_LOGFILECREATEDTIME.txt`. All subsequent events performed in that application is to be logged to just that log file no matter how many times I start the application. A new log file should be created only if the old log file is renamed or deleted.
**Code in App.config**
```
```
**How I log things in my application**
```
public class ABC{
private static readonly log4net.ILog logMe = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
....
function A(int a)
{
...
logMe.Info("function A done!")
}
}
```
|
2018/03/16
| 1,185
| 3,643
|
<issue_start>username_0: I have the following table in R:
```
Sample Cluster CellType Condition Genotype Lane
Sample1 1 A Mut XXXX 1
Sample2 2 B Mut YYYY 1
Sample3 2 A Mut YYYY 2
Sample4 1 A Mut ZZZZ 1
Sample5 2 B Mut YYYY 3
Sample6 1 B Mut YYYY 1
Sample7 1 A Mut XXXX 2
```
I would like to:
* Aggregate the table by the Cluster column,
* Where each other column yields the dominant value which relates to the cluster
* As well as the "confidence level", as a percentage of dominance from the values related to the same cluster
Like so:
```
Cluster CellType Condition Genotype Lane
1 A (75%) Mut (100%) XXXX (50%) 1 (75%)
2 B (66%) Mut (100%) YYYY (100%) 1 (33%)
```
I've tried using the aggregate function as follows which yields close results, but it's not quite there yet:
```
Mode <- function(x) {
ux <- unique(x)
ux[which.max(tabulate(match(x, ux)))]
}
library(dplyr)
aggregate(. ~ Cluster, clustering_report, Mode)
```<issue_comment>username_1: ```r
library(dplyr)
df %>%
group_by(Cluster) %>%
summarise_at(vars(CellType:Lane), funs(val=names(which(table(.) == max(table(.)))[1]),
rate=(max(table(.))[1]/n())*100))
```
**Output is:**
```
Cluster CellType_val Condition_val Genotype_val Lane_val CellType_rate Condition_rate Genotype_rate Lane_rate
1 1 A Mut XXXX 1 75.0 100 50.0 75.0
2 2 B Mut YYYY 1 66.7 100 100 33.3
```
**Or maybe**
```
df %>%
group_by(Cluster) %>%
summarise_at(vars(CellType:Lane), funs(paste0(names(which(table(.) == max(table(.)))[1]),
" (",
rate=round((max(table(.))[1]/n())*100),
"%)")))
# Cluster CellType Condition Genotype Lane
#1 1 A (75%) Mut (100%) XXXX (50%) 1 (75%)
#2 2 B (67%) Mut (100%) YYYY (100%) 1 (33%)
```
**Sample data:**
```
df <- structure(list(Sample = c("Sample1", "Sample2", "Sample3", "Sample4",
"Sample5", "Sample6", "Sample7"), Cluster = c(1L, 2L, 2L, 1L,
2L, 1L, 1L), CellType = c("A", "B", "A", "A", "B", "B", "A"),
Condition = c("Mut", "Mut", "Mut", "Mut", "Mut", "Mut", "Mut"
), Genotype = c("XXXX", "YYYY", "YYYY", "ZZZZ", "YYYY", "YYYY",
"XXXX"), Lane = c(1L, 1L, 2L, 1L, 3L, 1L, 2L)), .Names = c("Sample",
"Cluster", "CellType", "Condition", "Genotype", "Lane"), class = "data.frame", row.names = c(NA,
-7L))
```
Upvotes: 2 <issue_comment>username_2: Here is a base R solution,
```
m1 <- do.call(rbind,
lapply(split(df, df$Cluster),
function(i) sapply(i[3:6],
function(j) {
t1 <- prop.table(table(j));
nms <- names(t1[which.max(t1)]);
paste0(nms, ' (' ,round(max(t1)*100), '%', ')')
})))
cbind.data.frame(unique(df[2]), m1)
```
which gives,
>
>
> ```
> Cluster CellType Condition Genotype Lane
> 1 1 A (75%) Mut (100%) XXXX (50%) 1 (75%)
> 2 2 B (67%) Mut (100%) YYYY (100%) 1 (33%)
>
> ```
>
>
Upvotes: 2
|
2018/03/16
| 183
| 707
|
<issue_start>username_0: How can we justify text in react native for both android and iOS without using webView, since webView is not visible in android after placing it inside another view.
The following option only works for iOS.
```
textAlign='justify'
```<issue_comment>username_1: Using webView resolved the issue. To show webView inside the ScrollView I just had to add height to webView.
<https://github.com/jsdf/react-native-htmlview/issues/7>
Upvotes: -1 <issue_comment>username_2: <https://facebook.github.io/react-native/docs/text.html#style>
As it is written here, justify for Android is not available... Most of the tricks suggest using `left` or else a webview, as you said...
Upvotes: 1
|
2018/03/16
| 375
| 1,345
|
<issue_start>username_0: I have deployed a Laravel 5.4 app on AWS Ubuntu 16.04 with Apache2, and I have created a task scheduler for sending emails `dailyAt('10:00')`.
When I run the artisan command `php artisan email:reminder` manually, everything works fine.
But when I run `php artisan schedule:run` I am getting `No scheduled commands are ready to run.`
I have also set up the cron entry `* * * * * php /var/www/html/app/artisan schedule:run >> /dev/null 2>&1`, referring to the documentation.
This is `Kernel.php`:
```
class Kernel extends ConsoleKernel
{
protected $commands = [
\App\Console\Commands\EmailReminder::class,
];
protected function schedule(Schedule $schedule)
{
$schedule->command('email:reminder --force')->dailyAt('10:00');
}
protected function commands()
{
$this->load(__DIR__.'/Commands');
require base_path('routes/console.php');
}
}
```
|
2018/03/16
| 994
| 3,722
|
<issue_start>username_0: I am using Apollo Client for the frontend and Graphcool for the backend. There are two queries, `firstQuery` and `secondQuery`, that I want to be called in sequence when the page opens. Here is the sample code (the definition of the TestPage component is not listed here):
```
export default compose(
graphql(firstQuery, {
name: 'firstQuery'
}),
graphql(secondQuery, {
name: 'secondQuery' ,
options: (ownProps) => ({
variables: {
var1: *getValueFromFirstQuery*
}
})
})
)(withRouter(TestPage))
```
I need to get `var1` in `secondQuery` from the result of `firstQuery`. How can I do that with Apollo Client and compose? Or is there any other way to do it? Thanks in advance.<issue_comment>username_1: The props added by your `firstQuery` component will be available to the component below (inside) it, so you can do something like:
```
export default compose(
graphql(firstQuery, {
name: 'firstQuery'
}),
graphql(secondQuery, {
name: 'secondQuery',
skip: ({ firstQuery }) => !firstQuery.data,
options: ({firstQuery}) => ({
variables: {
var1: firstQuery.data.someQuery.someValue
}
})
})
)(withRouter(TestPage))
```
Notice that we use `skip` to skip the second query unless we actually have data from the first query to work with.
### Using the Query Component
If you're using the `Query` component, you can also utilize the `skip` property, although you also have the option to return something else (like `null` or a loading indicator) inside the first render props function:
```
<Query query={firstQuery}>
  {({ data: { someQuery: { someValue } = {} } = {} }) => (
    <Query
      query={secondQuery}
      variables={{ var1: someValue }}
      skip={someValue === undefined}
    >
      {({ data: secondQueryData }) => (
        // your component here
      )}
    </Query>
  )}
</Query>
```
### Using the useQuery Hook
You can also use `skip` with the `useQuery` hook:
```
const { data: { someQuery: { someValue } = {} } = {} } = useQuery(firstQuery)
const variables = { var1: someValue }
const skip = someValue === undefined
const { data: secondQueryData } = useQuery(secondQuery, { variables, skip })
```
### Mutations
Unlike queries, mutations involve specifically calling a function in order to trigger the request. This function returns a Promise that will resolve with the results of the mutation. That means, when working with mutations, you can simply chain the resulting Promises:
```
const [doA] = useMutation(MUTATION_A)
const [doB] = useMutation(MUTATION_B)
// elsewhere
const { data: { someValue } } = await doA()
const { data: { someResult } } = await doB({ variables: { someValue } })
```
Upvotes: 7 [selected_answer]<issue_comment>username_2: For anyone using [react apollo hooks](https://www.npmjs.com/package/@apollo/react-hooks) the same approach works.
You can use two `useQuery` hooks and pass in the result of the first query into the `skip` `option` of the second,
example code:
```js
const AlertToolbar = ({ alertUid }: AlertToolbarProps) => {
const authenticationToken = useSelectAuthenticationToken()
const { data: data1 } = useQuery(query, {
skip: !authenticationToken,
variables: {
alertUid,
},
context: makeContext(authenticationToken),
})
const { data: data2, error: error2 } = useQuery(query2, {
skip:
!authenticationToken ||
!data1 ||
!data1.alertOverview ||
!data1.alertOverview.deviceId,
variables: {
deviceId:
data1 && data1.alertOverview ? data1.alertOverview.deviceId : null,
},
context: makeContext(authenticationToken),
})
if (error2 || !data2 || !data2.deviceById || !data2.deviceById.id) {
return null
}
const { deviceById: device } = data2
return (
...
// do some stuff here with data12
```
Upvotes: 3
|
2018/03/16
| 538
| 1,379
|
<issue_start>username_0: In my limited experience with `python & numpy`, I have been searching for a long time on the net, but with no luck. Please help or try to give some ideas on how to achieve this.
```
A=[3,-1, 4]
B = array([1,1,1],[1,-1,1],[1,1,-1])
The most close one in B is [1, -1, 1]
```
1. the weight of positive and negative > close of (A, B)
2. find the most close one in B (all the same Pos or Neg)
`B1 = array([1,1,1], [1,-1,1], [1,1, -1], [3,1,4])`
`The result is [1,-1,1]`
After searching around for a decent XX solution, I found that everything out there was difficult to use.
Thanks in advance.<issue_comment>username_1: One possible way:
```
A = np.array([3,-1, 4])
B = np.array([[1,1,1],[1,-1,1],[1,1,-1]])
# distances array-wise
np.abs(B - A)
# sum of absolute values of distances (smallest is closest)
np.sum(np.abs(B - A), axis=1)
# index of smallest (in this case index 1)
np.argmin(np.sum(np.abs(B - A), axis=1))
# all in one line (take array 1 from B)
result = B[np.argmin(np.sum(np.abs(B - A), axis=1))]
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Try this,
```
import numpy as np
A=np.array([3,-1, 4])
B =np.array([[1,1,1],[1,-1,1],[1,1,-1]])
x=np.inf
for val in B:
    if (x > (np.absolute(A - val)).sum()) and ((np.sign(A) == np.sign(val)).all()):
        x = (np.absolute(A - val)).sum()
        y = val
print x
print y
```
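The same idea (L1 distance plus matching signs) can also be written without an explicit Python loop; the arrays below are the sample data from the question:

```python
import numpy as np

A = np.array([3, -1, 4])
B = np.array([[1, 1, 1], [1, -1, 1], [1, 1, -1]])

# L1 distance from every row of B to A
dist = np.sum(np.abs(B - A), axis=1)
# keep only rows whose signs all agree with A's signs
mask = np.all(np.sign(B) == np.sign(A), axis=1)
closest = B[np.argmin(np.where(mask, dist, np.inf))]
print(closest)  # [ 1 -1  1]
```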
Upvotes: 0
|
2018/03/16
| 494
| 1,329
|
<issue_start>username_0: Is it possible to bring the arrowhead to the front of the tooltip from the microtip library? I'm trying to add a shadow to the entire tooltip, but the head pointing down is at the back, so it gets blocked. I tried adding a background image to `::after`, but it's not working correctly.
Here's the code:
```
&[aria-label][role="tooltip"] {
--microtip-font-size: 15px;
&::after {
box-shadow: 5px 9px 45px 4px darkgrey;
border-radius: 7px;
border: 1px solid darkgrey;
background-color: white;
color: #000080;
}
}
```
|
2018/03/16
| 776
| 2,043
|
<issue_start>username_0: Background
----------
I have a string containing a JavaScript object. This string is not JSON-stringified (only a portion of it is).
I need to convert this string into an object so than I can use it.
Here is an example of such string:
```
`{method: 'POST', url: '/iot/pipe/', query: {}, body: { d: '{"n": "861359031087669", "b": 100, "v": "02.37", "t": 1, "d":[[1515070895,413973366,21717600,110,1,0],[1515070897,413975033,21719083,102,1,0]]}' }, headers: { host: 'node_session_iot', connection: 'close', 'content-length': '1219', accept: '*/*', 'user-agent': 'QUECTEL_MODULE', 'content-type': 'application/x-www-form-urlencoded' } }`
```
`JSON.parse`
------------
The string is not json stringified, so parse will fail.
`eval`
------
`eval` is evil. Never use it.
Solutions?
----------
I find it incredibly frustrating. I have the object right in front of me and I can't do a single thing with it.
What other options do I have to convert this string into an object?<issue_comment>username_1: Your request body is in correct JSON format.
So take the request body from the given string by using a regular expression or string functions, and use the `JSON.parse` method to get the object.
In this case your request body is
```
{"n": "861359031087669", "b": 100, "v": "02.37", "t": 1, "d":[[1515070895,413973366,21717600,110,1,0],[1515070897,413975033,21719083,102,1,0]]}
```
Please see the below snapshot of the `JSON.parse` function.
[](https://i.stack.imgur.com/kJbRs.png)
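One way to apply this idea, assuming (as in the sample above) that the single-quoted `d` payload never itself contains single quotes, is to pull it out with a regular expression and parse only that part; the shortened `raw` string here is adapted from the question:

```javascript
const raw = "{method: 'POST', url: '/iot/pipe/', body: { d: '{\"n\": \"861359031087669\", \"b\": 100}' } }";

// \b keeps the pattern from matching the "d:" at the end of "method:"
const match = raw.match(/\bd:\s*'([^']*)'/);
const payload = match ? JSON.parse(match[1]) : null;
console.log(payload.n, payload.b); // 861359031087669 100
```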
Upvotes: 0 <issue_comment>username_2: Well, I won't say it is a perfect solution and it is very example-specific, but the idea is to convert the string step by step into a JSON string.
Hope it works:
```
// wrap bare property names in double quotes
var y = x.replace(/(\w+)(\s*:)+/g, "\"$1\"$2");
// convert single quotes into double quotes
y = y.replace(/'/g, "\"");
// remove stray " around nested object literals
y = y.replace(/"\s*{/g, "{");
y = y.replace(/}"\s*/g, "}");
var yObj = JSON.parse(y);
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 1,014
| 3,635
|
<issue_start>username_0: I tried everything but cannot get a file upload to work.
I want it to upload to:
/var/www/mysite.com/uploads
Laravel is located at:
/var/www/mysite.com/admin/public/
Latest thing I tried was making a filesystem like this:
```
'uploads' => [
'driver' => 'local',
'root' => '/var/www/mysite.com/uploads'
]
```
I also tried
```
'uploads' => [
'driver' => 'local',
'root' => '../../uploads'
]
```
none of them did work.
Can anyone please tell me how I can upload files, outside of my Laravel directory to the directory I specified above?
EDIT: The error I receive:
League \ Flysystem \ Exception
Impossible to create the root directory "/var/www/mysite.com/uploads/image.jpg".
Also tried this:
```
'uploads' => [
'driver' => 'local',
'root' => dirname(dirname(dirname(__FILE__))).'/uploads'
]
```
with controller code:
```
$path = $request->file('image')->store(
'image.jpg', 'uploads'
);
```<issue_comment>username_1: How about this? It should even be OS-agnostic:
```
'uploads' => [
'driver' => 'local',
'root' => __DIR__ . '/../../uploads/'
]
```
Upvotes: 2 <issue_comment>username_2: For me the problem is solved in this way:
I've created a directory called `uploads` one level above the Laravel root directory.
After that, in `config/filesystem.php`, I added this piece of code:
```
'links' => [
public_path('storage') => storage_path('app/public'),
public_path('storage2') => __DIR__ . '/../../uploads',
],
```
and also,
```
'disks' => [
    'local' => [
        'driver' => 'local',
        'root' => storage_path('app'),
    ],
    'public' => [
        'driver' => 'local',
        'root' => storage_path('app/public'),
        'url' => env('APP_URL').'/storage',
        'visibility' => 'public',
    ],
    'public2' => [
        'driver' => 'local',
        'root' => __DIR__ . '/../../uploads',
        'visibility' => 'public',
    ],
    's3' => [
        'driver' => 's3',
        'key' => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'region' => env('AWS_DEFAULT_REGION'),
        'bucket' => env('AWS_BUCKET'),
        'url' => env('AWS_URL'),
        'endpoint' => env('AWS_ENDPOINT'),
        'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
    ],
],
```
and finally ran `php artisan storage:link` in the terminal.
---
Actually what it does is create a directory named `storage2` in the `public` directory; this directory will point to the `uploads` folder created outside of the Laravel root directory.
Please note that these lines of code create the `uploads` directory, and after that all images will be stored in this location.
```
'public2' => [
'driver' => 'local',
'root' => __DIR__ . '/../../uploads',
'visibility' => 'public',
],
```
---
For storing an uploaded image in the created `uploads` directory, I used this code:
```
if ($request->hasFile('cover')) {
    $filenameWithExt = $request->file('cover')->getClientOriginalName();
    $filename = pathinfo($filenameWithExt, PATHINFO_FILENAME);
    $extension = $request->file('cover')->getClientOriginalExtension();
    $fileNameToStore = $filename . '_' . uniqid() . '.' . $extension;
    $path = $request->file('cover')->storeAs('/image/blog', $fileNameToStore, 'public2');
}
```
and for displaying this image (in blade file):
```
|
```
Upvotes: 0
|
2018/03/16
| 1,022
| 3,716
|
<issue_start>username_0: I have two classes (Date and Employee).
Class Date doesn't have a constructor, but it has 3 variables with their setters.
Class Employee has a constructor where I initialized its variables. But I have to attach the 3 variables in Class Date to it.
I tried using Setter method, but when I run the code, it says that there's an error with:
```
hire_date.setDay(1);
hire_date.setMonth(1);
hire_date.setYear(2018);
```
//In class Employee
```
private String name;
private Date hire_date;
private double monthly_salary;

public Employee() {
    name = "Jody";
    hire_date.setDay(1);
    hire_date.setMonth(1);
    hire_date.setYear(2018);
    monthly_salary = 2000.0;
}
```
//In class Date
```
private int day;
private int month;
private int year;

public void setDay(int day) {
    this.day = day;
}

public void setMonth(int month) {
    this.month = month;
}

public void setYear(int year) {
    this.year = year;
}
```
|
2018/03/16
| 536
| 2,183
|
<issue_start>username_0: I'm building an 'accessibility bar' that will include functions such as; increase/decrease contrast, text-size, line-spacing etc.
Is there a default, standard, recommended or well known icon that could be used to show/hide this functionality?
I feel like the 'wheelchair' icon: [wheelchair accessibility icon](https://i.stack.imgur.com/UkF86.png) doesn't represent web accessibility very well..
Can someone suggest any widely adopted alternatives? What would you use?<issue_comment>username_1: You want an icon associated with some action that will unhide the toolbar? There isn't really a common icon for that. I agree that you should stay away from the wheelchair. I sometimes have to bump up my font size or increase contrast, especially in very bright areas, like outside in the sun, but I wouldn't look for such a feature under a wheelchair icon because I don't associate myself with that.
The Mac has an "Accessibility" or "Universal Access" icon in system preferences. It looks like a person with its arms outstretched.
The PC has "Ease of Access" icon in the control panel. I'm not sure how to describe it. Kind of a pie with 8 pieces, but the two pieces in the north-east corner are missing and are replaced with a down and right arrow forming a right angle.
Neither icon conveys accessibility to me. It's just a symbol you have to learn (if you have vision).
[](https://i.stack.imgur.com/S4RZK.jpg)
Upvotes: 1 <issue_comment>username_2: I would say, you need to go with the wheelchair symbol if you need a specific symbol.
But put that in a textual context too.
Like:
"Accessibility features" + Symbol
The wheelchair is not a fresh symbol, but most people with disabilities agree that it's universal and well known. Using the accessibility symbols from Mac or Windows is not recommended.
As suggested, a settings icon would be an option, but it depends on the context of the site itself. Will it interfere with other icons? To avoid confusion, always accompany the icon with text; never rely on visual icons alone to tell the whole purpose of the interaction.
Upvotes: 0
|
2018/03/16
| 408
| 1,550
|
<issue_start>username_0: I have a logger file... it will be around 1 GB in size.
I want to read data from it line by line till a certain keyword, "transitioning to different mode", is reached, and I want the timestamp for that as well.
I tried different code but it's giving me only the log header and a few lines from the log file (the file is in txt format).
Below is the code I tried to print the first 100 lines:
```
i = 1
with open(sFile) as fileobject:
    for line in fileobject:
        print line
        i = i + 1
        if i > 100:
            break
```
This prints only the header and a few other lines.
Could someone please help fix this?
Thanks<issue_comment>username_1: Is this what you want?
```
import time

start = time.time()
with open(sFile) as fileobject:
    for i, line in enumerate(fileobject):
        if line.strip() == "transitioning to different mode":  # or
        # if line.startswith("transitioning to different mode"):  # or
        # if "transitioning to different mode" in line:
            print(i)
            break
end = time.time()
print(end - start)
```
Upvotes: 0 <issue_comment>username_2: The use of `yield` will be memory efficient for your task while loading huge file:
```
def readfile(filename):
    with open(filename) as fileObject:
        for line in fileObject:
            yield line
    yield None  # signals the end of the file
```
and now consume the file lines as follows:
```
lines = readfile(filename)
line = next(lines)
while line:
    print(line)
    line = next(lines)
```
Upvotes: -1
|
2018/03/16
| 663
| 2,131
|
<issue_start>username_0: I have a field where I need to replace the comma `,` with a dot `.`
What is the best way to do this?
HTML form-
```
```
The jQuery (my first attempt)
```
jQuery(document).ready(function($){
    $("#field_ce5yi-other_2-otext").keyup(function(){
        $("#field_ce5yi-other_2-otext").text($("#field_ce5yi-other_2-otext").text().replace(','.''))
    });
});
```<issue_comment>username_1: In your code there are a couple of syntax mistakes but the main issue is you're using `text()` when you need to use `val()` with it being an input.
Here's a working example:
```js
jQuery(document).ready(function($){
    $("#field_ce5yi-other_2-otext").on('keyup',function(){
        $("#field_ce5yi-other_2-otext").val($("#field_ce5yi-other_2-otext").val().replace(',','.'))
    });
});
```
```html
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: fix the script:
use val() instead text()
```
jQuery(document).ready(function($){
    $("#field_ce5yi-other_2-otext").keyup(function(){
        $("#field_ce5yi-other_2-otext").val($("#field_ce5yi-other_2-otext").val().replace(',','.'));
    });
});
```
now it should work.
Upvotes: 0 <issue_comment>username_3: The syntax for replacing is a little different to what you had. Basically the following grabs the value of the input on the keyup, replacing all the instances of the comma (that's what the "g" does in the replace - g for global) with a dot and then resets the value to the new string. Note that the comma is escaped (that's what the backslash is for).
```
jQuery(document).ready(function($){
    $("#field_ce5yi-other_2-otext").keyup(function(){
        var enteredText = $(this).val();
        var newText = enteredText.replace(/\,/g,'.');
        $(this).val(newText);
    })
});
```
Upvotes: 0 <issue_comment>username_4: If you don't need to support ancient MSIE, simply do this:
```
$(function() {
    $("#field_ce5yi-other_2-otext").on('input', function() {
        var currValue = $(this).val().replace(/\,/g, '.');
        $(this).val(currValue);
    });
});
```
*(This will replace **all** occurrences of the `,` character to `.`)*
Upvotes: 0
|
2018/03/16
| 526
| 2,067
|
<issue_start>username_0: I am doing a huge project and I am having some issues with it.
Every time I click the login button, it takes a while to make the database connection, and if this button is an alert and you click it multiple times, it will show the alert multiple times as well.
This is the button:
```
this.login(this.state.email, this.state.password)
}>
Login
```
I would like to disable the button after clicking it so the alert error only appears once.
This is where I want to put the code to deactivate the button:
```
if (respo == 'Wrong name or password.') {
    alert("Wrong Username and Password.");
} else {
    if (respo.user.uid > 0) {
        if (Object.values(respo.user.roles).indexOf("student user") > -1) {
            AsyncStorage.setItem("u_uid", respo.user.uid);
            alert("Login Successfull.");
            Actions.home();
        } else {
            alert("Only Student user is allowed to login.");
        }
    }
}
```
Thank you!<issue_comment>username_1: Well, the simplest logic could be:
1. Set up a variable that will track whether the login process is in progress, like
`var loginInProcess = false`. This should probably be set in the state of some parent component that tracks the state of your application, but let's keep it simple.
2. In the `onPress` event, set the value `loginInProcess = true`
3. Execute the login actions conditionally, only if login is not in process:
For example:
```
onPress={() => {
    if (!loginInProcess) {
        loginInProcess = true;
        this.login(this.state.email, this.state.password)
    } else {
        /* Login in process, do something else */
    }
}}>
```
4. If login fails *(your second block of code)*, reset the variable: `loginInProcess = false` to be able to try and "login" again.
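A sketch of how step 4 could be wired into the response handling from the question (the helper name is hypothetical, and it returns the alert messages instead of calling `alert` so it can run outside React Native; `loginInProcess` is the flag from step 2):

```javascript
let loginInProcess = true; // set to true by the onPress handler before login

// Hypothetical helper: every failure path resets the flag so the button can
// be pressed again; the success path navigates away, so no reset is needed.
function handleLoginResponse(respo) {
  if (respo === 'Wrong name or password.') {
    loginInProcess = false;
    return 'Wrong Username and Password.';
  }
  if (respo.user.uid > 0) {
    if (Object.values(respo.user.roles).indexOf('student user') > -1) {
      return 'Login Successfull.';
    }
    loginInProcess = false;
    return 'Only Student user is allowed to login.';
  }
}

console.log(handleLoginResponse('Wrong name or password.')); // flag is reset here
```

In a real app the reset would sit next to your existing `alert(...)` calls.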
Upvotes: 3 [selected_answer]<issue_comment>username_2: I have a good solution for this; you can use this to disable the button:
```
Submit
```
The `activeOpacity` and `disabled` props are what do the work here.
Upvotes: 2
|
2018/03/16
| 1,321
| 4,246
|
<issue_start>username_0: ```
#include <stdio.h>
#include <stdlib.h>
#define LEN 5
#define sum_mac(func,type)\
void* func (void* arr, int size)\
{\
    int i;\
    type *sum = (type*)malloc(sizeof(type));\
    if(sum == NULL)\
    {\
        printf("Error\n");\
        exit(1);\
    }\
    *sum = 0;\
    for(i = 0; i < size; i++)\
        *sum = *sum + *((type*)arr[i]);\
    return sum;\
}\
sum_mac(int_sum,int);

void *summary(void* arr, int size, void *(*func)(void*, int))
{
    /* ... */
}

int main(void)
{
    int arr[LEN];
    /* ... */
    int *sum = summary(arr, LEN, int_sum(arr, LEN));
    /* ... */
}
```
I get the following error when I try to compile this code:
>
> Error LNK2019 unresolved external symbol _int_sum referenced in function _main
>
>
>
When I searched possible causes for this problem I got that *"a reference to a function or variable that the linker can't resolve, or find a definition for"*.
Can someone help me find the problem.<issue_comment>username_1: Problem is here:
```
return sum;\
}\ <---------- Remove the \
sum_mac(int_sum,int); <---- also remove the ; because this is not a statement
void *summary(void* arr, int size, void *(*func)(void*, int))
```
Compiler thinks that `sum_mac(int_sum,int);` is part of the macro definition because `\` at the end of line concatenates lines. That is why `sum_mac(int_sum,int);` is never called.
However, this reveals another problem with operator precendence on line:
```
*sum = *sum + *((type*)arr[i]);\
```
Array access `[i]` has higher precedence than cast `(type*)`, so you are trying to access `void` array which won't work. Also the last dereference is pointless. Line should be changed to:
```
*sum = *sum + ((type*)arr)[i];\
```
There is also third problem: You are passing calling `int_sum` too early. Function expects function pointer so you should only pass the pointer:
```
int *sum = summary(arr, LEN, int_sum); // Only pass int_sum
```
You should make sure that you have enabled all warnings on the compiler, since this is an error that compiler could warn you about.
Upvotes: 1 <issue_comment>username_2: There are (at least) three problems:
You included the macro instantiation in the macro itself; change:
```
}\
sum_mac(int_sum,int);
```
to:
```
}
sum_mac(int_sum,int);
```
Thus `sum_mac` is not part of the macro.
In the macro definition, change:
```
*sum = *sum + *((type*)arr[i]);
```
to:
```
*sum = *sum + ((type*)arr)[i];
```
In the first case, you try to use indexing on void pointer type, which is not possible (void has no size). So convert `arr` to pointer of the right type and use arithmetic on it.
**EDIT:**
Change:
```
int *sum = summary(arr, LEN, int_sum(arr, LEN));
```
to
```
int *sum = summary(arr, LEN, int_sum);
```
In the first case you call `summary` with the third parameter value being the result of a call to `int_sum`, and that result is not a function pointer but a pointer to some `int`. You need to pass the function pointer.
Most of your problems are due to macro usage. This bypasses the type system and fools the compiler, which is never a good idea.
Upvotes: 2 <issue_comment>username_3: The other problem is here:
```
int *sum = summary(arr, LEN, int_sum(arr, LEN));
```
The third argument of `summary` should be a *function pointer*, but `int_sum(arr, LEN)` is not a function pointer but it is the result of the `int_sum` function.
You need to write this:
```
int *sum = summary(arr, LEN, int_sum);
```
Upvotes: 0 <issue_comment>username_4: You don't declare the function you call anywhere, so the linker doesn't know what to do.
That being said, putting whole functions inside macros is not a good idea, nor to malloc single items. And the code `*sum = *sum + *((type*)arr[i]);` goes wildly out of bounds, because you didn't allocate an array, but use it as if it was one.
I would rewrite this from scratch. This is what you can do instead:
```
#include <stdio.h>
#include <stddef.h>

int int_sum (size_t size, int arr[size])
{
    int sum = 0;
    for(size_t i=0; i<size; i++)
    {
        sum += arr[i];
    }
    return sum;
}
/* ...plus a float_sum with the same shape, used by the _Generic example below... */
```
Which can be made even more generic, even without involving function pointers:
```
#define do_stuff(arr, stuff) _Generic((arr[0]), \
    int: int_##sum, \
    float: float_##sum) (sizeof(arr)/sizeof(arr[0]), arr)

int main (void)
{
    int arr[] = { 1,2,3,4,5 };
    int sum = do_stuff(arr, sum);
    printf("int, the sum is: %d\n", sum);
    float farr[] = {1.0f, 2.0f, 3.0f};
    printf("float, the sum is: %f\n", do_stuff(farr, sum));
    return 0;
}
```
Upvotes: 0
|
2018/03/16
| 1,651
| 5,625
|
<issue_start>username_0: This question is how to really implement the read method of a readable stream.
I have this implementation of a Readable stream:
```
import {Readable} from "stream";
this.readableStream = new Readable();
```
I am getting this error
>
> events.js:136
> throw er; // Unhandled 'error' event
> ^
>
>
> Error [ERR_STREAM_READ_NOT_IMPLEMENTED]: _read() is not implemented
> at Readable._read (_stream_readable.js:554:22)
> at Readable.read (_stream_readable.js:445:10)
> at resume_ (_stream_readable.js:825:12)
> at _combinedTickCallback (internal/process/next_tick.js:138:11)
> at process._tickCallback (internal/process/next_tick.js:180:9)
> at Function.Module.runMain (module.js:684:11)
> at startup (bootstrap_node.js:191:16)
> at bootstrap_node.js:613:3
>
>
>
The reason the error occurs is obvious, we need to do this:
```
this.readableStream = new Readable({
    read(size) {
        return true;
    }
});
```
I don't really understand how to implement the read method though.
The only thing that works is just calling
```
this.readableStream.push('some string or buffer');
```
if I try to do something like this:
```
this.readableStream = new Readable({
    read(size) {
        this.push('foo'); // call push here!
        return true;
    }
});
```
then nothing happens - nothing comes out of the readable!
Furthermore, these articles say you don't need to implement the read method:
<https://github.com/substack/stream-handbook#creating-a-readable-stream>
<https://medium.freecodecamp.org/node-js-streams-everything-you-need-to-know-c9141306be93>
**My question is** - why does calling push inside the read method do nothing? The only thing that works for me is just calling readable.push() elsewhere.<issue_comment>username_1: >
> why does calling push inside the read method do nothing? The only thing that works for me is just calling readable.push() elsewhere.
>
>
>
I think it's because you are not consuming it; you need to pipe it to a writable stream (e.g. stdout) or just consume it through a `data` event:
```
const { Readable } = require("stream");
let count = 0;
const readableStream = new Readable({
    read(size) {
        this.push('foo');
        if (count === 5) this.push(null);
        count++;
    }
});
// piping
readableStream.pipe(process.stdout)
// through the data event
readableStream.on('data', (chunk) => {
console.log(chunk.toString());
});
```
Both of them should print 5 times `foo` (they are slightly different though). Which one you should use depends on what you are trying to accomplish.
>
> Furthermore, these articles says you don't need to implement the read method:
>
>
>
You might not need it, this should work:
```
const { Readable } = require("stream");
const readableStream = new Readable();
for (let i = 0; i <= 5; i++) {
readableStream.push('foo');
}
readableStream.push(null);
readableStream.pipe(process.stdout)
```
In this case you can't capture it through the `data` event. Also, this way is not very useful and not efficient, I'd say: we are just pushing all the data into the stream at once (if it's large, everything is going to be in memory), and then consuming it.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Implement the `_read` method after your ReadableStream's initialization:
```
import {Readable} from "stream";
this.readableStream = new Readable();
this.readableStream._read = function () {};
```
Upvotes: 1 <issue_comment>username_3: From documentation:
**readable._read:**
"When readable._read() is called, if data is available from the resource, the implementation should begin pushing that data into the read queue using the this.push(dataChunk) method. [link](https://nodejs.org/api/stream.html#stream_readable_read_size_1)"
**readable.push:**
"The readable.push() method is intended to be called only by Readable implementers, and only from within the readable._read() method. [link](https://nodejs.org/api/stream.html#stream_readable_push_chunk_encoding)"
Upvotes: 3 <issue_comment>username_4: readableStream is like a pool:
* `.push(data)` is like pumping water into the pool.
* `.pipe(destination)` is like connecting the pool to a pipe that pumps the water somewhere else.
* `_read(size)` runs as the pump and controls how much water flows and when the data ends.

fs.createReadStream() will create a read stream whose `_read()` function has been auto-implemented to push file data and end at the end of the file.
`_read(size)` fires automatically when the pool is attached to a pipe. Thus, if you force-call this function without connecting a destination, it will pump to nowhere, and that affects the machine state inside `_read()` (the cursor may move to the wrong place, ...).
The `read()` function must be created inside `new Stream.Readable()`. It's actually a function inside an options object. It's not `readableStream.read()`, and implementing `readableStream.read = function(size){...}` will not work.
An easy way to understand the implementation:
```
var Reader = new Object();
Reader.read = function(size) {
    if (this.i == null) { this.i = 1; } else { this.i++; }
    this.push("abc");
    if (this.i > 7) { this.push(null); }
}
const Stream = require('stream');
const renderStream = new Stream.Readable(Reader);
renderStream.pipe(process.stdout)
```
You can use it to render whatever stream data you want to POST to another server.
POST stream data with Axios:
```
require('axios')({
    method: 'POST',
    url: 'http://127.0.0.1:3000',
    headers: {'Content-Length': 1000000000000},
    data: renderStream
});
```
Upvotes: 1
|
2018/03/16
| 1,754
| 5,834
|
<issue_start>username_0: My google cloud bucket has two folders **train** and **test** under **gs://MYBucket/demo2/TFRecords/** which contain TFRecords of image embeddings. I pass the url of my bucket to the program as an argument.
In the terminal I type the below to submit the job:
```
$training='gs://MYBucket/demo2/'
$gcloud ml-engine jobs submit training $JOB_NAME \
--job-dir $JOB_DIR \
--module-name $TRAINER_MODULE \
--package-path $TRAINER_PATH \
--region $REGION \
-- \
--train-files $training
```
Here **training** contains the address to my bucket
**My code** :
```
from tensorflow.python.lib.io import file_io
from StringIO import StringIO
BUCKET=None
DATA_DIR = "TFRecords/train/"
~Some code~
def Load_input():
    global BUCKET
    filenames = [os.path.join(BUCKET+DATA_DIR, "train-0000%d-of-00002.tfrecord" % i) for i in xrange(0, 1)]
    for f in filenames:
        if not tf.gfile.Exists(f):
            raise ValueError("Failed to find file: " + f)
    filename_queue = tf.train.string_input_producer(filenames)
~Some code~
def main(unused_args):
    parser.add_argument('--train-files', help='BUCKET path to training data', nargs='+', required=True)
    args = parser.parse_args()
    global BUCKET
    BUCKET = StringIO(file_io.read_file_to_string(args.__dict__['train_files'][0]))
~Some other code that internally calls **Load_input()**~
```
The line :
```
filenames = [os.path.join(BUCKET+DATA_DIR, "train-0000%d-of-00002.tfrecord" % i) for i in xrange(0, 1)]
```
throws error :
>
> TypeError: unsupported operand type(s) for +: 'instance' and 'str'
>
>
>
what I have tried :
```
BUCKET = file_io.read_file_to_string(args.train_files[0])
```
but it raises error:
```
raise ValueError("Failed to find file: " + f)
ERROR 2018-03-16 15:57:55 +0530 master-replica-0
ValueError: Failed to find file: TFRecords/train/train-00000-of-00002.tfrecord
```
**My Question :**
How am I supposed to join BUCKET and DATA_DIR in order to provide the path to the training files correctly?
|
2018/03/16
| 1,224
| 4,506
|
<issue_start>username_0: Why do we have different methods for HTTP requests? GET cannot send data (a body) and can only request something through URL params. PUT is like changing a field value on the server, and POST performs an operation every time we execute it (the concept of idempotence).
Why can't there be one method, or no method at all?
Why can't there be just a simple HTTP request where we can send data if we want? It would then depend on the server logic how to process that request based on the request content (data/body): just execute it like a GET request and return something, or perform an operation like a POST request and return something. It would have been simpler.
|