| text (string, 20 – 1.01M chars) | url (string, 14 – 1.25k chars) | dump (string, 9 – 15 chars, may be null) | lang (4 classes) | source (4 classes) |
|---|---|---|---|---|
Type for MAT-file
A handle to a MAT-file object. A MAT-file is the data file format MATLAB® software uses for saving data to your disk.
MATFile is a C language opaque type.
The MAT-file interface library contains routines for reading and writing MAT-files. Call these routines from your own C/C++ and Fortran programs, using MATFile to access your data file.
The header file containing this type is:
#include "mat.h"
See the following examples in matlabroot/extern/examples/eng_mat.
See also: matOpen, matClose, matPutVariable, matGetVariable, mxDestroyArray
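As a minimal sketch of how these routines fit together (the file name "data.mat" and variable name "X" are placeholders, not part of the shipped examples):
#include <stdio.h>
#include "mat.h"

int main(void)
{
    MATFile *pmat = matOpen("data.mat", "r");      /* open the MAT-file for reading */
    if (pmat == NULL) {
        printf("Error opening data.mat\n");
        return 1;
    }

    mxArray *arr = matGetVariable(pmat, "X");      /* read the variable named "X" */
    if (arr != NULL) {
        printf("X has %zu elements\n", mxGetNumberOfElements(arr));
        mxDestroyArray(arr);                       /* free the mxArray */
    }

    matClose(pmat);                                /* close the MAT-file */
    return 0;
}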
|
http://www.mathworks.co.uk/help/matlab/apiref/matfileapi.html?nocookie=true
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Hi,
I want to use D3DX8 and D3DX9 (Sprite and Font class) in one project. Using D3D8 and D3D9 together is not a problem, but the X variants have the same function and class names for DX8 and DX9. This causes problems while executing the (test) program.
//D3D8 part LPDIRECT3DDEVICE8 device; .... ID3DXSprite *sprite; D3DXCreateSprite(device, &sprite); sprite->Release(); //and D3D9 part in another file LPDIRECT3DDEVICE9 device; .... ID3DXSprite *sprite; D3DXCreateSprite(device, &sprite); sprite->Release();
The D3D8 part will run fine but the D3D9 part crashes in the D3DXCreateSprite method. I checked the generated assembler code and it turns out that both D3DXCreateSprite calls resolve to the same code, so the program tries to initialize the D3DX9 sprite with the D3DX8 sprite code, which can't work.
I hoped something like this could work but for sure it does not:
namespace D3D8 { #include <d3dx8.h> #pragma comment(lib, "d3d8.lib") #pragma comment(lib, "d3dx8.lib") } D3D8::ID3DXSprite *sprite; D3D8::D3DXCreateSprite(device, &sprite); sprite->Release();
The compiler doesn't create separate methods.
Does anyone have an idea how I can include both DX versions in one project? One solution could be to dynamically load the d3dx_....dll and use GetProcAddress for D3DXCreateSprite, but I hope there is a "built-in" solution.
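Something like this is roughly what I have in mind for the dynamic-loading route (untested sketch; the D3DX9 DLL name depends on the SDK release, d3dx9_43.dll is assumed here, and this would live in a file that only includes d3dx9.h):
#include <windows.h>
#include <d3d9.h>
#include <d3dx9.h>

// Function-pointer type matching the D3DX9 version of D3DXCreateSprite
typedef HRESULT (WINAPI *CreateSprite9Fn)(LPDIRECT3DDEVICE9, LPD3DXSPRITE*);

HRESULT CreateSprite9(LPDIRECT3DDEVICE9 device, LPD3DXSPRITE *sprite)
{
    // Resolve the D3DX9 export at runtime instead of linking d3dx9.lib,
    // so it can't collide with the statically linked D3DX8 import.
    HMODULE d3dx9 = LoadLibraryA("d3dx9_43.dll");   // assumed DLL version
    if (!d3dx9)
        return E_FAIL;

    CreateSprite9Fn create =
        (CreateSprite9Fn)GetProcAddress(d3dx9, "D3DXCreateSprite");
    if (!create)
        return E_FAIL;

    return create(device, sprite);
}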
Thanks for your help!
|
http://www.gamedev.net/topic/644982-d3dx8-and-d3dx9-in-one-project/
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Ok, I tried to filter the word set before going into loops. This appears to reduce the search space by a factor of 30.
Run time is down to ~5 minutes.
Word counts for 2of12inf.txt, along with word "power":
============================
WORDS COMBINATIONS
len:number k:number
----------------------------
2: 62 6: 736281
3: 642 5: 142506
4: 2546 4: 23751
5: 5122 3: 3276
6: 8303 2: 351
7: 11571 1: 26
8: 12687 0: 1
============================
Expected hits: 12687*1 + 11571*26 + 8303*351 + 5122*3276 + 2546*23751 + 642*142506 + 62*736281 = 217615878
Counted hits (with filtering): 217615878
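(For reference, the COMBINATIONS column is the number of multisets of k letters from the 26-letter alphabet, i.e. combinations with repetition: C(26+k-1, k). For example C(27, 2) = 351 and C(31, 6) = 736281, matching the table and the @C array in the code below.)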
Update2: added coverage check
#! /usr/bin/perl
use threads;
use threads::shared;
use Thread::Semaphore;
use Config;
use if $Config{longsize} >= 8, "integer";
my $HITS :shared = 0;
my $TOTAL;
my $tresh = 0;
my (%P, %N, %RES);
sub x2 { map { my $t = $_; map {$t.$_} @_ } @_ }
sub wfilt { tr/a-z//cd; length() <= 8; }
sub wv { my $x = 1; $x *= $P{$_} for split //; $x }
@P{'a'..'z'} = grep {(1x$_)!~/^(11+)\1+$/} 2..101; # Primes
# combinations with repetition over 'a'..'z':
my @C = ( 1, 26, 351, 3276, 23751, 142506, 736281, 3365856, 13884156 );
open(WORDS, "words") or die;
my @words = grep wfilt, <WORDS>;
$N{wv()}++ for @words;
$TOTAL += $C[8-length] for @words;
my $SEM = Thread::Semaphore->new(8); # 8 threads
for ('a'..'z') {
$SEM->down();
report(0, map {$_->join()} threads->list(threads::joinable));
()=threads->new(sub {&worker, ()=$SEM->up()}, $_);
}
report(0, map {$_->join()} threads->list());
sub worker {
my ($pivot) = @_; # aaaPzzzz
my (%A, %Z);
$A{wv()} //= $_ for grep s/.$/$pivot/, x2(x2('a'..$pivot));
$Z{wv()} //= $_ for x2(x2($pivot..'z'));
my $aaa = sub { join '', /[^$pivot-z]/g };
my $zzzz = sub { join '', /[^a-$pivot]/g };
# map full wv to just the aaa factors:
my %Va = map {wv} map {$_ => &$aaa}
grep {length &$aaa < 4 and length &$zzzz < 5} @words;
for my $a (keys %A) {
my @V = grep {$a % $Va{$_} == 0} keys %Va;
my ($hits, @R);
for my $z (keys %Z) {
my ($v, $n) = ($a*$z, 0);
$v % $_ or $n += $N{$_} for @V;
$hits += $n;
push @R, ($A{$a}.$Z{$z} => $n) if ($n > $tresh);
}
report($hits, @R);
}
return (%RES);
}
sub report {
lock($HITS); $HITS += shift;
return unless @_;
%RES = (%RES, @_);
my @top = sort { $RES{$b} <=> $RES{$a} } keys %RES;
($tresh) = delete @RES{splice(@top, 20)} if @top > 20;
print "$_: $RES{$_}\n" for @top;
no integer;
printf "! coverage %s/%s (% 3.1f%%)\n@{['-'x40]}\n",
$HITS, $TOTAL, 100.0*$HITS/$TOTAL;
}
In reply to Re^2: Challenge: 8 Letters, Most Words
by oiskuu
in thread Challenge: 8 Letters, Most Words
|
http://www.perlmonks.org/index.pl?parent=1057558;node_id=3333
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
#include <stdlib.h>
These constants define the maximum length for the path and for the individual fields within the path.
_MAX_DIR
Maximum length of directory component
_MAX_DRIVE
Maximum length of drive component
_MAX_EXT
Maximum length of extension component
_MAX_FNAME
Maximum length of filename component
_MAX_PATH
Maximum length of full path
Note: The C Runtime supports path lengths up to 32768 characters, but it is up to the operating system, and specifically the file system, to support these longer paths. The sum of the fields should not exceed _MAX_PATH for full backwards compatibility with Windows 98 FAT32 file systems. Windows NT 4.0, Windows 2000, Windows XP Home Edition, Windows XP Professional, Windows Server 2003, and Windows Server 2003 with the NTFS file system support paths up to 32768 characters, but only when using the Unicode APIs. When using long path names, prefix the path with the characters \\?\ and use the Unicode versions of the C Runtime functions.
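For illustration, a small sketch of how these constants are typically used to size the buffers passed to the CRT path-splitting functions (the path string here is just an example):
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    char drive[_MAX_DRIVE], dir[_MAX_DIR], fname[_MAX_FNAME], ext[_MAX_EXT];

    /* Split an example path into its components, sizing each buffer
       with the corresponding _MAX_* constant. */
    errno_t err = _splitpath_s("C:\\temp\\report.txt",
                               drive, _MAX_DRIVE,
                               dir,   _MAX_DIR,
                               fname, _MAX_FNAME,
                               ext,   _MAX_EXT);
    if (err == 0)
        printf("drive=%s dir=%s name=%s ext=%s\n", drive, dir, fname, ext);
    return 0;
}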
|
http://msdn.microsoft.com/en-us/library/930f87yf(d=printer,v=vs.80).aspx
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
27 June 2012 19:58 [Source: ICIS news]
HOUSTON (ICIS)--US chemical producers must focus their business on exports to new growth markets in order to benefit from highly competitive feedstocks in the wake of the country’s shale gas boom, consultant and auditor KPMG said on Wednesday.
If US producers, who traditionally rely on the domestic market, fail to shift to an export-led business model, the nation's chemical industry “is destined to fall back into the historic cycle of oversupply followed by rationalisation”, KPMG warned in a report titled “The Future of the US Chemical Industry”.
KMPG said that shale gas made US-based chemicals production highly competitive, marking a "dramatic change" in outlook for US-based producers.
However, the industry faces the risk of “exponential addition of new capacity”, leading to an oversupply that outstrips demand on the mature US market, “returning the industry to the cyclicality that was such a problem in the past”, the report said.
"The opening up of many emerging markets to import growth can be a slow and complex process, and US chemical companies need to take actions today that will guarantee markets for products to be produced in four or five years time," said Mike Shannon, global and US leader of KPMG's chemicals practice.
According to ICIS, US-based producers - prompted by shale gas - are planning or considering new cracker projects that could, if realised, add 8.89m tonnes/year of ethylene capacity, or 33.4% of existing capacity.
|
http://www.icis.com/Articles/2012/06/27/9573308/us-chems-must-pursue-exports-to-avoid-over-capacity-consultant.html
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Rewarded (or "reward-based") video ads are full screen video ads that users have the option of watching in full in exchange for in-app rewards.
This codelab will walk you through integrating rewarded video ads into an existing Android mobile app, covering both the design considerations and the implementation details needed to follow best practices for rewarded video ads.
What you will build
What you'll learn
- How to request rewarded video ads
- How to display rewarded video ads
- How to reward users for viewing ad content
What you'll need
- A recent version of Android Studio
- The Google Play services SDK from the Google Repository (available in the Android SDK Manager)
- The sample code
- Basic knowledge of Android development.
Download the Code
Follow the steps listed below to download all the code for this codelab:
This will unpack a root folder (admob-rewarded-video-master), which contains a directory specific to either Android or iOS. For this codelab you'll navigate to the Android directory. The Android directory contains the start state for this codelab, located in the work directory. The end state for this codelab is located in the final directory. Both work and final directories contain an Android project within a subdirectory named RewardedVideoExample.
You'll do all of your coding in this codelab in Android project located within the work directory. If at any time you're having trouble, you can refer to the Android project located under the final directory.
Open the work project in Android Studio
- Open Android Studio and select Import Project.
- Navigate to the RewardedVideoExample project under the work directory and select it.
- Click OK
You should now have the project open in Android Studio.
The first step to adding the Firebase SDK to your Android project is including a Firebase configuration file within your app. Normally, you would download a google-services.json file from the Firebase console and include it in the app/ directory of your app. For convenience, a sample google-services.json file has already been included in this project.
Next, import the Firebase Ads SDK by adding the following rules to your project-level build.gradle file to include the google-services plugin:
build.gradle – RewardedVideoExample/
buildscript { ... dependencies { ... classpath 'com.google.gms:google-services:3.0.0' } }
You will also add the two lines shown below to your app-level build.gradle file. Place the compile statement inside the dependencies section and the apply plugin statement at the bottom.
build.gradle – RewardedVideoExample/app/
... dependencies { ... compile 'com.google.firebase:firebase-ads:10.2.4' } ... apply plugin: 'com.google.gms.google-services'
- If you see a warning message across the top of the Android Studio window indicating that Gradle needs to perform a sync, click Sync Now. Gradle refreshes your project's libraries to include the dependency you just added.
- If you see a message asking you to install the Google Repository, just agree to the install and have Android Studio take care of the download for you. The Google Repository contains code for Gradle to incorporate.
- Once your build.gradle files are modified and everything has synced, try rebuilding your project (Run app in the Run menu) to make sure it compiles correctly.
You won't see any changes, but including Firebase and the Mobile Ads SDK is the first step toward getting rewarded video ads into your app.
Before loading ads, your app will need to initialize the Mobile Ads SDK by calling
MobileAds.initialize() with your AdMob App ID. This only needs to be done once, ideally at app launch. You can find your app's App ID in the AdMob UI. For this codelab, you will use the test app ID value of
ca-app-pub-3940256099942544~3347511713. Following Android best practices, this value should be mapped to the strings resource file located at
RewardedVideoExample/app/src/main/res/values/strings.xml of your project. Add a new entry to this file for the AdMob App ID, as shown below.
strings.xml
<resources> ... <string name="admob_app_id">ca-app-pub-3940256099942544~3347511713</string> ... </resources>
Add the call to
initialize(), shown below, to the
onCreate() method of the
MainActivity class to perform Google Mobile Ads SDK initialization.
MainActivity.java
import com.google.android.gms.ads.MobileAds; ... public class MainActivity extends Activity { private RewardedVideoAd mAd; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // Initialize the Google Mobile Ads SDK MobileAds.initialize(getApplicationContext(), getString(R.string.admob_app_id)); ... } ... }
Before going any further, a
RewardedVideoAd object is required. The singleton
RewardedVideoAd object instance can be retrieved using the
MobileAds.getRewardedVideoAdInstance() method. Add a call to this method in the
onCreate() method of the
MainActivity class and save the reference to a private instance variable.
MainActivity.java
import com.google.android.gms.ads.MobileAds; import com.google.android.gms.ads.reward.RewardedVideoAd; ... public class MainActivity extends Activity { ... private RewardedVideoAd mRewardedVideoAd; ... @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); MobileAds.initialize(getApplicationContext(), getString(R.string.admob_app_id)); // Get reference to singleton RewardedVideoAd object mRewardedVideoAd = MobileAds.getRewardedVideoAdInstance(this); } ... }
The
RewardedVideoAd object requires notifications of the parent Activity's lifecycle events. To forward these events, call the
RewardedVideoAd object's
resume(),
pause(), and
destroy() methods in the parent Activity's
onResume(),
onPause(), and
onDestroy() methods respectively. Update these methods in the
MainActivity class as shown below.
MainActivity.java
@Override public void onResume() { mRewardedVideoAd.resume(this); super.onResume(); ... } @Override public void onPause() { mRewardedVideoAd.pause(this); super.onPause(); ... } @Override public void onDestroy() { mRewardedVideoAd.destroy(this); super.onDestroy(); ... }
The
RewardedVideoAdListener notifies you of rewarded video lifecycle events. You are required to set the listener before loading an ad. The most important event in this listener is
onRewarded(), which is called when the user should be rewarded for watching a video.
Add the code shown below, to the
MainActivity class, to set a
RewardedVideoAdListener on the
RewardedVideoAd.
MainActivity.java
... import android.widget.Toast; ... import com.google.android.gms.ads.reward.RewardItem; import com.google.android.gms.ads.reward.RewardedVideoAd; import com.google.android.gms.ads.reward.RewardedVideoAdListener; ... @Override protected void onCreate(Bundle savedInstanceState) { ... mRewardedVideoAd.setRewardedVideoAdListener(new RewardedVideoAdListener() { @Override public void onRewardedVideoAdLoaded() { Toast.makeText(getBaseContext(), "Ad loaded.", Toast.LENGTH_SHORT).show(); } @Override public void onRewardedVideoAdOpened() { Toast.makeText(getBaseContext(), "Ad opened.", Toast.LENGTH_SHORT).show(); } @Override public void onRewardedVideoStarted() { Toast.makeText(getBaseContext(), "Ad started.", Toast.LENGTH_SHORT).show(); } @Override public void onRewardedVideoAdClosed() { Toast.makeText(getBaseContext(), "Ad closed.", Toast.LENGTH_SHORT).show(); } @Override public void onRewarded(RewardItem rewardItem) { Toast.makeText(getBaseContext(), "Ad triggered reward.", Toast.LENGTH_SHORT).show(); } @Override public void onRewardedVideoAdLeftApplication() { Toast.makeText(getBaseContext(), "Ad left application.", Toast.LENGTH_SHORT).show(); } @Override public void onRewardedVideoAdFailedToLoad(int i) { Toast.makeText(getBaseContext(), "Ad failed to load.", Toast.LENGTH_SHORT).show(); } }); }
Increment coin count
In the
onRewarded() method of the anonymous
RewardedVideoAdListener class you just implemented, reward the user for watching the ad by incrementing the coin count by the reward amount.
MainActivity.java
@Override public void onRewarded(RewardItem rewardItem) { // Reward the user for watching the ad. Toast.makeText(getBaseContext(), "Ad triggered reward.", Toast.LENGTH_SHORT).show(); addCoins(rewardItem.getAmount()); }
The next step to monetizing your app with rewarded video ads is making the ad request.
For this codelab, you will make ad requests to the test ad unit value of
ca-app-pub-3940256099942544/5224354917. As with previous string literals, this value should be stored in the strings resource file, as shown below.
strings.xml
<resources> ... <string name="ad_unit_id">ca-app-pub-3940256099942544/5224354917</string> ... </resources>
Ad requests should be made to the singleton
RewardedVideoAd instance. It is best practice to call
loadAd() as early as possible. Add the code shown below to make this call in the
startGame() method, which is invoked at the beginning of every game.
MainActivity.java
import com.google.android.gms.ads.AdRequest; ... private void startGame() { ... mGamePaused = false; mGameOver = false; // Load a reward based video ad mRewardedVideoAd.loadAd(getString(R.string.ad_unit_id), new AdRequest.Builder().build()); }
For your app to present the user with the option to watch an ad, you need to modify your main activity's layout file, located at RewardedVideoExample/app/src/main/res/layout/activity_main.xml of your project. Add a Button element (the "Watch Video" button) to the RelativeLayout, which, when clicked, will display an ad.
The
android:text attribute on the
Button defines the text that is displayed in the button element. Like the AdMob App ID, this value should also be mapped to the strings resource file located at
RewardedVideoExample/app/src/main/res/values/strings.xml of your project. Add a new entry to this file for text that clearly presents the user with the option to watch an ad in exchange for a reward.
strings.xml
<resources> ... <string name="watch_video_button_text">Watch Video for additional coins</string> ... </resources>
Set click handler
Setting the
android:onClick attribute on the
Button element defines the click event handler for a button. A method with the corresponding name,
showRewardedVideo() in this case, will be invoked on the
Activity hosting the layout. Implement this method in the
MainActivity class.
MainActivity.java
public void showRewardedVideo(View view) { if (mRewardedVideoAd.isLoaded()) { mRewardedVideoAd.show(); } }
Within
showRewardedVideo(), the
show() method is invoked on the singleton
RewardedVideoAd object to display the rewarded video ad.
Currently, the button presenting the user with the option to watch a rewarded video ad is always visible. However, we only want this button to be visible at the end of the game. Start by hiding this button at the beginning of every game, as shown below.
MainActivity.java
private Button mShowVideoButton; ... private void onCreate(Bundle savedInstanceState) { ... // Get the "show ad" button, which shows a rewarded video when clicked. mShowVideoButton = ((Button) findViewById(R.id.watch_video)); mShowVideoButton.setVisibility(View.INVISIBLE); }
Now, you'll want to make this button visible at the end of a game if an ad has loaded and is ready to be shown. This can be determined by calling
isLoaded() on the
RewardedVideoAd object.
MainActivity.java
private void gameOver() { ... mRetryButton.setVisibility(View.VISIBLE); if (mRewardedVideoAd.isLoaded()) { mShowVideoButton.setVisibility(View.VISIBLE); } ... }
Handle Rotation
The final step is ensuring the watch video button is shown following a rotation. In the
onResume() method of
MainActivity, check if the game is over and an ad is ready to be shown. If both conditions are true, make the show video button visible. These changes are shown below.
MainActivity.java
public void onResume() { super.onResume(); ... if(mGameOver && mRewardedVideoAd.isLoaded()) { mShowVideoButton.setVisibility(View.VISIBLE); } mRewardedVideoAd.resume(this); }
Your app is now ready to display rewarded video ads using the Google Mobile Ads SDK. Run the app and once the countdown timer has expired, you should be presented with the option to watch an ad for additional coins.
What we've covered
- How to install the Google Mobile Ads SDK
- How to load and display rewarded video ads
- How to reward a user for watching video ad content
Next Steps
- Create a native advanced ad unit
- Use native advanced ads in your own app with your own ad unit
Learn More
- Read the Rewarded Video Publisher Get Started guide for more details on AdMob rewarded video ads
- Post questions and find answers on the Google Mobile Ads SDK developer forum
|
https://codelabs.developers.google.com/codelabs/admob-rewarded-video-android?hl=en
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
A lot of times when you are working as a data scientist you will come across situations where you have to extract useful information from images. If the images contain plain text, you can use OCR to extract it. But if they contain data in a tabular form, it becomes much easier to extract that data directly as Excel or CSV files. To do this we can use OpenCV and convert the images directly into tabular form.
The purpose of this article is to extract information from a tabular image and store it directly as an Excel file. There are three steps to do this:
- Detecting the cells in the image
- Retrieving cells position
- Text extraction and placing in the cells
Data loading
Before we get into the implementation we will choose the image from which we need to extract the data. Let us choose a simple table with a few columns. The image I have selected is shown below.
This table contains information about salt concentration in some experiments and their results are noted. You can download this by clicking on the link.
Now, we will import the required libraries and load the data. I will be using Google Colab for this, hence I will mount the drive as well.
from google.colab import drive drive.mount('/content/gdrive') import cv2 import numpy as np import pandas as pd import matplotlib.pyplot as plt sample=r'/content/gdrive/My Drive/sample.png' read_image= cv2.imread(sample,0)
After importing the data and loading it, we will now start with the first step.
Detecting the cells
For the purpose of converting an image to Excel we need to first detect the cells, that is, the horizontal and vertical lines that form them. To do this, we start from the grayscale image and convert it to binary with OpenCV.
convert_bin,grey_scale = cv2.threshold(read_image,128,255,cv2.THRESH_BINARY | cv2.THRESH_OTSU) grey_scale = 255-grey_scale graph = plt.imshow(grey_scale,cmap='gray') plt.show()
Here, we have converted the image into a binary format. Now let us define two kernels to extract the horizontal and vertical lines from these cells. A typical choice, consistent with the code that follows, is a kernel length derived from the image width, for example:
length = np.array(read_image).shape[1]//100
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (length, 1))
Now, using the erode and dilate functions, we will apply the horizontal kernel to our image and detect and extract the horizontal lines.
horizontal_detect = cv2.erode(grey_scale, horizontal_kernel, iterations=3) hor_line = cv2.dilate(horizontal_detect, horizontal_kernel, iterations=3) plotting = plt.imshow(horizontal_detect,cmap='gray') plt.show()
In the same way, we will repeat these steps to detect the vertical lines by building another kernel.
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, length)) vertical_detect = cv2.erode(grey_scale, vertical_kernel, iterations=3) ver_lines = cv2.dilate(vertical_detect, vertical_kernel, iterations=3) show = plt.imshow(vertical_detect,cmap='gray') plt.show()
Once we have these ready, all we need to do is combine them into a grid-like structure and get a clear tabular representation without the content inside. To do this, we will first combine the vertical and horizontal lines and create a final rectangular kernel. Then we will threshold the combined image and invert it back so that the table lines sit on a white background.
final = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2)) combine = cv2.addWeighted(ver_lines, 0.5, hor_line, 0.5, 0.0) combine = cv2.erode(~combine, final, iterations=2) thresh, combine = cv2.threshold(combine,128,255, cv2.THRESH_BINARY | cv2.THRESH_OTSU) convert_xor = cv2.bitwise_xor(read_image,combine) inverse = cv2.bitwise_not(convert_xor) output= plt.imshow(inverse,cmap='gray') plt.show()
This shows the final combined table where the cells are present.
Now we can move on to the next step.
Retrieving the cell positions
Now that we have our empty table ready, we need to find the right location to add the text. That is the column and the row where the text needs to be inserted. To do this, we need to get bounding boxes around each cell. Contours are the best way to highlight the cell lines and determine the bounding boxes. Let us now write a function to get the contours and the bounding box. It is also important to make sure the contours are read in a particular order which will be written in the function.
cont, _ = cv2.findContours(combine, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) def get_boxes(num, method="left-to-right"): invert = False flag = 0 if method == "right-to-left" or method == "bottom-to-top": invert = True if method == "top-to-bottom" or method == "bottom-to-top": flag = 1 boxes = [cv2.boundingRect(c) for c in num] (num, boxes) = zip(*sorted(zip(num, boxes), key=lambda b:b[1][flag], reverse=invert)) return (num, boxes) cont, boxes = get_boxes(cont, method="top-to-bottom")
Next, we will retrieve the dimensions of each contour and store them. Since these contours and rectangles, we will have 4 sides. We need to set the dimensions which are up to the user. Here I have set the width to be 500 and height to be 500. These values depend on the size of the image.
final_box = [] for c in cont: s1, s2, s3, s4 = cv2.boundingRect(c) if (s3<500 and s4<500): rectangle_img = cv2.rectangle(read_image,(s1,s2),(s1+s3,s2+s4),(0,255,0),2) final_box.append([s1,s2,s3,s4]) graph = plt.imshow(rectangle_img,cmap='gray') plt.show()
You can see the boxes are highlighted as shown above and we have the position of each cell of the image stored in the list. But we also need the location of each of the cells so that they can be extracted in order. To get the locations we will take the mean height of the boxes and use it to group the boxes into rows; then we will find the midpoint of each column so that the cells can be aligned correctly.
dim = [boxes[i][3] for i in range(len(boxes))] avg = np.mean(dim) hor=[] ver=[] for i in range(len(final_box)): if(i==0): ver.append(final_box[i]) last=final_box[i] else: if(final_box[i][1]<=last[1]+avg/2): ver.append(final_box[i]) last=final_box[i] if(i==len(final_box)-1): hor.append(ver) else: hor.append(ver) ver=[] last = final_box[i] ver.append(final_box[i]) total = 0 for i in range(len(hor)): if len(hor[i]) > total: total = len(hor[i]) mid = [int(hor[i][j][0]+hor[i][j][2]/2) for j in range(len(hor[i])) if hor[0]] mid=np.array(mid) mid.sort()
Value Extraction
Since we have the boxes, and the dimensions along with the midpoint we can now move on to our text extraction. But before that, we will make sure that our cells are in the correct order. To do this, follow these steps.
order = [] for i in range(len(hor)): arrange=[] for k in range(total): arrange.append([]) for j in range(len(hor[i])): sub = abs(mid-(hor[i][j][0]+hor[i][j][2]/4)) lowest = min(sub) idx = list(sub).index(lowest) arrange[idx].append(hor[i][j]) order.append(arrange)
Now we will use pytesseract to perform OCR since it is compatible with OpenCV and Python.
try: from PIL import Image except ImportError: import Image import pytesseract
We will take every box and perform eroding and dilating on it and then extract the information in the cells with OCR.
extract=[] for i in range(len(order)): for j in range(len(order[i])): inside='' if(len(order[i][j])==0): extract.append(' ') else: for k in range(len(order[i][j])): side1,side2,width,height = order[i][j][k][0],order[i][j][k][1], order[i][j][k][2],order[i][j][k][3] final_extract = inverse[side2:side2+height, side1:side1+width] final_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 1)) get_border = cv2.copyMakeBorder(final_extract,2,2,2,2, cv2.BORDER_CONSTANT,value=[255,255]) resize = cv2.resize(get_border, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC) dil = cv2.dilate(resize, final_kernel,iterations=1) ero = cv2.erode(dil, final_kernel,iterations=2) ocr = pytesseract.image_to_string(ero) if(len(ocr)==0): ocr = pytesseract.image_to_string(ero, config='--psm 3') inside = inside +" "+ ocr extract.append(inside)
Now we will convert this extracted array into a dataframe and write it to an excel file.
a = np.array(extract) dataset = pd.DataFrame(a.reshape(len(hor), total)) dataset.to_excel("/content/gdrive/My Drive/output1.xlsx")
The final output is as follows
You can see that in excel it has extracted the data. Though it is not perfectly aligned (numbers 12 and 15 are not in line with others) it is still good and you can edit these values as well.
Conclusion
In this article, we saw how to use OpenCV to convert a table image into an editable Excel spreadsheet. This can be useful for extracting important information from such tables and manipulating it directly in CSV and Excel formats.
|
https://analyticsindiamag.com/how-to-use-opencv-to-extract-information-from-table-images/
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
You'll need the following:
- A free Twilio Account. If you use this link to register, you will receive $10 credit when you upgrade to a paid account.
- Python 3.6 or newer
- Ngrok. This will make the development version of our application accessible over the Internet.
- A smartphone with an active number and WhatsApp installed.
Creating a Python environment
Let’s create a directory where our project will reside. From the terminal, run the following command:
$ mkdir twilio_whatsapp_bot
Our bot will depend on the following Python packages:
- Flask: A Python web framework that we'll use to receive Twilio's webhook requests.
- Twilio: A Twilio helper library that will make it easy for us to create valid TwiML.
- Requests: A Python library for making HTTP requests.
To install all the dependencies at once, run the following command:
$ pip install flask twilio requests
Creating the Bot
Before we get started with building the bot, let's quickly go through how the bot will work. In order for the bot to work as expected, messages that require conversion need to be in a particular format. For example, a user needs to send a message in the format "Convert 5 BTC to USD", where 5 is the number of BTC units to convert and USD is the code of the currency you would like the BTC amount converted to.
The Twilio API for WhatsApp makes use of webhooks to notify our application whenever our bot receives a message. Let’s create a simple function that will respond to this webhook.
At the root of your project’s directory, create a
main.py and add the following code to the file:
from flask import Flask, request from twilio.twiml.messaging_response import MessagingResponse import requests app= Flask(__name__) @app.route('/incoming/twilio', methods=['POST']) def incoming_twilio(): incoming_message=request.form['Body'].lower() message= incoming_message.split() resp= MessagingResponse() msg=resp.message() if 'hello' in incoming_message: reply = standard_response_format() msg.body(reply) return str(resp) if len(message) < 4: reply = standard_response_format() msg.body(reply) return str(resp) btc_unit = message[1] currency_code = message[4].upper() r = requests.get("") if r.status_code == 200: rates = r.json()['data']['rates'] if currency_code in rates: unit_rate = rates[currency_code] unit_conversion = get_unit_conversion(btc_unit, unit_rate) reply=f"{btc_unit} BTC is {unit_conversion:,} {currency_code}" else: reply=f"{currency_code} is not a valid currency code" else: reply= "Something went wrong" msg.body(reply) return str(resp) def standard_response_format(): return ("Welcome to the Bitcoin Currency Converter bot!\n\n" "Please use the following format to chat with the bot: \n" "Convert 1 BTC to USD \n" "1 is the number of BTC unit you would like to convert \n" "USD is the currency code you would like to convert too \n") def get_unit_conversion(unit, unit_rate): return round(round(float(unit_rate), 2) * float(unit), 2) if __name__ == '__main__': app.run()
Let’s go over what’s happening in the
incoming_twilio() function.
The first thing we did was to obtain the content of the message that was sent using Flask’s
request object. This comes in the payload of the
POST request with a key of
Body. Since we’ll be doing some basic analysis on the message, the message is converted to lowercase to avoid any issue that might arise as a result of case variations.
Next, the message itself is converted into a
list using the
split() method. Based on the agreed messaging format, the default separator in this case will be any whitespace.
A further analysis is carried out to check if the keyword
hello is contained in the message. If
hello is found in the message, a generic response is sent to the back to the user informing them of how to make use of the bot.
Similarly, another check is carried out to ensure that after the message has been converted into a list, the number of items contained in the list is not less than 4. Again, this is due to the agreed messaging format with the bot.
The number of BTC units to convert and the currency code are contained at indexes 1 and 4 of the list respectively. Next, an HTTP GET request is made to the Coinbase API to obtain the current exchange rates with BTC as the base currency. The returned rates define the exchange rate for one unit of BTC. Here's an example of the response received from the API.
{ "data": { "currency": "BTC", "rates": { "AED": "36.73", "AFN": "589.50", "ALL": "1258.82", "AMD": "4769.49", "ANG": "17.88", "AOA": "1102.76", "ARS": "90.37", "AUD": "12.93", "AWG": "17.93", "AZN": "10.48", "BAM": "17.38", ... } } }
Once the response from the API has been obtained, a check is carried out to ensure that the currency code obtained from the
list earlier can be found as a key under the
rates payload. If it exists, the
get_unit_conversion() function, which does the actual conversion, is then called passing in the unit rate of the identified currency code as well as the number of BTC units to convert.
Once we’ve applied our logic to determine the bot’s response based on the
body of the message received, we need to send back a reply. Twilio expects this reply to be in Twilio Markup Language (TwiML). This is an XML-based language but we’re not required to create XML directly. The
MessagingResponse class from the Twilio Python helper library helps us to send the response in the right format.
Setting up Ngrok
Since our application is currently local, there’s no way for Twilio to be able to send POST requests to the endpoint we just created. We can use Ngrok to set up a temporary public URL so that our app is accessible over the web.
To start the application, run the following command from the terminal:
$ python main.py
With the application running, start Ngrok in a separate terminal window to expose the local server, and take note of the public forwarding URL that Ngrok generates.
Configure Twilio WhatsApp Sandbox
Before you’re able to send and receive messages using the Twilio API for WhatsApp in production, you’ll need to enable your Twilio number for WhatsApp. Since “Programmable SMS / WhatsApp Beta” and paste the Ngrok URL we noted earlier on the “WHEN A MESSAGE COMES IN” field. Don’t forget to append
/incoming/twilio at the end of the URL so that it points at our Flask endpoint. Click “Save” at the bottom of the page.
Testing
You can try testing the functionality of the bot by sending messages to it from your smartphone using WhatsApp. Here’s an example of me chatting with the bot:
Conclusion
In this tutorial, we have created a simple WhatsApp chatbot that returns the Bitcoin equivalent price in any supported currency. This was implemented using Flask and the Twilio API for WhatsApp. The Bitcoin exchange rate was obtained from the Coinbase API. The GitHub repository with the complete code for this project can be found here.
Dotun Jolaoso
|
https://www.twilio.com/blog/build-whatsapp-bitcoin-currency-conversion-bot-python-twilio
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
In machine learning, building a predictive model for classification or regression tasks involves a lot of steps, from exploratory data analysis to various visualizations and transformations. Many transformation steps are performed to pre-process the data and get it ready for modelling, such as missing value treatment, encoding the categorical data, or scaling/normalizing the data. We do all these steps and build a machine learning model, but while making predictions on the testing data we have to repeat the same steps that were performed while preparing the data.
Since so many steps are followed, teams working on a big project can easily get confused about these transformations. To resolve this we introduce pipelines, which hold every step that is performed, from the first transformation to fitting the data on the model.
Through this article, we will explore pipelines in machine learning and will also see how to implement them for a better understanding of all the transformation steps.
What we will learn from this article?
- What are the pipelines in Machine learning?
- Advantages of building pipelines?
- How to implement a pipeline?
What are pipelines in Machine learning?
A pipeline is nothing but an object that holds all the processes that will take place, from data transformations to model building. Suppose while building a model we have done encoding for categorical data, followed by scaling/normalizing the data, and then finally fitting the training data into the model. If we design a pipeline for this task then this object will hold all these transformation steps, and we just need to call the pipeline object and every step that is defined will be done.
This is very useful when a team is working on the same project. Defining the pipeline will give the team members a clear understanding of the different transformations taking place in the project. There is a class named Pipeline present in sklearn that allows us to do this. All the steps in a pipeline are executed sequentially. On all the intermediate steps in the pipeline, fit has to be called first and then transform, whereas for the last step only fit is called, which usually fits the data on the model for training.
As soon as we fit the data on the pipeline, each intermediate step fits and transforms the data in turn, and the final step fits the model on the transformed data. While making predictions using the pipeline, the intermediate transformations are applied again before the final step makes the prediction.
How to implement a pipeline?
Implementation of the pipeline is very easy and involves 4 different steps mainly that are listed below:-
- First, we need to import pipeline from sklearn
- Define the pipeline object containing all the steps of transformation that are to be performed.
- Now call the fit function on the pipeline.
- Call the score function to check the score.
Let us now practically understand the pipeline and implement it on a data set. We will first import the required libraries and the data set. We will then split the data set into training and testing sets, followed by defining the pipeline and then calling the fit and score functions. Refer to the below code for the same.
import pandas as pd from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split df = pd.read_csv('pima.csv') X = df.values[:,0:7] Y = df.values[:,8] X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.30, random_state=7) pipe = Pipeline([('sc',StandardScaler()),('rfcl', RandomForestClassifier())])
We have defined the pipeline with the object name as pipe and this can be changed according to the programmer. We have defined sc objects for StandardScaler and rfcl for Random Forest Classifier.
pipe.fit(X_train,y_train)
print(pipe.score(X_test, y_test))
If we do not want to define the objects for each step, like sc and rfcl for StandardScaler and Random Forest Classifier, since there can sometimes be many different transformations, we can make use of make_pipeline, which can be imported from the pipeline module present in sklearn. Refer to the below example for the same.
from sklearn.pipeline import make_pipeline
pipe = make_pipeline(StandardScaler(), RandomForestClassifier())
We have just defined the functions in this case and not the objects for these functions. Now let’s see the steps present in this pipeline.
print(pipe.steps)
pipe.fit(X_train,y_train)
print(pipe.score(X_test, y_test))
Conclusion
Through this article, we discussed pipeline construction in machine learning, and how pipelines can be helpful when different people work on the same project, avoiding confusion and giving a clear understanding of each step that is performed one after another. We then discussed the steps for building a pipeline that had two stages, i.e. scaling and the model, and implemented the same on the Pima Indians Diabetes data set. At last, we explored one other way of defining a pipeline, building it using make_pipeline.
|
https://analyticsindiamag.com/everything-about-pipelines-in-machine-learning-and-how-are-they-used/
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
The npm package swell-js receives a total of 261 downloads a week. As such, we scored swell-js popularity level to be Limited.
Based on project statistics from the GitHub repository for the npm package swell-js, we found that it has been starred 15 times, and that 2 other projects on the ecosystem are dependent on it.
Downloads are calculated as moving averages for a period of the last 12 months, excluding weekends and known missing data points.
Snyk detected that the latest version of swell-js is missing a security policy.
# Install the Snyk CLI and test your project
npm i snyk && snyk test swell-js
Further analysis of the maintenance status of swell-js, based on the release cadence of its npm versions, repository activity, and other data points, determined that its maintenance is Inactive.
We found that swell-js is missing a Code of Conduct.
We detected a total of 15 direct & transitive dependencies for swell-js. See the full dependency tree of swell-js
swell-js has more than a single and default latest tag published for the npm package. This means, there may be other tags available for this package, such as next to indicate future releases, or stable to indicate stable releases.
Universal JavaScript client for Swell's Frontend API, providing client-safe access to store and customer data. You can use it in JAMstack or SSR apps to:
This SDK implements a subset of operations available in Swell's Backend API and is authorized with a public key + session token, making it safe to use in any context. You should only use the Backend API server-side, and keep your secret keys stored as environment variables.
About Swell
Swell is a customizable, API-first platform for powering modern B2C/B2B shopping experiences and marketplaces. Build and connect anything using your favorite technologies, and provide admins with an easy to use dashboard.
npm install swell-js # or yarn add swell-js
The client uses your store ID and public key for authorization. You can find these in your dashboard under Settings > API.
swell.init('<store-id>', '<public_key>');
Note: swell.auth() was renamed to swell.init() in v1.3.0.
If your application uses camelCase, you can set a flag to transform the API's snake_case responses. This works on objects you pass to it as well.
const options = { useCamelCase: // true | false (default is false) }; swell.init('<store-id>', '<public_key>', options)
If a code example has the
await prefix, the method returns a promise. All other methods are synchronous. We're using ES6 async/await syntax here, but you can use regular Promises too.
import swell from 'swell-js'; // Initialize the client first swell.init('my-store', 'pk_md0JkpLnp9gBjkQ085oiebb0XBuwqZX9'); // Now you can use any method await swell.products.list({ category: 't-shirts', limit: 25, page: 1, });
Returns an object representing store settings, and saves it to an internal cache for accessing synchronously.
Note: This must be called before trying to get a setting by path
await swell.settings.get();
Returns a value from the store settings object using path notation, with an optional default if the value is undefined.
swell.settings.get('colors.primary.dark', '#000000');
Returns an array containing store navigation menus, and saves it to an internal cache for accessing synchronously.
Note: This must be called before trying to get a menu by ID
await swell.settings.menus();
Returns a single navigation menu object.
swell.settings.menus('header');
Returns an object representing payment settings, and saves it to an internal cache for using with checkout methods.
swell.settings.payments();
Returns all products, with offset pagination using
limit and
page.
await swell.products.list({ limit: 25, // Max. 100 page: 1, });
Returns all products and their active variants, with offset pagination using
limit and
page.
await swell.products.list({ limit: 25, // Max. 100 page: 1, expand: ['variants'], });
Returns products in a specific category, with offset pagination using
limit and
page.
await swell.products.list({ category: 't-shirts', // Slug or ID limit: 25, // Max. 100 page: 1, });
Returns a single product.
// By slug await swell.products.get('blue-shoes'); // By ID await swell.products.get('5c15505200c7d14d851e510f');
Perform a full text search with a string. The search operation is performed using AND syntax, where all words must be present in at least one field of the product.
Returns products matching the search query string, with offset pagination using
limit and
page.
await swell.products.list({ search: 'black jeans', // Any text string limit: 25, // Max. 100 page: 1, });
Resolve the correct
price,
sale_price,
orig_price and
stock_status values based on the customer's chosen options. Typically you would retrieve a product earlier in the page's lifecycle and pass it to this method along with the options. Options can be either an array or an object with option name/value pairs.
Returns a new object with product and option/variant values merged together.
await swell.products.variation(product, { Size: 'Medium', Color: 'Turquoise', });
Returns a list of product categories, with offset pagination using
limit and
page.
await swell.categories.list({ limit: 25, page: 1, });
Returns a single category.
// By slug await swell.categories.get('mens-shirts'); // By ID await swell.categories.get('5c15505200c7d14d851e510g');
Returns a list of product attributes, with offset pagination using
limit and
page.
await swell.attributes.list({ limit: 25, page: 1, });
Returns a single attribute.
// By slug await swell.attributes.get('color'); // By ID await swell.attributes.get('5c15505200c7d14d851e510g');
Retrieve the cart attached to the current session.
Returns the cart object or
null if no items have been added yet.
await swell.cart.get();
Add a single item to the cart. Item options can be either an array of product options or an object with product option name/value pairs.
Returns the updated cart object.
// Options as array await swell.cart.addItem({ product_id: '5c15505200c7d14d851e510f', quantity: 1, options: [ { name: 'Size', value: 'S' }, { name: 'Color', value: 'Midnight blue' }, ], }); // Options as object await swell.cart.addItem({ product_id: '5c15505200c7d14d851e510f', quantity: 1, options: { Size: 'S', Color: 'Midnight blue', }, });
Update properties of a single cart item by ID.
Returns the updated cart object.
await swell.cart.updateItem('7d51p8ce72f5542e009fa4c8', { quantity: 2, });
If you want to update multiple items at once, you can clone
cart.items, iterate through the items to perform your operation(s), then use this method to replace
cart.items with your updated array.
Returns the updated cart object.
await swell.cart.setItems([ { id: '5c15505200c7d14d851e510f', quantity: 2, options: [{ id: 'Color', value: 'Midnight blue' }], }, { id: '5c15505200c7d14d851e510g', quantity: 3, options: [{ id: 'Color', value: 'Ivory' }], }, { id: '5c15505200c7d14d851e510h', quantity: 4, options: [{ id: 'Color', value: 'Bright red' }], }, ]);
Remove a single item from the cart by ID.
Returns the updated cart object.
await swell.cart.removeItem('5c15505200c7d14d851e510f');
Remove all items from the cart.
Returns the updated cart object.
await swell.cart.setItems([]);
Normally used with an abandoned cart recovery email. The email should have a link to your store with a
checkout_id identifying the cart that was abandoned. Calling this method will add the cart to the current session and mark it as
recovered.
Returns the recovered cart object.
await swell.cart.recover('878663b2fb4175b128e40de428cd7b0c');
Update the cart with customer account information.
An account is assigned to a cart by email address.
- If the account has no password set, it's considered a guest checkout and the cart will have guest=true.
- If the account has a password set, the cart will have account_logged_in=false. You can use this to prompt the user to log in to continue. Once the account is logged in, account_logged_in will be true.
Returns the updated cart object.
await swell.cart.update({ account: { email: 'julia@example.com', email_optin: true, // Optional; indicates the customer has consented to receive marketing emails password: 'thepassword', // Optional; sets the customer's password if one doesn't exist yet }, });
Update the cart with customer shipping information.
Returns the updated cart object.
await swell.cart.update({ shipping: { name: 'Julia Sanchez', address1: '560 Olive Drive', address2: '', city: 'Ellinwood', state: 'KS', zip: '67526', country: 'United States', phone: '620-564-3737', }, });
Update the cart with customer billing information. This method can update both shipping and billing at once if desired.
Returns the updated cart object.
await swell.cart.update({ billing: { name: 'Julia Sanchez', address1: '560 Olive Drive', address2: '', city: 'Ellinwood', state: 'KS', zip: '67526', country: 'United States', phone: '620-564-3737', // Paying with credit card card: { // Token from swell.card.createToken() or Stripe.js token: 'tok_1H0Qu92eZvKYlo2CsKGk6...', }, // Paying with PayPal paypal: { payer_id: '...', payment_id: '...', }, // Paying with Amazon Pay amazon: { access_token: '...', order_reference_id: '...', }, // Paying with Affirm affirm: { checkout_token: '...', }, }, });
Note: In February 2019, PayPal introduced Smart Payment Buttons. Swell's integration uses a previous version named checkout.js, which continues to be supported by PayPal and Swell. More details and examples.
Use to apply a coupon or gift card code to the cart (works with both so you can have a single input field). A cart can have one coupon and multiple gift card codes applied at once. Codes are not case sensitive.
Returns the updated cart object if code is valid. Otherwise, returns a validation error.
await swell.cart.applyCoupon('SUMMERTIME');
Use to apply a gift card code to the cart. A cart can have multiple gift card codes applied at once. Codes are not case sensitive.
Returns the updated cart object if code is valid. Otherwise, returns a validation error.
await swell.cart.applyGiftcard('BUYS SFX4 BMZH YY7N');
Use to remove the coupon code from the cart, if one was applied.
await swell.cart.removeCouponCode();
Use to remove a gift card from the cart, by passing the ID that was assigned to
cart.giftcards.id.
await swell.cart.removeGiftcard('5c15505200c7d14d851e51af');
A shipment rating contains all available shipping services and their price, based on cart items and the customer's shipping address. The cart must have at least
shipping.country set to generate a rating.
Returns an object with shipping services and rates.
await swell.cart.getShippingRates();
When a customer has entered all the information needed to finalize their order, call this method to process their payment and convert the cart to an order.
Returns the newly created order.
await swell.cart.submitOrder();
When a cart is submitted, the newly created order will be returned. However, you can use this method if you need to get the order information separately. You can also retrieve an order with a
checkout_id, allowing you to display order details from an email containing a link that includes the checkout_id.
Returns order with the passed ID, or if no parameters are passed, the last order placed in the current session.
// Get the last order placed in the current session await swell.cart.getOrder(); // Get an order by checkout_id await swell.cart.getOrder('878663b2fb4175b128e40de428cd7b0c');
Use to retrieve settings that can affect checkout behavior.
Returns object with:
- name - Store name
- currency - Store base currency
- support_email - Email address for customer support
- fields - Set of checkout fields to show as optional or required
- scripts - Custom scripts including script tags
- accounts - Indicates whether account login is optional, disabled or required
- email_optin - Indicates whether email newsletter opt-in should be presented as optional
- terms_policy - Store terms and conditions
- refund_policy - Store refund policy
- theme - Checkout style settings
- countries - List of country codes that have shipping zones configured
- payment_methods - List of active payment methods
- coupons - Indicates whether the store has coupons
- giftcards - Indicates whether the store has gift cards
await swell.cart.getSettings();
Authenticate customers and fetch/manage their account data.
Use to authenticate a customer with their email address and password. If the email/password combo is correct, their account will be added to the session, making customer-specific methods available. This will set
account_logged_in=true and
guest=false.
await swell.account.login('julia@example.com', 'thepassword');
Use to disconnect the account from the current session. This will set
account_logged_in=false and
guest=true.
await swell.account.logout();
Use to get information about the customer currently logged in.
Returns the account object, or
null if the customer is not logged in.
await swell.account.get();
Use to create a new customer account and attach it to the current session.
Returns the newly created account object.
await swell.account.create({ email: 'julia@example.com', first_name: 'Julia', // Optional last_name: 'Sanchez', // Optional email_optin: true, // Optional password: 'thepassword', // Optional });
Use to update properties of the currently logged in account.
Returns the updated account object if successful. Otherwise, returns a validation error.
await swell.account.update({ email: 'julia@anotherexample.com', first_name: 'Julia', // Optional last_name: 'Sanchez', // Optional email_optin: true, // Optional password: 'thepassword', // Optional });
Use to send an email to the customer with a link to reset their password. If the email address provided doesn't exist in the system, no email will be sent.
Returns a value indicating success in either case.
await swell.account.recover({ email: 'julia@example.com', });
Use to set the customer's new password. This requires the
reset_key from the recovery email (see above). The password recovery email should link to your storefront with
reset_key as a URL parameter that you can pass to this method.
await swell.account.recover({ reset_key: 'e42e66fc7e3f00e9e179w20ad1841146', password: 'thenewpassword', });
Use to get a list of addresses on file for the account. These are stored automatically when a non-guest user checks out and chooses to save their information for later.
Returns all addresses, with offset pagination using
limit and
page.
await swell.account.getAddresses();
Use to add a new address to the account.
Returns the newly created address object.
await swell.account.createAddress({ name: 'Julia Sanchez', address1: 'Apartment 16B', address2: '2602 Pinewood Drive', city: 'Jacksonville', state: 'FL', zip: '32216', country: 'United States', phone: '904-504-4760', });
Use to remove an existing address from the account by ID.
Returns the deleted address object.
await swell.account.deleteAddress('5c15505200c7d14d851e510f');
Use to get a list of credit cards on file for the account. These are stored automatically when a non-guest user checks out and chooses to save their information for later.
Returns all cards, with offset pagination using
limit and
page.
await swell.account.getCards();
Use to save a tokenized credit card to the account for future use. Credit card tokens can be created using
swell.card.createToken or Stripe.js.
await swell.account.createCard({ token: '...', });
Use to remove a saved credit card from the account by ID.
await swell.account.deleteCard('5c15505200c7d14d851e510f');
Return a list of orders placed by a customer.
await swell.account.getOrders({ limit: 10, page: 2, });
Return a list of orders placed by a customer including shipments with tracking information.
Returns all orders, with offset pagination using
limit and
page.
await swell.account.getOrders({ expand: 'shipments', });
Fetch and manage subscriptions associated with the logged in customer's account.
Return a list of active and canceled subscriptions for an account.
Returns all subscriptions, with offset pagination using
limit and
page.
await swell.subscriptions.list();
Return a single subscription by ID.
await swell.subscriptions.get(id);
Subscribe the customer to a new product for recurring billing.
await swell.subscriptions.create({
  product_id: '5c15505200c7d14d851e510f',
  // the following parameters are optional
  variant_id: '5c15505200c7d14d851e510g',
  quantity: 1,
  coupon_code: '10PERCENTOFF',
  items: [
    {
      product_id: '5c15505200c7d14d851e510h',
      quantity: 1,
    },
  ],
});
await swell.subscriptions.update('5c15505200c7d14d851e510f', {
  // the following parameters are optional
  quantity: 2,
  coupon_code: '10PERCENTOFF',
  items: [
    {
      product_id: '5c15505200c7d14d851e510h',
      quantity: 1,
    },
  ],
});
await swell.subscriptions.update('5c15505200c7d14d851e510f', {
  product_id: '5c15505200c7d14d851e510g',
  variant_id: '5c15505200c7d14d851e510h', // optional
  quantity: 2,
});
await swell.subscriptions.update('5c15505200c7d14d851e510f', { canceled: true, });
await swell.subscriptions.addItem('5c15505200c7d14d851e510f', { product_id: '5c15505200c7d14d851e510f', quantity: 1, options: [ { id: 'color', value: 'Blue', }, ], });
await swell.subscriptions.updateItem('5c15505200c7d14d851e510f', '<item_id>', { quantity: 2, });
await swell.subscriptions.setItems('5c15505200c7d14d851e510e', [ { id: '5c15505200c7d14d851e510f', quantity: 2, options: [ { id: 'color', value: 'Blue', }, ], }, { id: '5c15505200c7d14d851e510g', quantity: 3, options: [ { id: 'color', value: 'Red', }, ], }, { id: '5c15505200c7d14d851e510h', quantity: 4, options: [ { id: 'color', value: 'White', }, ], }, ]);
await swell.subscriptions.removeItem('5c15505200c7d14d851e510f', '<item_id>');
await swell.subscriptions.setItems([]);
Render 3rd party payment elements with settings configured by your Swell store. This method dynamically loads 3rd party libraries such as Stripe, Braintree and PayPal, in order to standardize the way payment details are captured.
Note: when using a card element, it's necessary to tokenize card details before submitting an order.
Render Stripe elements to capture credit card information. You can choose between a unified card element or separate elements (cardNumber, cardExpiry, cardCvc).
import swell from 'swell-js';

swell.init('my-store', 'pk_...');

swell.payment.createElements({
  card: {
    elementId: '#card-element-id', // default: #card-element
    options: {
      // options are passed as a direct argument to stripe.js
      style: {
        base: {
          fontWeight: 500,
          fontSize: '16px',
        },
      },
    },
    onSuccess: (result) => {
      // optional, called on card payment success
    },
    onError: (error) => {
      // optional, called on card payment error
    },
  },
});
import swell from 'swell-js';

swell.init('my-store', 'pk_...');

swell.payment.createElements({
  card: {
    separateElements: true, // required for separate elements
    cardNumber: {
      elementId: '#card-number-id', // default: #cardNumber-element
      options: {
        // options are passed as a direct argument to stripe.js
        style: {
          base: {
            fontWeight: 500,
            fontSize: '16px',
          },
        },
      },
    },
    cardExpiry: {
      elementId: '#card-expiry-id', // default: #cardExpiry-element
    },
    cardCvc: {
      elementId: '#card-expiry-id', // default: #cardCvc-element
    },
    onSuccess: (result) => {
      // optional, called on card payment success
    },
    onError: (error) => {
      // optional, called on card payment error
    },
  },
});
Note: see Stripe documentation for options and customization.
Render a PayPal checkout button.
import swell from 'swell-js';

swell.init('my-store', 'pk_...');

swell.payment.createElements({
  paypal: {
    elementId: '#element-id', // default: #paypal-button
    style: {
      layout: 'horizontal', // optional
      color: 'blue',
      shape: 'rect',
      label: 'buynow',
      tagline: false,
    },
    onSuccess: (data, actions) => {
      // optional, called on payment success
    },
    onCancel: () => {
      // optional, called on payment cancel
    },
    onError: (error) => {
      // optional, called on payment error
    },
  },
});
Note: see PayPal documentation for details on available style parameters.
When using a payment element such as
card with Stripe, it's necessary to tokenize card details before submitting a payment form. Note: Some payment methods such as PayPal will auto-submit once the user completes authorization via PayPal, but tokenizing is always required for credit card elements.
If successful,
tokenize() will automatically update the cart with relevant payment details. Otherwise, returns a validation error.
import swell from 'swell-js';

swell.init('my-store', 'pk_...');

swell.payment.createElements({
  card: {
    ...
  },
});

const form = document.getElementById('payment-form');
form.addEventListener('submit', async function (event) {
  event.preventDefault();
  showLoading();

  const result = await swell.payment.tokenize();

  hideLoading();

  if (result.error) {
    // inform the customer there was an error
  } else {
    // finally submit the form
    form.submit();
  }
});
If a payment element isn't available for your credit card processor, you can tokenize credit card information directly.
Returns an object representing the card token. Pass the token ID to a cart's
billing.card.token field to designate this card as the payment method.
const response = await swell.card.createToken({
  number: '4242 4242 4242 4242',
  exp_month: 1,
  exp_year: 2099,
  cvc: 321,
  // Note: some payment gateways may require a Swell `account_id` and `billing` for card verification (Braintree)
  account_id: '5c15505200c7d14d851e510f',
  billing: {
    address1: '1 Main Dr.',
    zip: 90210,
    // Other standard billing fields optional
  },
});
{
  token: 't_z71b3g34fc3',
  brand: 'Visa',
  last4: '4242',
  exp_month: 1,
  exp_year: 2029,
  cvc_check: 'pass', // fail, checked
  zip_check: 'pass', // fail, checked
  address_check: 'pass', // fail, checked
}
{ errors: { gateway: { code: 'TOKEN_ERROR', message: 'Declined', params: { cvc_check: 'fail', zip_check: 'pass', address_check: 'unchecked', }, }, }, }
Returns
true if the card number is valid, otherwise
false.
swell.card.validateNumber('4242 4242 4242 4242'); // => true
swell.card.validateNumber('1111'); // => false
Returns
true if the card expiration date is valid, otherwise
false.
swell.card.validateExpry('1/29'); // => true
swell.card.validateExpry('1/2099'); // => true
swell.card.validateExpry('9/99'); // => false
Returns
true if the card CVC code is valid, otherwise
false.
swell.card.validateCVC('321'); // => true
swell.card.validateCVC('1'); // => false
|
https://snyk.io/advisor/npm-package/swell-js
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
Cube for data input/output, import and export
Project description
Cube for data input/output, import and export
Massive Store.
WARNING: This store may be only used with PostgreSQL for now, as it relies on the COPY FROM method, and on specific PostgreSQL tables to get all the indexes.
Workflow of Massive Store
The Massive Store workflow is the following:
-
Drop indexes and constraints from the meta-data tables (entities, is_instance_of, …);
-
Insertion of data:
- using the create_entity function for entities;
- using the relate function for relations;
- using the related_by_iid function for relations based on external identifiers;
- each insertion of a rtype that has not been seen yet will trigger the creation of a temporary table for this rtype, to store the results.
- each insertion of an etype that has not been seen yet will remove all the indexes/constraints on the entity table.
-
At a given point, one should call the flush method:
- it will flush the entities data into the database based on COPY_FROM.
- it will flush the relations data into the database based on COPY_FROM.
- it will flush the relations-iid data into the database based on COPY_FROM.
- it will create the metadata (entities, …) for the inserted entities.
- it will commit.
-
If some relations are created based on external identifiers (relate_by_iid), the conversion should be manually done using the convert_relations method.
-
At the end of the insertion, one should call the cleanup method:
- it will re-create the indexes/constraints/primary key for the entities/relations tables.
- it will re-create the indexes/constraints on the meta-data tables.
- it will remove temporary tables and internal store tables.
Entities/Relations in Massive Store
Due to the technical constraints on the database insertion, there are some following specific points to notice:
- a create_entity call will return an entity with a specific eid. Eids are automatically dealt with by the Massive Store (it fetches a range of eids for its internal use), but you can pass a specific eid in the kwargs of the create_entity call to bypass the automatic assignment of an eid, as in the sketch below.
- inlined-relations are not supported in the relate method.
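For instance, a minimal sketch of passing an explicit eid (assuming the keyword argument is simply named eid, as described above):

# Bypass automatic eid assignment by passing eid explicitly in the kwargs.
entity = store.create_entity('Person', name=u'Alice', eid=2000)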
A buffer will be created for the call to the PostgreSQL COPY_FROM clause. If the separator used for the creation of this tabular file is found in the data of the entities (or relations), it will be replaced by the replace_sep of the store (default is '').
Basic use of Massive Store
A simple script using the Massive Store:
# Initialize the store
store = MassiveObjectStore(session)
# Initialize the Relation table
store.init_rtype_table('Person', 'lives', 'Location')

# Import logic
...
entity = store.create_entity('Person', ...)
entity = store.create_entity('Location', ...)

# Flush the data in memory to sql database
store.flush()

# Import logic
...
entity = store.create_entity('Person', ...)
entity = store.create_entity('Location', ...)

# Person_iid and location_iid are unique iid that are data dependant (e.g URI)
store.relate_by_iid(person_iid, 'lives', location_iid)
...

# Flush the data in memory to sql database
store.flush()

# Convert the relation
store.convert_relations('Person', 'lives', 'Location')

# Clean the store / rebuild indexes
store.cleanup()
In this case, iid_subj and iid_obj represent an unique id (e.g. uri, or id from the imported database) that can be used to create relations after importing entities.
Advanced use of Massive Store
The simple and default use of the Massive Store is conservative to avoid issues in meta-data management. However it is possible to increase insertion speed:
- the flushing of meta-data could be costly if done too many times. A good practice is to do it only once at the end of the import. To do so, set autoflush_metadata to False when creating the store, and call flush_meta_data at the end of the import (but before the call to cleanup).
- you may avoid committing at each flush by setting commit_at_flush to False when creating the store. In that case you should explicitly call the commit method at least once before flushing the meta-data and cleaning up the store.
- you could avoid dropping the different indexes and constraints using the drop_index attribute during the store creation.
- you could set a different starting point of the eids sequence using the eids_seq_start attribute during the store creation.
- additional callbacks could be given to deal with commit and rollback (on_commit_callback and on_rollback_callback).
Example of advanced use of Massive Store:
store = MassiveObjectStore(session,
                           autoflush_metadata=False,
                           commit_at_flush=False)
store.init_rtype_table('Location', 'names', 'LocationName')
for ind, infos in enumerate(ucsvreader(open(dumpname))):
    entity = {'name': infos[1], ...}
    entity['my_inlined_relation'] = my_dict.get(infos[2])
    entity = store.create_entity('Location', **entity)
    store.relate_by_iid(entity.cwuri, 'my_external_relation', infos[3])
    if ind and ind % 200000 == 0:
        store.flush()
        store.commit()
store.flush()
store.commit()
store.flush_meta_data()
store.convert_relations('Location', 'my_external_relation', 'Location',
                        'cwuri', 'cwuri')
store.cleanup()
Restoring a database after Massive Store failure
The Massive Store removes some constraints and indexes that are automatically rebuilt during the cleanup call. If there is an error during the import process, you can still call the cleanup method, or even create another store after the failure and call its cleanup method.
The Massive Store creates the following tables for its internal use:
- dataio_initialized: information on the initialized etype/rtype tables.
- dataio_constraints: the queries that may be used to restore the constraints/indexes for the different etype/rtype tables.
- dataio_metadata: the etypes that already have their meta-data pushed.
Slave Mode
A slave mode is available for parallel use of the Massive Store:
- a Massive Store (master) should be created.
- for all the possible etype/rtype that may be encountered during the import, the init_etype_table/init_relation_table methods of the master store should be called.
- different slave stores could be created using the slave_mode attribute during the store creation. The autoflush_metadata attribute should be set to False.
- each slave store could be used in a different thread, for creating entity and relation, and should only call to its flush and commit methods.
- The master store should call its flush_meta_data and cleanup methods at the end of the import, as in the sketch below.
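A minimal sketch of this slave-mode workflow (method and attribute names follow the description above; thread management is elided and left as an assumption):

# Master store: initialize every etype/rtype table that may be encountered.
master = MassiveObjectStore(session)
master.init_etype_table('Person')
master.init_relation_table('Person', 'lives', 'Location')

# Slave store (one per worker thread), with metadata flushing disabled.
slave = MassiveObjectStore(session, slave_mode=True, autoflush_metadata=False)
slave.create_entity('Person', name=u'Alice')
slave.flush()
slave.commit()

# Once all slaves are done, the master finalizes the import.
master.flush_meta_data()
master.cleanup()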
RDF Store
The RDF Store is used to import RDF data into a CubicWeb database, based on a Yams <-> RDF schema conversion. The conversion rules are stored in an XY structure.
Building an XY structure
You have to create a file (usually called xy.py) in your cube, and import the dataio version of xy:
from cubes.dataio import xy
You have to register the different prefixes (common prefixes as skos or foaf are already registered):
xy.register_prefix('diseasome', '')
By default, the entity type is based on the rdf property “rdf:type”, but you may change it using:
xy.register_rdf_etype_property('skos:inScheme')
It is also possible to give a specific callback to determine the entity type from the rdf properties:
def _rameau_etype_callback(rdf_properties):
    if 'skos:inScheme' in rdf_properties and 'skos:prefLabel' in rdf_properties:
        return 'Rameau'

xy.register_etype_callback(_rameau_etype_callback)
The URI is fetched from the “rdf:about” property, and can be normalized using a specific callback:
def normalize_uri(uri):
    if uri.endswith('.rdf'):
        return uri[:-4]
    return uri

xy.register_uri_conversion_callback(normalize_uri)
Defining the conversion rules
Then, you may write the conversion rules:
xy.add_equivalence allows you to add a basic equivalence between entity type / attribute / relations, and RDF properties. You may use “*” as a wild cart in the Yams part. E.g. for entity types:
xy.add_equivalence('Gene', 'diseasome:genes') xy.add_equivalence('Disease', 'diseasome:diseases')
E.g. for attributes:
xy.add_equivalence('* name', 'diseasome:name') xy.add_equivalence('* label', 'rdfs:label') xy.add_equivalence('* label', 'diseasome:label') xy.add_equivalence('* class_degree', 'diseasome:classDegree') xy.add_equivalence('* size', 'diseasome:size')
E.g. for relations:
xy.add_equivalence('Disease close_match ExternalUri', 'diseasome:classes') xy.add_equivalence('Disease subtype_of Disease', 'diseasome:diseaseSubtypeOf') xy.add_equivalence('Disease associated_genes Gene', 'diseasome:associatedGene') xy.add_equivalence('Disease chromosomal_location ExternalUri', 'diseasome:chromosomalLocation') xy.add_equivalence('* sameas ExternalUri', 'owl:sameAs') xy.add_equivalence('Gene gene_id ExternalUri', 'diseasome:geneId') xy.add_equivalence('Gene bio2rdf_symbol ExternalUri', 'diseasome:bio2rdfSymbol')
A base URI can be given to automatically determine if a Resource should be considered as an external URI or an internal relation:
xy.register_base_uri('')
A more complex logic can be used by giving a specific callback:
def externaluri_callback(uri):
    if uri.startswith(''):
        if uri.endswith('disease') or uri.endswith('gene'):
            return False
        return True
    return True

xy.register_externaluri_callback(externaluri_callback)
The values of attributes are built based on the Yams type. But you could use a specific callback to compute the correct values from the rdf properties:
def _convert_date(_object, datetime_format='%Y-%m-%d'):
    """ Convert an rdf value to a date """
    try:
        return datetime.strptime(_object.format(), datetime_format)
    except:
        return None

xy.register_attribute_callback('Date', _convert_date)
or:
def format_isbn(rdf_properties):
    if 'bnf-onto:isbn' in rdf_properties:
        isbn = rdf_properties['bnf-onto:isbn'][0]
        isbn = [i for i in isbn if i in '0123456789']
        return int(''.join(isbn)) if isbn else None

xy.register_attribute_callback('Manifestation formatted_isbn', format_isbn)
Importing data
Data may thus be imported using the “import-rdf” command of cubicweb-ctl:
cubicweb-ctl import-rdf <my-instance> <file-or-folder>
The default library used for reading the data is “rdflib” but one may use “librdf” using the “–lib” option.
It is also possible to force the rdf-format (it is automatically determined, but this may sometimes lead to errors), using the “–rdf-format” option.
Exporting data
The view ‘rdf’ may be called and will create an RDF file from the result set. It is a modified version of the CubicWeb RDFView that takes into account the more complex conversion rules from the dataio cube. The format can also be forced (default is XML) using the “format” option in the url (xml, n3 or nt).
Examples
Examples of use of dataio rdf import could be found in the nytimes and diseasome cubes.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/cubicweb-dataio/
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
Back to: Angular Tutorials For Beginners and Professionals
Routing in Angular Application
In this article, I am going to discuss Routing in Angular Application with Examples. Please read our previous article, where we discussed how to create Angular Custom Pipes in Detail. At the end of this article, you will understand the following pointers in detail.
- What is Routing in Angular?
- Why Routing in Angular Application?
- Understanding Angular Routing with an example.
- Configuring Routes in Angular Application.
- What is Router-outlet and routerLink?
- What is Angular Router?
What is Routing in Angular?
Angular Routing is a mechanism used for navigating between pages and displaying the appropriate component or page in the browser.
In other words, Routing in an Angular Application helps us navigate across the application from one view to another. It also allows us to maintain state, implement modules, and then load the modules based on the roles of the user. If this is not clear at the moment, don’t worry, we will try to understand everything with examples.
Why Routing in Angular Application?
We access our application through one root URL, and our application is not aware of any other URLs.
Most web applications need to support different URLs to navigate different pages of the application. This is where angular routing comes into picture.
Understanding Angular Routing with an example:
Let us understand the Routing in Angular Application with an example. We want to create a page as shown below.
When the user clicks on the student link, we need to display the following.
And when the user clicks on the student details link, we need to display the following.
Let us see how to implement this using Angular Routing. Here, we need to create two components, i.e. the student and studentdetail components.
Creating Student Component:
Open terminal and type ng g c student and press enter as shown in the below image.
Once you press enter it will create four files within the student folder which you can find within the src/app folder as shown in the below image.
Modifying the student.component.html file:
Open student.component.html file and then copy and paste the following code in it.
<p>You are in the student.component.html</p>
Creating Student Details Component:
Open terminal and type ng g c studentdetail and press enter as shown in the below image.
Once you press enter it will create four files within the studentdetail folder which you can find inside the src/app folder as shown in the below image.
Modifying studentdetail.component.html file:
Open studentdetail.component.html file and then copy and paste the following code in it.
<p>You are in studentdetail.component.html</p>
Adding Routing in Angular:
When you are creating an Angular 9 application, at the time of creation it asks whether you want to add Angular routing to your project or not. If you select yes, then it automatically adds the routing module to your project. If you select no, then you need to add it manually. You can use the below CLI command to generate the routing module in an Angular Application.
ng generate module app-routing --flat --module=app
Here --flat puts the file in the src/app folder instead of its own folder, and --module=app tells the CLI to register it in the imports array of the AppModule.
Configuring Routes in Angular Application:
Once you have created the routing module, you need to configure the paths and their respective components in the AppRoutingModule as shown below. As you can see, here we have created two paths, i.e. studentLink and studentdetailsLink (you need to use the path property to set the path; you can give any meaningful name here), and also set the respective components using the component property (here you need to provide the component class name).
So, open app-routing.module.ts file and then copy and paste the following code in it.
import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { StudentComponent } from './student/student.component';
import { StudentdetailComponent } from './studentdetail/studentdetail.component';

const routes: Routes = [
  {
    path: 'studentLink',
    component: StudentComponent
  },
  {
    path: 'studentdetailsLink',
    component: StudentdetailComponent
  }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }
Note: While generating the links you need to use the strings studentLink and studentdetailsLink. Let us see how to use these routing paths to generate links and navigate.
Generating Links:
In order to generate links, open app.component.html file and then copy and paste the following code in it.
<h2>Angular Routing Example</h2>
<a [routerLink] = "['/studentLink']" >Student</a>
<br/>
<a [routerLink] = "['/studentdetailsLink']" >student details</a>
<div>
  <router-outlet></router-outlet>
</div>
With the above changes in place, now run the application and you should get the output as expected. From the above HTML code, we need to understand two important concepts, i.e. routerLink and router-outlet.
Router-outlet:
The Router Outlet is a dynamic component that the router uses to display views based on router navigation. In our example, whenever the user clicks on the Student link, the student component view is displayed in the router-outlet div. So, the role of <router-outlet> is to mark where the router displays the view. This is the location where Angular will insert the component.
The <router-outlet> tells the router where to display routed view. The RouterOutlet is one of the router directives that become available to the AppComponent because AppModule imports AppRoutingModule which exported RouterModule.
Router Link:
With the help of the routerLink directive, you can link to routes of your application right from the HTML template. You just need to add the directive to an HTML element. When the user clicks on that element, Angular navigates to the specified location.
The routerLink is the selector for the RouterLink directive that turns user clicks into router navigations. You can assign a string to the Router link. This directive generates the link based on the route path.
Router Link: Client side
The syntax is given below. As you can see, within the anchor tag we have routerLink and we have specified studentLink as its path. If you remember, we set this studentLink path in the routing module and it points to the Student Component. So, when the Student link is clicked, the student component is going to be loaded in the router-outlet directive.
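For reference, the anchor tag syntax looks like this (the same form used in app.component.html above):

<a [routerLink]="['/studentLink']">Student</a>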
Router Link: Server side
Sometimes it is also required to set the route dynamically based on some condition, and that can be done on the server side. For your application to work with server-side rendering, the element hosting the directive has to be a link (anchor) element.
It is also possible to navigate to a route from code. To do so, we need the Angular Router, and this needs to be done in your TypeScript file. The syntax is given below.
Once we have the router, the navigation is quite simple. Just call the navigate function of Router. This function takes an array. The first element of the array defines the route we want to navigate to. The second is optional and allows us to pass a route parameter. The syntax is given below.
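As a minimal sketch (the full component is shown in the example below):

constructor(private router: Router) { }

GetStudent() {
  // First array element: the route path. A second element could carry a route parameter.
  this.router.navigate(['/studentLink']);
}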
Let us see an example to understand this:
First modify the app.component.ts file as shown below.
import { Component } from '@angular/core';
import { Router } from '@angular/router';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  constructor(private router: Router) { }

  GetStudent() {
    this.router.navigate(['/studentLink']);
  }

  GetStudentDetails() {
    this.router.navigate(['/studentdetailsLink']);
  }
}
Then modify app.component.html file as shown below.
<h2>Angular Routing Example</h2>
<button (click)="GetStudent()">Student</button>
<button (click)="GetStudentDetails()">GetStudentDetails</button>
<div>
  <router-outlet></router-outlet>
</div>
With the above changes in place, now run the application and it should work as expected.
What is Angular Router?
The Angular Router is an official Routing Library managed by the Angular Core Team. It's a JavaScript router implementation that’s designed to work with Angular and is packaged as @angular/router.
The Angular Router takes care of the duties of a JavaScript router. It activates all required Angular Components to compose a page when a user navigates to a certain URL. It lets the user navigate from one page to another without a page reload.
It updates the browser's history so that the user can use the back and forward buttons when navigating back and forth between pages.
In the next article, I am going to discuss Redirecting Routes in Angular Applications in detail. Here, in this article, I tried to explain the basics of Routing in an Angular Application. I hope you enjoy this article.
1 thought on “Routing in Angular”
While creating routing i got error “unknown -flat”. after i refered Angularjs site i got the below cmd. now its working…
ng generate module app-routing --flat --module=app
|
https://dotnettutorials.net/lesson/routing-angular/
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
I received an email asking how to structure your Django Project if you are using an API.
It’s common for people to want to know the best practices for structuring your code with your Django project.
People think that maybe if I can structure my project correctly, it would be easier to do X.
Sometimes your
.py files can get so out of hand that you might want to break them up into smaller files.
Your API is just another Django app
But, How do you organize your API views and URLs?
You can do it pretty simply.
When I create an API using Django REST Framework, I'll put my API code in its own separate app and put all the views, URLs, Serializers, etc. inside that one directory. Everything is exactly the same.
I’ll update my URLs so that each route to my API starts with something like
/api/v1/
Steps to creating an API
- Go into the directory with
manage.py
- Run command to create a new app for your new API:
python manage.py startapp api
- Update the URLConf and include the URLs for your API:
url(r'^api/v1/', include('api.urls', namespace="api")),
- Add some files in your api folder: api_views.py, serializers.py, etc. (a sketch of what these files might contain follows this list)
- Then, sign up for my Free Django REST Framework Email course to learn how to setup your own API.
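As a rough sketch (assuming Django REST Framework and a hypothetical Song model -- the file names match step 4, but the model, serializer, and route names are only illustrative), the api app might contain something like:

# api/serializers.py
from rest_framework import serializers
from music.models import Song  # hypothetical app and model

class SongSerializer(serializers.ModelSerializer):
    class Meta:
        model = Song
        fields = ('id', 'title', 'artist')

# api/api_views.py
from rest_framework import generics
from music.models import Song
from .serializers import SongSerializer

class SongListAPIView(generics.ListAPIView):
    queryset = Song.objects.all()
    serializer_class = SongSerializer

# api/urls.py
from django.conf.urls import url
from .api_views import SongListAPIView

urlpatterns = [
    url(r'^songs/$', SongListAPIView.as_view(), name='song_list'),
]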
|
https://chrisbartos.com/articles/how-to-structure-your-django-app-with-api/
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
L10n
L10n gives your GORM models the ability to localize for different Locales. It can be a catalyst for the adaptation of a product, application, or document content to meet the language, cultural, and other requirements of a specific target market.
Usage
L10n utilizes GORM callbacks to handle localization, so you will need to register callbacks first:
import (
  "github.com/jinzhu/gorm"
  "github.com/qor/l10n"
)

func main() {
  db, err := gorm.Open("sqlite3", "demo_db")

  l10n.RegisterCallbacks(&db)
}
Making a Model Localizable
Embed
l10n.Locale into your model as an anonymous field to enable localization, for example, in a hypothetical project which has a focus on Product management:
type Product struct {
  gorm.Model
  Name string
  Code string
  l10n.Locale
}
l10n.Locale will add a
language_code column as a composite primary key with existing primary keys, using GORM's AutoMigrate to create the field.
The
language_code column will be used to save a localized model's Locale. If no Locale is set, then the global default Locale (
en-US) will be used. You can override the global default Locale by setting
l10n.Global, for example:
l10n.Global = 'zh-CN'
Create localized resources from global product
// Create global product
product := Product{Name: "Global product", Description: "Global product description"}
DB.Create(&product)
product.LanguageCode   // "en-US"

// Create zh-CN product
product.Name = "中文产品"
DB.Set("l10n:locale", "zh-CN").Create(&product)

// Query zh-CN product with primary key 111
DB.Set("l10n:locale", "zh-CN").First(&productCN, 111)
productCN.Name         // "中文产品"
productCN.LanguageCode // "zh"
Create localized resource directly
By default, only global data is allowed to be created; localized data has to be localized from the global record.
If you want to allow user create localized data directly, you can embeded
l10n.LocaleCreatable for your model/struct, e.g:
type Product struct {
  gorm.Model
  Name string
  Code string
  l10n.LocaleCreatable
}
Keeping localized resources' fields in sync
Add the tag
l10n:"sync" to the fields that you wish to always sync with the global record:
type Product struct {
  gorm.Model
  Name string
  Code string `l10n:"sync"`
  l10n.Locale
}
Now the localized product's
Code will be the same as the global product's
Code. The
Code is not affected by localized resources, and when the global record changes its
Code the localized records'
Code will be synced automatically.
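For example, a small sketch of that behavior (using the same locale key as above; the Update call is a plain GORM update and is only illustrative):

// Change the Code on the global record...
DB.Model(&product).Update("Code", "L1212")

// ...and the localized record picks it up automatically.
DB.Set("l10n:locale", "zh-CN").First(&productCN, product.ID)
// productCN.Code == "L1212"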
Query Modes
L10n provides 5 modes for querying.
- global - find all global records,
- locale - find localized records,
- reverse - find global records that haven't been localized,
- unscoped - raw query, won't auto add
localeconditions when querying,
- default - find localized record, if not found, return the global one.
You can specify the mode in this way:
dbCN := db.Set("l10n:locale", "zh-CN")

mode := "global"
dbCN.Set("l10n:mode", mode).First(&product, 111)
// SELECT * FROM products WHERE id = 111 AND language_code = 'en-US';

mode := "locale"
db.Set("l10n:mode", mode).First(&product, 111)
// SELECT * FROM products WHERE id = 111 AND language_code = 'zh-CN';
Qor Integration
Although L10n could be used alone, it integrates nicely with QOR.
By default, QOR will only allow you to manage the global language. If you have configured Authentication, QOR Admin will try to obtain the allowed Locales from the current user.
- Viewable Locales - Locales for which the current user has read permission:
func (user User) ViewableLocales() []string { return []string{l10n.Global, "zh-CN", "JP", "EN", "DE"} }
- Editable Locales - Locales for which the current user has edit permission:

func (user User) EditableLocales() []string {
  if user.role == "global_admin" {
    return []string{l10n.Global, "zh-CN", "EN"}
  } else {
    return []string{"zh-CN", "EN"}
  }
}
|
https://doc.getqor.com/guides/l10n.html
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
#!/home/philou/install/perl-5.8.2-threads/bin/perl -w
use strict;
use threads;
use threads::shared;
use Thread::Semaphore;
my $s = 'Thread::Semaphore'->new( 1 );
sub t {
while( $s->down() ) {
print "DOWN!\n";
sleep( 1 );
}
}
'threads'->new( \&t );
while( <> ) {
$s->up();
print "UP!\n";
}
[download]
Declaring the `$s' variable as `shared' does not help at all :
#!/home/philou/install/perl-5.8.2-threads/bin/perl -w
use strict;
use threads;
use threads::shared;
use Thread::Semaphore;
my $s : shared = 'Thread::Semaphore'->new( 1 );
sub t {
while( $s->down() ) {
print "DOWN!\n";
sleep( 1 );
}
}
'threads'->new( \&t );
while( <> ) {
$s->up();
print "UP!\n";
}
[download]
thread failed to start: Can't call method "down" on unblessed referenc
+e at ./t.pl line 11.
[download]
Any suggestion appreciated.
Philou
my $s = &shared( Thread::Semaphore->new );
[download]
First off, I've never used Thread::Semaphore.
Part of the reason I've never used it is because to my knowledge, you cannot share objects between threads, but for Thread::Semaphore to be useful, you would need to be able to invoke the methods from multiple threads? And I don't believe that this will work.
From my perspective, almost everything in the Thread::* namespace was written for use with perl5005threads, which are now deprecated in favour of iThreads, because they never really worked properly.
As such, I think that everything in the Thread::* namespace should be withdrawn or if it is really iThreads compatible, be renamed into the threads::* namespace so that it becomes clear which of those packages in Thread::* are actually usable with iThreads.
Of course, this won't happen because the namespace nazis will say that having packages where the first element of the name is all lowercase will confuse people, because all lowercase is reserved for pragmas. The fact is that 90% of the modules in the Thread::* namespace were never designed or tested for use with iThreads, which just makes people that try to use them with iThreads think that iThreads are broken. But then, most of those same "namespace nazis" don't believe that Perl should have ever had threads in the first place, wouldn't use them or promote them on principle, and therefore don't give a flying f*** about the confusion this stupid decision causes.
Of course, never having tried to use Thread::Semaphore (I never saw a the need for it), I could be completely wrong about it. Thread::Queue does work and I couldn't live without that module. Even so, the gist of my rant above remains true regardless.
Until something is done to segregate those modules that are designed for, and tested with, iThreads; from those that are leftovers from 5005threads that will never work; from those that were designed to be used with Forks (a mechanism for bypassing threads completely), that have never been tested for use with threads proper, all of the heroic effort by Arthur Bergman to get iThreads to where they are now will tend to be wasted, because people will be trying to use the wrong modules with iThreads and fall foul of the stupid namespace.
|
http://www.perlmonks.org/index.pl?node_id=394885
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Toyblocks: A toy blockchain implementation in Python
toyblocks - A small Python implementation to begin understanding blockchains... (more…)Read more »
“Speaker: Raymond Hettinger Python’s dictionaries are stunningly good. Over the years, many great ideas have combined together to produce the modern implemen… Read more
import operator f = lambda n: reduce(operator.mul, range(1,n+1))...Read more »
What open source programming language is the best for data science, R or Python »
|
https://fullstackfeed.com/modern-python-dictionaries-a-confluence-of-a-dozen-great-ideas-pycon-2017/
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Right now I'm trying to create an oppish translator. That is, after a consonant or several consonants in a row, you add 'op' to those letters. As an example, cow would become copowop or street which would become stropeetop. This is what I have so far:
def oppish(phrase): #with this function I'm going to append 'op' or 'Op' to a string.
    consonants = ['b','c','d','f','g','h','i','j','k','l','m','n','p','q','r','s','t','v','w','x','y','z']
    vowels = ['a', 'e', 'i', 'o', 'u'] #this and the preceding line create a variable for the vowels we will be looking to append 'op' to.
    if phrase == '': #error case. if the input is nothing then the program will return False.
        return False
    phrase_List = list(' ' + phrase) # turns the input phrase into a list which allows for an index to search for consonants later on.
    new_phrase_List = list() #creates new list for oppish translation
    for i in range(1, len(phrase_List)):
        if phrase_List[i] == phrase_List[1]:
            new_phrase_List.append(phrase_List[i])
        elif phrase_List[i] in consonants:
            new_phrase_List.append('op') #adds op to the end of a consonant within the list and then appends it to the newlist
        new_phrase_List.append(phrase_List[i]) #if the indexed letter is not a consonant it is appended to the new_phrase_list.
    print 'Translation: ' + ''.join(new_phrase_List)

oppish('street')
The only problem here is that the above code yields this
Translation: ssoptopreeopt
I'm not sure what I've done wrong, I've tried going through a visualizer but to no avail. All help is appreciated! :)
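For what it's worth, one way to express the intended rule -- appending 'op' after every maximal run of consonants -- is a regular expression substitution. This is only a sketch for lowercase input, treating 'y' as a consonant the way your list does:

import re

def oppish(phrase):
    if phrase == '':
        return False
    # Append 'op' after each maximal run of consonants.
    return 'Translation: ' + re.sub(r'([bcdfghjklmnpqrstvwxyz]+)', r'\1op', phrase)

print(oppish('cow'))     # Translation: copowop
print(oppish('street'))  # Translation: stropeetop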
|
https://www.daniweb.com/programming/software-development/threads/485442/how-to-append-a-string-to-only-consonants-in-a-list
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
AVR910: In-System Programming using mbed
This notebook page details the implementation of the AVR910 In-System Programming protocol as specified in the AVR910 application note.
How it works
The AVR910 protocol uses a six pin connection which consists of a 3-wire SPI interface, a reset line, VCC and GND. All of these can be directly connected to the mbed. To initiate programming, the reset line must be pulled low, after which commands [specified in the AVR910 application note and AVR microcontroller datasheets] can be sent to the device being programmed.
The programmer first issues an enable programming command, reads the signature bytes of the device, then takes a raw binary file [.bin specified by the PATH_TO_BINARY definition in AVR910.h] and loads it byte by byte into the AVR microcontroller's page buffer which is written to flash memory whenever it is full.
At the end, flash memory is read byte by byte and compared to the binary file to check whether programming was successful or not.
Program
You can find the code for the programmer here:
It should be noted that the current implementation assumes the AVR microcontroller has paged flash memory.
Code
/**
 * Program an AVR with an mbed.
 */

// ATMega328 Datasheet:
//
//

#include "AVR910.h"

LocalFileSystem local("local");
Serial pc(USBTX, USBRX);
AVR910 mbedISP(p5, p6, p7, p8); //mosi, miso, sclk, nreset.

int main() {

    int success = -1;

    FILE *fp = fopen(PATH_TO_BINARY, "rb");

    if(fp == NULL){
        pc.printf("Failed to open binary. Please check the file path\n");
    } else{
        pc.printf("Binary file opened successfully\n");
        success = mbedISP.program(fp);
        fclose(fp);
    }

    if(success < 0){
        printf("Programming failed.\n");
    } else{
        printf("Programming was successful!\n");
    }
}
To do
- Support AVR chips which don't use paged memory
- Introduce a verbose and quiet mode to tidy up debugging statements
- Check command responses for synchronization errors during programming
- Anything else I've forgotten!
Example
I used the programmer to put this Attitude Heading Reference System (AHRS) onto the SparkFun 9DOF Razor IMU, which contains an ATMega328P.
After compiling the code with arduino-0018, I used hex2bin to convert the resulting .hex output to a raw binary which was then downloaded onto the chip using the programmer.
The AHRS program spits out heading data over a 2-wire serial interface - a nice way of visualising it is the python graphic interface via an FTDI USB-Serial cable:
However, as I needed the raw data on the mbed, I simply used a Serial object and parsed out the roll, pitch and yaw values:
#include "mbed.h"

Serial pc(USBTX, USBRX);
Serial razor(p9, p10);

float roll;
float pitch;
float yaw;

int main() {
    razor.baud(57600);
    while(1) {
        //Make sure we get to the start of a line.
        while(razor.getc() != '\n');
        razor.scanf("!ANG:%f,%f,%f\n", &roll, &pitch, &yaw);
        pc.printf("%f,%f,%f\n", roll, pitch, yaw);
    }
}
Now the data is available for any application I desire!
very cool, I was a little worried that I might be forced to only use the bootloader for the Razor IMU. Now I can program it using the mbed. Many thanks for uploading this. Could you let us know what we should look for in the Atmel data sheet to figure out if a given microcontroller is programmable via this method.
|
https://os.mbed.com/users/aberk/notebook/avr910-in-system-programming-using-mbed/
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
TemplatePrinter
Provides a print template with access to page setup and printer settings and control over print jobs initiated from the template.
Remarks
The methods of TEMPLATEPRINTER give a print template control over the start and end of print jobs (startDoc and stopDoc), control over the printing of each individual page in a print job (printPage), and control over the display of printing dialog boxes (showPageSetupDialog and showPrintDialog). The properties of TEMPLATEPRINTER enable a print template to set or retrieve the page setup settings and current print job settings. For instance, a print template might set or retrieve pageWidth and pageHeight for page setup, or pageTo and pageFrom to determine the page range to print.
A TEMPLATEPRINTER element is intended for use when building a print template. Its functionality is disabled when it is used outside a print template.
This element must occur once and only once in a print template if the template is to support printing.
For security reasons, the TEMPLATEPRINTER element is enabled only when embedded in a print template; otherwise, this element is disabled. For more information, please review Beyond Print Preview: Print Customization for Internet Explorer 5.5.
When using the TEMPLATEPRINTER element, you must prefix it with an XML namespace. Declare the namespace by using the IMPORT processing instruction. For example, the namespace "IE" can be declared using the following statement:
<?import implementation="#default" namespace="IE">
The TEMPLATEPRINTER element syntax to use with this namespace is <IE:TEMPLATEPRINTER ... />.
The TEMPLATEPRINTER element is an unscoped element; that is, it does not have a closing tag. It must have a forward slash (/) before its closing bracket.
In some cases—usually when the print or print preview dialog box is displayed—the content source document has time to load before printing starts. In other cases, usually when there is no user prompt prior to printing, a print template must be designed to wait for the document to load before beginning a print job. Accomplish this by setting an onreadystatechange event handler on the contentDocument that checks the document's readyState property to see when it equals "complete." This handler is not always necessary, so a template should only set it when the property is not "complete" at the start of a print job.
The following script block and the opening lines of the BODY element are taken from a print template. They show the minimum features necessary to provide printing support in a print template. The Init function is an event handler attached to the print template's body element for the onload event. "Layout1" is the id for the first LAYOUTRECT of the print template. "PageX," where X is a positive integer, is the id format for the DEVICERECTs in the print template.
<SCRIPT language="JScript">
<?import implementation="#default" namespace="IE">
function Init()
{
    switch (dialogArguments.__IE_PrintType)
    {
    case "Prompt":
        if (Printer.showPrintDialog())
            DoPrint();
        break;
    case "NoPrompt":
        DoPrint();
        break;
    case "Preview":
    default:
        break;
    }
}

function DoPrint()
{
    if (Layout1.contentDocument.readyState == "complete")
    {
        // This block is called when printing with user prompt
        // because the Print and Preview dialog boxes give time for
        // the content document to complete loading
        PrintNow();
    }
    else
    {
        // This block is usually called when printing without a user prompt.
        // It sets an event handler that listens for the loading of the content
        // document before printing. Sometimes, however, even without a user prompt,
        // the content document is loaded in time for the previous
        // block to execute.
        Layout1.contentDocument.onreadystatechange = PrintWhenContentDocComplete;
    }
}

function PrintWhenContentDocComplete()
{
    if (Layout1.contentDocument.readyState == "complete")
    {
        Layout1.contentDocument.onreadystatechange = null;
        PrintNow();
    }
}

function PrintNow()
{
    firstPage = Printer.pageFrom;
    lastPage = Printer.pageTo;
    Printer.startDoc("A print job");
    for (i = firstPage; i <= lastPage; i++)
    {
        if (document.all("Page" + i))
            Printer.printPage("Page" + i);
        else
            alert("Print Error");
    }
    Printer.stopDoc();
}
</SCRIPT>
</HEAD>
<BODY onload="Init()">
<IE:TEMPLATEPRINTER . . .
Requirements
See also
Reference
Other Resources
Beyond Print Preview: Print Customization for Internet Explorer 5.5
Print Preview 2: The Continuing Adventures of Internet Explorer 5.5 Print Customization
|
https://docs.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/platform-apis/aa969431(v=vs.85)
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Debug Windows (3:29) with Jeremy McLain and Justin Horner
We’ve set several breakpoints in our application, but what if we want to remove a few, or simply disable them? Luckily, Visual Studio ships with windows for managing breakpoints so we don’t have to remember where every breakpoint in our application is located.
Breakpoints Window
- Open the Breakpoints Window using the menu (Debug -> Windows -> Breakpoints) to show all the breakpoints set in the solution
- In a large application, the breakpoints window will be extremely helpful because it is the central place where breakpoints can be managed
Immediate Window
Sometimes we’ll want to create variables and call methods while debugging. We can use the Immediate Window to accomplish this.
- Open the Immediate Window via the menu (Debug > Windows > Immediate)
Call Stack Window
What if we’ve hit a breakpoint, but want to know what’s been executed previously? The Call Stack Window in Visual Studio will show us what’s been executed and will update as we step through code to different functions/methods.
- Open the Call Stack Window via the menu (Debug > Windows > Call Stack)
Bookmarks Window
While being able to navigate around our code using breakpoints is handy, you might want to be able to navigate code that you don’t necessarily want to debug. Managing enable/disable states for breakpoints you’re using simply to navigate is a bit overkill. Once again, Visual Studio has a mechanism that will allow us to set places just for the purpose of navigation called Bookmarks.
- Open the Bookmark Window via the menu (View -> Bookmark Window)
- The Bookmarks Window will allow you to set bookmarks on lines of code that you want to revisit later. You can also organize the bookmarks into virtual folders to keep track of the different areas they represent.
- 0:00
We've set several breakpoints in our application, but
- 0:03
what if we want to remove a few or simply disable them.
- 0:07
The Breakpoints window provides a central place to manage all of our breakpoints.
- 0:11
This is super helpful in a large project.
- 0:13
To open the breakpoints window, click Debug>Window and then Breakpoints.
- 0:19
Notice, that it has all of our breakpoints listed here.
- 0:23
From here, we're able to quickly navigate to every breakpoint location in our code
- 0:27
simply by double clicking the rows.
- 0:29
You might wanna keep a breakpoint by temporarily avoid hitting it when
- 0:32
debugging.
- 0:33
From this window,
- 0:34
you can use the checkboxes beside the breakpoints to enable or disable them.
- 0:38
Let's disable a bunch of these breakpoints now.
- 0:44
We can also create a special type of breakpoint from here called
- 0:47
a Function Breakpoint by clicking on the New button.
- 0:51
This Data Breakpoint type is only available when debugging C++ code
- 0:55
right now but we can create function breakpoints.
- 0:58
From here, we enter the name of the function.
- 1:01
This will cause the debugger to pause before entering any method named Prompt.
- 1:06
This is very helpful when debugging methods that might be overridden in
- 1:10
subclasses or when we want to break on all implementations of an interface.
- 1:14
This will cause the debugger to pause before running the base method or
- 1:18
any overridden methods.
- 1:20
I'll disable this right now.
- 1:24
Finally, I want to mention that from the Breakpoints window, you can export and
- 1:28
import sets of breakpoints.
- 1:30
That's the purpose of these two buttons up here.
- 1:33
This is one way to tell your fellow developers where exactly in the code
- 1:37
an issue can be found or if you need to switch to debugging an unrelated issue,
- 1:42
you can save off the breakpoints and then come back to them at a later time.
- 1:46
When a breakpoint is hit, we often want to know how the program got there.
- 1:51
That's the purpose of the Call Stack window.
- 1:53
This is one of the windows that's usually open when Visual Studio is in debug mode.
- 1:58
You can also open it by going to the debug windows menu.
- 2:02
Lets run the debugger until we hit one of these breakpoints here where we're setting
- 2:05
the song.name property.
- 2:12
We can look here on the Call Stack to see that we've stopped where we're setting
- 2:16
the Name property.
- 2:18
We can see that this is being called from within the AddSong method,
- 2:23
which in turn was called from the Main method.
- 2:27
To go to any one of these methods, we can just double click on them.
- 2:30
There are a bunch of options here when we right click on the call stack.
- 2:34
Here, we can change what is shown.
- 2:36
We can show or hide which assembly the method resides in or
- 2:39
we can also pick whether or not to show the parameter types, names and values.
- 2:44
This is helpful because the Call Stack can get quite full when everything is shown.
- 2:49
Notice that nothing calls the Main method or does it?
- 2:53
This is only showing code that is in this Visual Studio project.
- 2:58
We can see what calls the Main method by right-clicking and
- 3:01
clicking Show External Code.
- 3:08
You might be surprised to see that there are so
- 3:10
many things involved with starting your program.
- 3:13
Calling Main isn't the only place we run into external code.
- 3:16
We see it all the time when using code that's calling our code.
- 3:20
Such as in web applications or other types of software frameworks.
- 3:24
It's sometimes helpful to take a deeper look into the Call Stack to understand how
- 3:28
things are working.
|
https://teamtreehouse.com/library/debug-windows
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Displaying Validation Messages (7:20) with Chris Ramacciotti
As the final piece of our form validation discussion, in this video we cover how to make validation results available to Thymeleaf templates, as well as how to use Thymeleaf to display validation messages if there are any present.
Git Command to Sync Your Code to the Start of this Video
git checkout -f s4v.
Validation Messages in Your Annotations
One approach to customizing error messages is to use the validation annotation's
message element to specify exactly the message you'd like to use. For example:
@Size(min = 5, max = 12, message = "The category name must be {min} to {max} characters in length.")
private String name;
If validation fails on the
name field, the message above will be interpolated as "The category name must be 5 to 12 characters in length."
Externalizing Validation Messages
If you want to get validation messages completely out of your source code (and you should), you have some options. A useful option is to include a file named messages.properties on the classpath (in src/main/resources works nicely). In this file, you'll add a property key for each validation message you wish to define. Then, as the value for each key, you'll type the same message as you would have used in the annotation. Here is an example of using a properties file to do the equivalent of the above annotation:
messages.properties
category.name.size = The category name must be {min} to {max} characters in length.
Category.java
@Size(min = 5, max = 12, message = "{category.name.size}")
private String name;
In order to configure Spring to use your messages.properties file as the source of validation messages, you'll need to update your
AppConfig (or other
@Configuration class) to extend
WebMvcConfigurerAdapter, and include the following (unrelated code omitted for brevity):
AppConfig.java
public class AppConfig extends WebMvcConfigurerAdapter {
    @Bean
    public MessageSource messageSource() {
        ReloadableResourceBundleMessageSource bean = new ReloadableResourceBundleMessageSource();
        bean.setBasename("classpath:messages");
        bean.setDefaultEncoding("UTF-8");
        return bean;
    }

    @Bean
    public LocalValidatorFactoryBean validator() {
        LocalValidatorFactoryBean bean = new LocalValidatorFactoryBean();
        bean.setValidationMessageSource(messageSource());
        return bean;
    }

    @Override
    public Validator getValidator() {
        return validator();
    }
}
- 0:00
To keep our users fully informed of what's happening when they try to add a category
- 0:04
and we redisplay the form, let's add some form validation messages.
- 0:08
Here we'll once again utilize the redirect attributes object to pass
- 0:12
on validation results to the form upon redirection.
- 0:16
This is where in my opinion,
- 0:18
as of the recording of this video, Spring falls a bit short on data validation.
- 0:23
Though this solution that I'm going to give you works wonderfully,
- 0:26
it isn't one that's overly apparent, so here it is.
- 0:31
If the binding result reveals that we do have errors upon validation, we want
- 0:36
to include those validation errors for redirects, so we'll do that right here.
- 0:40
We will include validation errors upon redirect,
- 0:47
so to do that we say redirectAttributes.addFlashAttribute,
- 0:51
that's not new that's what we did with the categories.
- 0:54
So that the data that the user entered for a category will
- 0:57
be included when the form is redisplayed, it'll pre-populate the form.
- 1:01
Now up here, here's the name of the class that we need to add.
- 1:05
It's a fully qualified class name
- 1:10
org.springframework.validation.BindingResults.Category.
- 1:20
And then we will add the binding result object under that
- 1:25
attribute name right there.
- 1:28
Notice that we are going to use the fully qualified name
- 1:31
of the binding result class, and then we will use .category.
- 1:35
In the Thymeleaf template, Thymeleaf will be able to combine
- 1:41
the data that's stored in that category, that is this model attribute,
- 1:46
it will be able to combine that data with this validation data here.
- 1:52
So then, Thymeleaf will have access to the binding result in a way that it expects.
- 1:56
And speaking of Thymeleaf, let's head there now and make the necessary changes.
- 2:00
So I want to go to the category's form view.
- 2:03
Now that this template will have access to the binding result,
- 2:07
what we can do is add markup to display when there are errors for any given field.
- 2:12
Let me show you an example.
- 2:14
I'll start with the category name so I'll add some markup after this input element.
- 2:19
I'll add a div with the class name of error-message.
- 2:24
The contents of this div will contain the actual error message,
- 2:28
should validation errors be present.
- 2:30
So to make sure that this div is only displayed when validation errors
- 2:34
are present, we'll use a th:if attribute and here's how that looks.
- 2:40
With a dollar sign in a pair of curly braces inside we will say
- 2:45
fields.hasErrors and we will check the name field.
- 2:51
And the text we want to display in this div will come from some default error
- 2:57
message, which we'll get to in just a second, but to make sure that we have
- 3:00
access to that error messaging instead of using a th:text we'll use a th:errors.
- 3:07
And as the value for that we will reference the bound object's name property.
- 3:14
So what's going on here is that we're using this fields object,
- 3:18
which allows us to query validation errors with the hasErrors method.
- 3:23
And referencing the currently bound object, as I mentioned, which is category,
- 3:28
we are referencing its name property.
- 3:33
So what is happening here is this method call right here, fields.hasErrors,
- 3:38
is querying that binding result object that we added with
- 3:42
org.springframework.validation.BindingResult.category.
- 3:46
And this right here references the errors of the currently bound object,
- 3:52
specifically, its name property.
- 3:54
And if it has errors, then the text that will be displayed comes from default
- 3:59
messaging and we want that default messaging to be related to the validation
- 4:04
that's present on the name field of the category entity.
- 4:10
And now while we're at it, let's do the same for the color code.
- 4:14
What I'm going to do is I'm going to copy that div and paste it for
- 4:20
the color code and simply change the field names.
- 4:25
So I will paste that right here and
- 4:29
I'll change this to colorCode and this to colorCode.
- 4:36
That is the property that I am validating,
- 4:39
which is part of the currently bound object, the category object.
- 4:44
Now so that the CSS I included for error messages catches in the browser,
- 4:48
I'm going to add the error class to the enclosing div that is this div right here.
- 4:57
As well as this div, I wanna add the error class
- 5:00
if errors are present on those respective fields, and here is how that looks.
- 5:05
You can take whatever classes are present in the static HTML and
- 5:08
you can use the th:classappend attribute.
- 5:13
So I'll drop in here a quick condition, and I'll return to this in a second such
- 5:17
that when that condition is true I want to append the class name of error.
- 5:22
Otherwise I want to append a blank class name to that,
- 5:27
that is, append nothing to these class names right here.
- 5:31
And what do I put in here?
- 5:33
Well, I can put the same expression that I used in here to display the div
- 5:39
only if the name field has validation errors, I'll drop that right in there.
- 5:44
And then I'll do the same for the div down below.
- 5:46
And so that I don't have to retype all that, I will copy that, I will paste it
- 5:51
in here, and I'll make sure to change the field name to colorCode.
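Put together, the relevant markup looks roughly like this, assuming the form is bound with th:object="${category}" and uses the field and error-message classes described above (note that in Thymeleaf syntax the fields utility object is written #fields):
<div class="field" th:classappend="${#fields.hasErrors('name')} ? 'error' : ''">
    <input type="text" th:field="*{name}" />
    <div class="error-message" th:if="${#fields.hasErrors('name')}" th:errors="*{name}">Name errors</div>
</div>

<div class="field" th:classappend="${#fields.hasErrors('colorCode')} ? 'error' : ''">
    <input type="text" th:field="*{colorCode}" />
    <div class="error-message" th:if="${#fields.hasErrors('colorCode')}" th:errors="*{colorCode}">Color code errors</div>
</div>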
- 5:57
Great. Now we've got all the pieces in place to
- 6:01
display form validation messages, so let's reboot our app and check it out.
- 6:06
Mine is still running, so I will go ahead and kill my last instance.
- 6:11
And let's reboot that.
- 6:14
My code is currently compiling and now my Spring app is booting.
- 6:19
If all goes well, the application will start successfully,
- 6:22
it looks like it did so.
- 6:23
Now with that started, let's switch to Chrome and
- 6:26
try to add a category without entering any data.
- 6:31
So I will try to add a category, I won't add any data and great, there they are.
- 6:37
Now you might be wondering where does the text for
- 6:40
each of these error messages come from?
- 6:42
Currently, we're seeing default messages that come with those validation
- 6:45
annotations, but there are a couple ways to customize those so
- 6:50
check the teacher's notes for options.
- 6:52
An important feature to note and
- 6:54
demonstrate here is what I've already mentioned.
- 6:57
And that is that the flash attributes added to the redirect attributes object
- 7:01
are present only for one redirect.
- 7:05
So this means that our next request won't contain this data, so
- 7:09
if I refresh, they're gone.
- 7:14
Next, we'll talk about a different kind of messaging that lets users know about
- 7:18
the actions they've just taken.
|
https://teamtreehouse.com/library/displaying-validation-messages
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Replacing get_absolute_url
This page is a work in progress - I'm still figuring out the extent of the problem before I start working out a solution.
The problem
It's often useful for a model to "know" its URL. This is especially true for sites that follow RESTful principles, where any entity within the site should have one and only one canonical URL.
It's also useful to keep URL logic in the same place as much as possible. Django's {% url %} template tag and reverse() function solve a slightly different problem - they resolve URLs for view functions, not for individual model objects, and treat the URLconf as the single point of truth for URLs. {% url myapp.views.profile user.id %} isn't as pragmatic as {{ user.get_absolute_url }}, since if we change the profile-view to take a username instead of a user ID in the URL we'll have to go back and update all of our templates.
Being able to get the URL for a model is also useful outside of the template system. Django's admin, syndication and sitemaps modules all attempt to derive a URL for a model at various points, currently using the get_absolute_url method.
The current mechanism for making models aware of their URL is the semi-standardised get_absolute_url method. If you provide this method on your model class, a number of different places in Django will use it to create URLs. You can also over-ride this using settings.ABSOLUTE_URL_OVERRIDES.
Unfortunately, get_absolute_url is mis-named. An "absolute" URL should be expected to include the protocol and domain, but in most cases get_absolute_url just returns the path. It was proposed to rename get_absolute_url to get_url_path, but this doesn't make sense either as some objects DO return a full URL from get_absolute_url (and in fact some places in Django check to see if the return value starts with http:// and behave differently as a result).
From this, we can derive that there are actually two important URL parts for a given model:
- The full URL, including protocol and domain. This is needed for the following cases:
- links in e-mails, e.g. a "click here to activate your account" link
- URLs included in syndication feeds
- links used for things like "share this page on del.icio.us" widgets
- links from the admin to "this object live on the site" where the admin is hosted on a separate domain or subdomain from the live site
- The path component of the URL. This is needed for internal links - it's a waste of bytes to jam the full URL in a regular link when a path could be used instead.
A third type of URL - URLs relative to the current page - is not being considered here because of the complexity involved in getting it right. That said, it would be possible to automatically derive a relative URL using the full path and a request-aware template tag.
So, for a given model we need a reliable way of determining its path on the site AND its full URL including domain. The path can be derived from the full URL, and sometimes vice versa depending on how the site's domain relates to the model objects in question.
Django currently uses django.contrib.sites in a number of places to attempt to derive a complete URL from just a path, but this has its own problems. The sites framework assumes the presence of a number of things: a django_site table, a SITE_ID in the settings and a record corresponding to that SITE_ID. This arrangement does not always make sense - consider the case of a site which provides a unique subdomain for every one of the site's users (simonwillison.myopenid.com for example). Additionally, making users add a record to the sites table when they start their project is Yet Another Step, and one that many people ignore. Finally, the site system doesn't really take development / staging / production environments in to account. Handling these properly requires additional custom code, which often ends up working around the sites system entirely.
Finally, it's important that places that use get_absolute_url (such as the admin, sitemaps, syndication etc) always provide an over-ridable alternative. Syndication feeds may wish to include extra hit-tracking material on URLs, admin sites may wish to link to staging or production depending on other criteria etc. At the moment some but not all of these tools provide over-riding mechanisms, but without any consistency as to what they are called or how they work.
It bears repeating that the problem of turning a path returned by get_absolute_url into a full URL is a very real one: Django actually solves it in a number of places, each one taking a slightly different approach, none of which are really ideal. The fact that it's being solved multiple times and in multiple ways suggests a strong need for a single, reliable solution.
Current uses of get_absolute_url()
By grepping the Django source code, I've identified the following places where get_absolute_url is used:
grep -r get_absolute_url django | grep -v ".svn" | grep -v '.pyc'
- contrib/admin/options.py: Uses hasattr(obj, 'get_absolute_url') to populate 'has_absolute_url' and 'show_url' properties which are passed through to templates and used to show links to that object on the actual site.
- contrib/auth/models.py: Defines get_absolute_url on the User class to be /users/{{ username }}/ - this may be a bug since that URL is not defined by default anywhere in Django.
- contrib/comments/models.py: Defines get_absolute_url on the Comment and FreeComment classes, to be the get_absolute_url of the comment's content object + '#c' + the comment's ID.
- contrib/flatpages/models.py: Defined on FlatPage model, returns this.url (which is managed in the admin)
- contrib/sitemaps/init.py: Sitemap.location(self, obj) uses obj.get_absolute_url() by default to figure out the URL to include in the sitemap - designed to be over-ridden
- contrib/syndication/feeds.py: The default Feed.item_link(self, item) method (which is designed to be over-ridden) uses get_absolute_url, and raises an informative exception if it's not available. It also uses its own add_domain() function along with current_site.domain, which in turn uses Site.objects.get_current() and falls back on RequestSite(self.request) to figure out the full URL (both Site and RequestSite come from the django.contrib.sites package).
- db/models/base.py: Takes get_absolute_url into account when constructing the model class - this is where the settings.ABSOLUTE_URL_OVERRIDES setting has its effect.
- views/defaults.py: The thoroughly magic shortcut(request, content_type_id, object_id) view, which attempts to figure out a full URL to something based on a content_type and an object_id, makes extensive use of get_absolute_url - including behaving differently if the return value starts with http://.
- views/generic/create_update.py: Both create and update views default to redirecting the user to get_absolute_url() if and only if post_save_redirect has not been configured for that view.
Finally, in the documentation:
- docs/contributing.txt - mentioned in coding standards, model ordering section
- docs/generic_views.txt
- docs/model-api.txt - lots of places, including "It's good practice to use get_absolute_url() in templates..."
- docs/settings.txt - in docs for ABSOLUTE_URL_OVERRIDES
- docs/sitemaps.txt
- docs/sites.txt - referred to as a "convention"
- docs/syndication_feeds.txt
- docs/templates.txt: - in an example
- docs/unicode.txt - "Taking care in get_absolute_url..."
- docs/url_dispatch.txt
And in the tests:
ABSOLUTE_URL_OVERRIDES is not tested.
get_absolute_url is referenced in:
- tests/regressiontests/views/models.py
- tests/regressiontests/views/tests/defaults.py
- tests/regressiontests/views/tests/generic/create_update.py
- tests/regressiontests/views/urls.py
The solution
I'm currently leaning towards two complementary methods:
- get_url_path() - returns the URL's path component, starting at the root of the site - e.g. "/blog/2008/Aug/11/slug/"
- get_url() - returns the full URL, including the protocol and domain - e.g."
Users should be able to define either or both of these methods. If they define one but not the other, the default implementation of the undefined method can attempt to figure it out based on the method that IS defined. This should actually work pretty well - get_url_path() is trivial to derive from get_url(), whereas for sites that only exist on one domain get_url() could simply glue that domain (defined in settings.py, or derived from SITE_ID and the sites framework) onto get_url_path().
I don't think this needs to be all that complicated, and in fact the above scheme could allow us to delete a whole bunch of weird special case code scattered throughout Django.
Update 11th September 2008: Here's a prototype implementation (as a mixin class):
The code for the prototype mixin is as follows:
from django.conf import settings
import urlparse

class UrlMixin(object):
    def get_url(self):
        if hasattr(self.get_url_path, 'dont_recurse'):
            raise NotImplementedError
        try:
            path = self.get_url_path()
        except NotImplementedError:
            raise
        # Should we look up a related site?
        #if getattr(self._meta, 'url_by_site'):
        prefix = getattr(settings, 'DEFAULT_URL_PREFIX', '')
        return prefix + path
    get_url.dont_recurse = True

    def get_url_path(self):
        if hasattr(self.get_url, 'dont_recurse'):
            raise NotImplementedError
        try:
            url = self.get_url()
        except NotImplementedError:
            raise
        bits = urlparse.urlparse(url)
        return urlparse.urlunparse(('', '') + bits[2:])
    get_url_path.dont_recurse = True
And you use it like this:
from django.db import models
from django_urls.base import UrlMixin

class ArticleWithPathDefined(models.Model, UrlMixin):
    slug = models.SlugField()

    def get_url_path(self):
        return '/articles/%s/' % self.slug

class AssetWithUrlDefined(models.Model, UrlMixin):
    domain = models.CharField(max_length=30)
    filename = models.CharField(max_length=30)

    def get_url(self):
        return 'http://%s/%s' % (self.domain, self.filename)
|
https://code.djangoproject.com/wiki/ReplacingGetAbsoluteUrl?version=8
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
Interesting Things
- If you are testing with jasmine, it’s a good idea to include your CSS files in the test runner. This prevents, for example, issues where you’ve hidden an element in the CSS and expect it to be visible in your JavaScript. If you are using Sass, then you should regenerate your CSS from the Sass before running the jasmine tests. The current version of haml-edge updates the sass command-line tool with an update command. On one of our client projects, we added the following to our jasmine_config.rb file:
def update_sass
  puts "Updating sass files..."
  rails_root = File.expand_path(File.dirname(__FILE__) + "/../../../")
  sass_path = "#{rails_root}/vendor/bundler_gems/ruby/1.8/gems/haml-edge-2.*/bin/sass"
  puts `#{sass_path} --update #{rails_root}/app/stylesheets:#{rails_root}/public/stylesheets/generated`
  puts "done."
end

alias_method :old_start, :start
def start(*args)
  update_sass
  old_start(*args)
end

alias_method :old_start_server, :start_server
def start_server(*args)
  update_sass
  old_start_server *args
end
Sass provides a method for doing the same thing within ruby:
Sass::Plugin.update_stylesheets
If you’re running within your app container you won’t need to configure anything, otherwise you’ll need to set the `Sass::Plugin.options` accordingly first.
February 11, 2010 at 1:57 pm
Thanks, Chris!
February 11, 2010 at 3:34 pm
FWIW, Jasmine, in most cases, will no longer have a jasmine_config.rb. New users would probably be best off adding functionality to the jasmine:ci or jasmine rake tasks that jasmine-ruby provides.
February 14, 2010 at 3:31 am
|
http://pivotallabs.com/standup-2-11-2010-sass-with-jasmine/
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
Support for move semantics is implemented using Boost.Move. If rvalue references are available it will use them, but if not it uses a close, but imperfect emulation. On such compilers you'll need to use Boost.Move to take advantage of using movable container elements, also note that:
C++11 introduced a new allocator system. It's backwards compatible due to the lax requirements for allocators in the old standard, but might need some changes for allocators which worked with the old versions of the unordered containers. It uses a traits class, allocator_traits, to handle the allocator, adding extra functionality and making some methods and types optional. During development a stable release of allocator_traits wasn't available, so an internal partial implementation is always used in this version. Hopefully a future version will use the standard implementation where available.
The member functions construct, destroy and max_size are now optional; if they're not available a fallback is used. A full implementation of allocator_traits requires sophisticated member function detection so that the fallback is used whenever the member function call is not well formed. This requires support for SFINAE expressions, which are available on GCC from version 4.4 and on Clang.
On other compilers, there's just a test to see if the allocator has a member, but no check that it can be called. So rather than using a fallback there will just be a compile error.
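As a rough illustration (this allocator is a sketch, not something from the Boost documentation), a minimal C++03-style allocator that supplies the optional members might look like the following; on a compiler with SFINAE-expression support you could delete construct, destroy and max_size and the container's internal traits would fall back to its own implementations:

#include <boost/unordered_map.hpp>
#include <boost/functional/hash.hpp>
#include <cstddef>
#include <functional>
#include <new>
#include <string>
#include <utility>

template <class T>
struct simple_allocator {
    typedef T value_type;
    typedef T* pointer;
    typedef const T* const_pointer;
    typedef T& reference;
    typedef const T& const_reference;
    typedef std::size_t size_type;
    typedef std::ptrdiff_t difference_type;

    template <class U> struct rebind { typedef simple_allocator<U> other; };

    simple_allocator() {}
    template <class U> simple_allocator(const simple_allocator<U>&) {}

    pointer allocate(size_type n) {
        return static_cast<pointer>(::operator new(n * sizeof(T)));
    }
    void deallocate(pointer p, size_type) { ::operator delete(p); }

    pointer address(reference x) const { return &x; }
    const_pointer address(const_reference x) const { return &x; }

    // Optional members: if these were absent, the internal allocator_traits
    // would fall back to placement new, an explicit destructor call and a
    // computed maximum size (given SFINAE expression support, as noted above).
    void construct(pointer p, const T& v) { new (static_cast<void*>(p)) T(v); }
    void destroy(pointer p) { p->~T(); }
    size_type max_size() const { return size_type(-1) / sizeof(T); }
};

template <class T, class U>
bool operator==(const simple_allocator<T>&, const simple_allocator<U>&) { return true; }
template <class T, class U>
bool operator!=(const simple_allocator<T>&, const simple_allocator<U>&) { return false; }

int main() {
    typedef std::pair<const std::string, int> value_type;
    boost::unordered_map<std::string, int,
        boost::hash<std::string>, std::equal_to<std::string>,
        simple_allocator<value_type> > map;
    map["one"] = 1;
}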
propagate_on_container_copy_assignment, propagate_on_container_move_assignment, propagate_on_container_swap and select_on_container_copy_construction are also supported. Due to imperfect move emulation, some assignments might check propagate_on_container_copy_assignment on some compilers and propagate_on_container_move_assignment on others.
pointer_traits aren't used. Instead, pointer types are obtained from rebound allocators; this can cause problems if the allocator can't be used with incomplete types. If const_pointer is not defined in the allocator, boost::pointer_to_other<pointer, const value_type>::type is used to obtain a const pointer.
Since the containers use std::pair they're limited to the version from the current standard library. But since C++11 std::pair's piecewise_construct based constructor is very useful, emplace emulates it with a piecewise_construct in the boost::unordered namespace. So, for example, the following will work:

boost::unordered_multimap<std::string, std::complex<double> > x;

x.emplace(
    boost::unordered::piecewise_construct,
    boost::make_tuple("key"),
    boost::make_tuple(1, 2));
Older drafts of the standard also supported variadic constructors for std::pair, where the first argument would be used for the first part of the pair, and the remaining arguments for the second part.
When swapping, Pred and Hash are not currently swapped by calling swap; their copy constructors are used. As a consequence, an exception may be thrown from their copy constructor when swapping.
Variadic constructor arguments for emplace are only used when both rvalue references and variadic template parameters are available. Otherwise emplace can only take up to 10 constructor arguments.
|
http://www.boost.org/doc/libs/1_49_0/doc/html/unordered/compliance.html
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
Letter from an Occupant
Someone in Java-ville wonders why useless statements compile
One of the key traits of Java is its philosophy of "failing early". Strong typing, among other things, allows the compiler to catch problems that could otherwise linger at runtime. So maybe Java developers invest a certain level of trust in the compiler, thinking that once your code compiles, you've reached a certain threshold. It may still misbehave, but at least the obviously broken stuff has been accounted for. Right?
Maybe not. Today's Feature Article, looks at the curious case of the statement:
if (condition);
This compiles, but it is always meaningless, since the semicolon implicitly creates an empty "then" block. And that begs the question, or more to the point, the not-so-stupid question.
(Not So) Stupid Questions 14: Why Do Pointless if Statements Even Compile? opens up for discussion the idea of whether the compiler should be responsible for catching things that, though they may be syntactically legal, are meaningless, useless, and probably unintentional.
We hope you'll join in the conversation, and think about the big picture. To wit, maybe a semicolon after the
if statement is syntactically legal, so it makes sense that it should compile without complaint. If that's your position, then consider the following code:
public class EarlyReturn {
public static void main (String[] arrrImAPirate) {
return;
int i = 0;
}
}
Does this compile? Why or why not? Should it or shouldn't it?
See you in the not-so-stupid question discussion...
In Java Today,
the Java Desktop Community notes a fun game applet posted as part of a Japanese-language blog: NekoZori. The Desktop Community page comments: "the page probably makes more sense in the Japanese locale. Cat flying around in a boat. There's something strangely appealing here. The game was included in a roll-up of physics simulation games, more about this particular game here. Also, be sure not to miss the author's zombie game and variations."
Java 3D 1.5 beta-1 has been released, and adds support for two new platforms -- Mac OS X (via the JOGL pipeline) and Windows/XP 64-bit -- as well as a few new features and most of the planned bug fixes, including JOGL Rendering Pipeline, non-power-of-two textures, NIO image buffer support for textures, by-reference support for geometry indices, rendering error listeners, and vecmath accessors/mutators
The transcript of last week's Ask the Experts session on Swing has been posted. "In this session, Swing Architect, Scott Violet, Swing Technical Lead, Shannon Hickey, Java 2D Engineer, Chris Campbell, and AWT Technical Lead, Oleg Sukhodolsky answered a variety of questions about building graphical interfaces using Swing."
Kohsuke Kawaguchi has an administrative update on some of his projects in System-level java.net projects moved under the java-net project in today's Weblogs: "Over the last few years, I've been involved in many of the 'system-level' java.net projects, which provide tools and services for any java.net project owner. Gary kindly moved them under the java-net project to show the authenticity."
Ben Galbraith is trying to figure out how to deal with
Heaps of Memory:
"Is there a way to determine the largest set of strongly referenced heap data a process creates at any one time?"
Kirill Grouchnikov long-running series on scalable graphics pays off with striking examples in
SVG and Java UIs part 6: transcoding SVG to pure Java2D code:
"This entry on using SVG in Java UIs describes the utility to convert SVG to pure Java2D painting code."
Joshua Marinacci has more to say about SwingX's design philosophy in
today's Forums. In
Re: Painter Refactoring: Solving the Layering problem, he writes:
"I've looked through the original thread and come up with a new proposal. In the six months since we originally discussed this I've written a lot more painters and started figuring out what I will really want to do with them in practice. I think that having a layering ability is important, though I want to preserve a simple API. I've also come to realize that there is absolutely no reliable way to separate a normal component's foreground and background. The Windows Look and Feel implementation of a button, for example, does not provide a way to draw the text separate from the background without making specific hacks just for this look and feel. This means we need to figure out how our layering system will address these issues."
robross :
- October 23-25 - The Ajax Experience: Boston
- October 24-27 - Java Training Philippines
- October 27-29 - Lone Star Software Symposium 2006: Dallas Edition
- November 3-5 - Northern Virginia Software Symposium 2006: Fall Edition
- November 7-10 - J2EE Training Philippines
- November 10-12 - Rocky Mountain Software Symposium 2006: Fall Edition
- November 11 - GWT (Google Web Toolkit) Tutorial
- November 17-19 - Great Lakes Software Symposium 2006
- November 21 - San Diego JUG Meeting + Sun Evangelist Visit
- November 27
|
https://weblogs.java.net/node/236459/atom/feed
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
Style Guide for Packaging Python Libraries
NOTE: This is a draft proposal. Comments are welcome and encouraged.
NOTE: Also see Python/AppStyleGuide for an alternative approach using a single-package debian/rules.
- The package has a test suite, invoked via python setup.py test.
- The package has some documentation buildable by Sphinx.
Your Debian packaging uses debhelper, dh_python2, and is preferably source format 3.0 (quilt).
- debhelper v8 or newer is assumed, but v7 probably works too (not sure about earlier ones, anyway, update your package!)
The design of this document is to capture best-practices that you can cargo cult into your own packages. It will forever be a work in progress.
If a dependency isn't declared in debian/control, setuptools will try to download it from PyPI instead of failing the build, so you need to watch carefully to make sure your local builds don't do this, and be sure to add the http_proxy line in your debian/rules file (see below).
Start with a fairly straightforward debian/control file.

Package: python-foo
Architecture: all
Depends: ${python:Depends}, ${misc:Depends}
Suggests: python-foo-docs
Description: Python frobnicator (Python 2)
 This package frobnicates a Python-based doohicky so that you no longer
 need to kludge a dingus to make the doodads work.
 .
 This is the Python 2 version of the package.

Package: python3-foo
Architecture: all
Depends: ${python3:Depends}, ${misc:Depends}
Suggests: python-foo-docs
Description: Python frobnicator (Python 3)
 This package frobnicates a Python-based doohicky so that you no longer
 need to kludge a dingus to make the doodads work.
 .
 This is the Python 3 version of the package.
Notice the Suggests line. If the same set of documentation is available for both Python 2 and Python 3, it's best to put those files into a separate python-foo-docs package, like so:
Package: python-foo-docs
Here is a debian/rules file for you to start with. First, I'll show you the whole thing, then I'll explain it line by line.
#!/usr/bin/make -f
#DH_VERBOSE=1

PYTHON2_VERSIONS=$(shell pyversions -vr)
PYTHON3_VERSIONS=$(shell py3versions -vr)
PYTHON_VERSIONS=${PYTHON2_VERSIONS} ${PYTHON3_VERSIONS}

# Prevent setuptools/distribute from accessing the internet.
export http_proxy = http://127.0.0.1:9/

%:
	dh $@ --with python2,python3,sphinxdoc

ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
test-python%:
	python$* setup.py test -vv

override_dh_auto_test: $(foreach pyversion,${PYTHON_VERSIONS},$(pyversion:%=test-python%))
endif

build-python%:
	python$* setup.py build

override_dh_auto_build: $(PYTHON3_VERSIONS:%=build-python%)
	dh_auto_build

install-python%:
	python$* setup.py install --root=$(CURDIR)/debian/tmp --install-layout=deb

override_dh_auto_install: $(PYTHON3_VERSIONS:%=install-python%)
	dh_auto_install

override_dh_installchangelogs:
	dh_installchangelogs -k foo/NEWS.rst

override_dh_installdocs:
	python setup.py build_sphinx
	dh_installdocs build/sphinx/html

override_dh_auto_clean:
	dh_auto_clean
	rm -rf build
	rm -rf *.egg-info
The file starts with the standard #! line. I like adding the (commented out when uploading) DH_VERBOSE line because it can make build problems easier to debug.
The next three lines are:
PYTHON2_VERSIONS=$(shell pyversions -vr)
PYTHON3_VERSIONS=$(shell py3versions -vr)
PYTHON_VERSIONS=${PYTHON2_VERSIONS} ${PYTHON3_VERSIONS}
This gives you make variables which will contain (just) the version numbers for the Python versions you are requesting your package be built against. See the manpage for details, but essentially this consults the X-Python-Version field in your debian/control file and matches it against the Python versions available for your distroversion. For example, on Wheezy, PYTHON2_VERSIONS would contain 2.6 2.7.
py3versions does the same, but for Python 3, and it uses the X-Python3-Version field in debian/control. For example, on Wheezy, PYTHON3_VERSIONS would contain 3.2. If upstream does not yet support Python 3, you can still leave this line enabled, or comment it out, but you will have to adjust some of the lines that follow.
The next line is a safety valve:
# Prevent setuptools/distribute from accessing the internet.
export http_proxy = http://127.0.0.1:9/
setuptools-based setups have the annoying -- and buggy -- behavior of trying to download dependencies from PyPI when they aren't locally available. Some build environments block this (e.g. by disabling the network), but if not, then the build could appear to succeed, when in fact it's only satisfying its setup.py dependencies external to the archive. To prevent this even in local builds, where downloading may succeed, add export http_proxy = http://127.0.0.1:9/ to your debian/rules file. Port 9 is the Discard Protocol, so this should safely prevent downloading even if something is actually listening on the port. Now if you are missing a dependency, even your local builds will fail early, which is a good thing!
The next line is the standard debhelper-based catch-all rule which is used to get the whole build started:
%:
	dh $@ --with python2,python3,sphinxdoc
What's important to note here is that both of the dh_python2 and dh_python3 helpers are being invoked, as well as the Sphinx documentation helper. If upstream does not yet support Python 3, omit the python3 helper for now. If upstream doesn't have Sphinx-based documentation, omit the sphinxdoc helper for now.
The next few lines enable the test suites to be run for each version of the package that gets built. If upstream has a test suite that passes on Debian, then it's a very good idea to enable it in your build. Testing is always a good thing, and in general (although there are exceptions), you want your build to fail if the test suite fails.
ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
test-python%:
	python$* setup.py test -vv

override_dh_auto_test: $(foreach pyversion,${PYTHON_VERSIONS},$(pyversion:%=test-python%))
endif
Things to note here.
The make conditional wrapping the rules allows you to set DEB_BUILD_OPTIONS=nocheck to temporarily disable the test suite. This can be useful while you're debugging your build if the test suite takes a long time to run.
You will need to change the rule's command if the package's test suite is invoked in a way other than python setup.py test. The way it is above enables a higher level of verbose output, which again is useful for general diagnostics.
The dependencies in the override_dh_auto_test target use PYTHON_VERSIONS to cover all versions of Python 2 and Python 3 for which your package is built.
These next few lines build the package:
build-python%:
	python$* setup.py build

override_dh_auto_build: $(PYTHON3_VERSIONS:%=build-python%)
	dh_auto_build
Note that debhelper will automatically build the Python 2 version, but does not yet know how to build Python 3 packages. Thus, the override is necessary to add dependencies that trigger the Python 3 build (first rule above). In the body of the override, the standard dh_auto_build is invoked, which will properly build the Python 2 version of the package. If upstream doesn't yet support Python 3, you can comment out both of these rules and just let the normal dh_auto_build process build the Python 2 version.
The next few lines install the package for all supported versions of Python.
install-python%:
	python$* setup.py install --root=$(CURDIR)/debian/tmp --install-layout=deb

override_dh_auto_install: $(PYTHON3_VERSIONS:%=install-python%)
	dh_auto_install
Again, this is only necessary because dh doesn't know how to install the Python 3 versions by default. It works in a similar manner as the build rules above. Like above, you can comment out all of these lines (or omit them) if upstream does not yet support Python 3.
Also note that --install-layout=deb is a Debian-only argument for setup.py. It was added to python-distribute to support Debian's dist-packages convention.
The next lines are useful if upstream has a non-standard change log that needs to be installed. In this case, the foo package has a news file that serves as its log of changes. Consult the dh_installchangelogs manpage for details about whether you need this override or not.
override_dh_installchangelogs:
	dh_installchangelogs -k foo/NEWS.rst
If upstream has documentation that's built by Sphinx, these next few lines will build and install them for the separate python-foo-docs package mentioned above.
override_dh_installdocs:
	python setup.py build_sphinx
	dh_installdocs build/sphinx/html
Finally, you should ensure that your package can be built twice in a row, by properly cleaning up any artifacts of your build. By default dh_clean will do most of the work, but Python packages need a little extra help, thus this override.
override_dh_auto_clean:
	dh_auto_clean
	rm -rf build
	rm -rf *.egg-info
Some packages produce other artifacts, or modify files in the upstream original tarball. Those are harder to clean and may produce build failures when built twice in a row.
debian/*.install files
When the same source is used to build three different binary packages (e.g. the Python 2 version, the Python 3 version, and the common documentation package), you will need debian/*.install files to get everything in the proper place for packaging. For most simple Python packages, this is fairly easy. You will not need an .install file for the documentation (TBD: explain why this works automatically), but you will for the Python 2 and Python 3 code.
Here is the debian/python-foo.install file (i.e. the Python 2 version):
usr/lib/python2*
and here is the debian/python3-foo.install file:
usr/lib/python3
Usually, that is all you need.
debian/*.pyremove
Upstream source may install some files that you don't care about including in your Debian packages. The way to handle this is by adding a debian/python-foo.pyremove file. This should work for both Python 2 and Python 3. For example:
foo/conf.py
foo/README.rst
foo/NEWS.rst
foo*.egg-info/SOURCES.txt
TBD: better explanation of the rationale for removing these.
TODO
- describe egg related information, what should go in the binary package and what shouldn't.
|
https://wiki.debian.org/Python/LibraryStyleGuide?action=recall&rev=21
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
Are you using namespace::clean/namespace::autoclean/namespace::sweep anywhere?
In reply to Re: Moose and Exporter
by tobyink
in thread Moose and Exporter
by Anonymous Monk
|
http://www.perlmonks.org/index.pl/jacques?parent=1007693;node_id=3333
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
You must implement the storing and reloading of data in a bean-managed persistent (BMP) bean. The bean implementation manages the data within callback methods. All the logic for storing data to your persistent storage belongs in the ejbStore method, and the data is reloaded from your storage in the ejbLoad method. The container invokes these methods when necessary.
This chapter demonstrates simple BMP EJB development with a basic configuration and deployment. Download the BMP entity bean example (bmpapp.jar) from the OC4J sample code page on the OTN site.
The following sections discuss how to implement data persistence:
As Chapter 3, "CMP Entity Beans" indicates, the steps for creating an entity bean are as follows:
- Create a remote interface for the bean, which extends javax.ejb.EJBObject.
- Create a home interface for the bean, which extends javax.ejb.EJBHome. It defines the create and finder methods, including findByPrimaryKey, for your bean.
- Define the primary key for the bean. It can be a simple data type, such as java.lang.String, or defined within its own class.
- Implement ejbCreate, which must create the persistent data, and ejbPostCreate methods with parameters matching each of the create methods defined in the home interface.
- Implement the ejbFindByPrimaryKey method, which corresponds to the findByPrimaryKey method of the home interface; it retrieves the primary key and validates that it exists.
- Implement the remaining methods of the javax.ejb.EntityBean interface. The ejbCreate, ejbPostCreate, and ejbFindByPrimaryKey methods are already mentioned above. The other methods are as follows:
  - Save the persistent data within the ejbStore method.
  - Reload the persistent data within the ejbLoad method.
  - Release resources within the ejbPassivate method.
  - Reacquire resources within the ejbActivate method.
- Create the deployment descriptors, including application.xml, create an EAR file, and install the EJB in OC4J.
The BMP entity bean definition of the remote and home interfaces are identical to the CMP entity bean. For examples of how the remote and home interface are implemented, see "Creating Entity Beans".
Because the container is no longer managing the primary key nor the saving of the persistent data, the bean callback functions must include the implementation logic for these functions. The container invokes the ejbCreate, ejbFindByPrimaryKey, other finder methods, ejbStore, and ejbLoad methods when it is appropriate.
The ejbCreate method is responsible primarily for the creation of the primary key. This includes creating the primary key, creating the persistent data representation for the key, initializing the key to a unique value, and returning this key to the container. The container maps the key to the entity bean reference.
The following example shows the ejbCreate method for the employee example, which initializes the primary key, empNo. It should automatically generate a primary key that is the next available number in the employee number sequence. However, for this example to be simple, the ejbCreate method requires that the user provide the unique employee number.
In addition, because the full data for the employee is provided within this method, the data is saved within the context variables of this instance. After initialization, it returns this key to the container.
// The create method takes care of generating a new empNo and returns
// its primary key to the container
public Integer ejbCreate(Integer empNo, String empName, Float salary)
    throws CreateException, RemoteException
{
  this.empNo = empNo;
  this.empName = empName;
  this.salary = salary;
  return (empNo);
}
The deployment descriptor defines only the primary key class in the <prim-key-class> element. Because the bean is saving the data, there is no definition of persistence data in the deployment descriptor. Note that the deployment descriptor does define the database the bean uses in the <resource-ref> element. For more information on database configuration, see "Modify XML Deployment Descriptors".
<enterprise-beans>
  <entity>
    <display-name>EmployeeBean</display-name>
    <ejb-name>EmployeeBean</ejb-name>
    <home>employee.EmployeeHome</home>
    <remote>employee.Employee</remote>
    <ejb-class>employee.EmployeeBean</ejb-class>
    <persistence-type>Bean</persistence-type>
    <prim-key-class>java.lang.Integer</prim-key-class>
    <reentrant>False</reentrant>
    <resource-ref>
      <res-ref-name>jdbc/OracleDS</res-ref-name>
      <res-type>javax.sql.DataSource</res-type>
      <res-auth>Application</res-auth>
    </resource-ref>
  </entity>
</enterprise-beans>
Alternatively, you can create a complex primary key based on several data types. You define a complex primary key within its own class, as follows:
package employee;

public class EmployeePK implements java.io.Serializable
{
  public Integer empNo;
  public String empName;
  public Float salary;

  public EmployeePK(Integer empNo)
  {
    this.empNo = empNo;
    this.empName = null;
    this.salary = null;
  }

  public EmployeePK(Integer empNo, String empName, Float salary)
  {
    this.empNo = empNo;
    this.empName = empName;
    this.salary = salary;
  }
}
For a primary key class, you define the class in the <prim-key-class> element, in the same way as for the simple primary key definition.
<enterprise-beans>
  <entity>
    <display-name>EmployeeBean</display-name>
    <ejb-name>EmployeeBean</ejb-name>
    <home>employee.EmployeeHome</home>
    <remote>employee.Employee</remote>
    <ejb-class>employee.EmployeeBean</ejb-class>
    <persistence-type>Bean</persistence-type>
    <prim-key-class>employee.EmployeePK</prim-key-class>
    <reentrant>False</reentrant>
    <resource-ref>
      <res-ref-name>jdbc/OracleDS</res-ref-name>
      <res-type>javax.sql.DataSource</res-type>
      <res-auth>Application</res-auth>
    </resource-ref>
  </entity>
</enterprise-beans>
The employee example requires that the employee number is given to the bean by the user. Another method would be to generate the employee number by computing the next available employee number, and use this in combination with the employee's name and office location.
After defining the complex primary key class, you would create your primary key within the ejbCreate method, as follows:
public EmployeePK ejbCreate(Integer empNo, String empName, Float salary)
    throws CreateException, RemoteException
{
  pk = new EmployeePK(empNo, empName, salary);
  ...
}
The other task that the ejbCreate (or ejbPostCreate) should handle is allocating any resources necessary for the life of the bean. For this example, because we already have the information for the employee, the ejbCreate performs the following:
- Opens a database connection, which is released in ejbPassivate and ejbRemove, and reallocated in ejbActivate.
This is executed, as follows:
public EmployeePK ejbCreate(Integer empNo, String empName, Float salary)
    throws CreateException, RemoteException
{
  pk = new EmployeePK(empNo, empName, salary);
  conn = getConnection(dsName);
  ps = conn.prepareStatement("INSERT INTO EMPLOYEEBEAN (EMPNO, EMPNAME, SALARY) VALUES (?, ?, ?)");
  ps.setInt(1, this.empNo.intValue());
  ps.setString(2, this.empName);
  ps.setFloat(3, this.salary.floatValue());
  ps.executeUpdate();
  ps.close();
  return pk;
}
The ejbFindByPrimaryKey implementation is a requirement for all BMP entity beans. Its primary responsibility is to ensure that the primary key is valid. Once it is validated, it returns the primary key to the container, which uses the key to return the remote interface reference to the user.
This sample verifies that the employee number is valid and returns the primary key, which is the employee number, to the container. A more complex verification would be necessary if the primary key was a class.
public Integer ejbFindByPrimaryKey(Integer empNoPK)
    throws FinderException, RemoteException
{
  if (empNoPK == null) {
    throw new FinderException("Primary key cannot be null");
  }
  ps = conn.prepareStatement("SELECT EMPNO FROM EMPLOYEEBEAN WHERE EMPNO = ?");
  ps.setInt(1, empNoPK.intValue());
  ps.executeQuery();
  ResultSet rs = ps.getResultSet();
  if (rs.next()) {
    /* PK is validated because it exists already */
  } else {
    throw new FinderException("Failed to select this PK");
  }
  ps.close();
  return empNoPK;
}
You can create other finder methods beyond the single ejbFindByPrimaryKey.
To create other finder methods, do the following:
- Define the finder method in the home interface.
- Implement a corresponding ejbFind<METHOD> method in the bean class that returns the matching primary key or keys.
These finder methods need only to gather the primary keys for all of the entity beans that should be returned to the user. The container maps the primary keys to references to each entity bean within either a Collection (if multiple references are returned) or to the single class type.
The following example shows the implementation of a finder method that returns all employee records.
public Collection ejbFindAll() throws FinderException, RemoteException
{
  Vector recs = new Vector();
  ps = conn.prepareStatement("SELECT EMPNO FROM EMPLOYEEBEAN");
  ps.executeQuery();
  ResultSet rs = ps.getResultSet();
  while (rs.next()) {
    Integer retEmpNo = new Integer(rs.getInt(1));
    recs.add(retEmpNo);
  }
  ps.close();
  return recs;
}
The container invokes the ejbStore method when the persistent data should be saved to the database. This includes whenever the primary key is "dirtied", or before the container passivates the bean instance or removes the instance. The BMP bean is responsible for ensuring that all data is stored to some resource, such as a database, within this method.
public void ejbStore() throws RemoteException
{
  // Container invokes this method to instruct the instance to
  // synchronize its state by storing it to the underlying database
  ps = conn.prepareStatement("UPDATE EMPLOYEEBEAN SET EMPNAME=?, SALARY=? WHERE EMPNO=?");
  ps.setString(1, this.empName);
  ps.setFloat(2, this.salary.floatValue());
  ps.setInt(3, this.empNo.intValue());
  if (ps.executeUpdate() != 1) {
    throw new RemoteException("Failed to update record");
  }
  ps.close();
}
The container invokes the ejbLoad method after activating the bean instance. The purpose of this method is to repopulate the persistent data with the saved state. For most ejbLoad methods, this implies reading the data from a database into the instance data variables.
public void ejbLoad() throws RemoteException
{
  // Container invokes this method to instruct the instance to
  // synchronize its state by loading it from the underlying database
  this.empNo = (Integer) ctx.getPrimaryKey();
  ps = conn.prepareStatement("SELECT EMPNO, EMPNAME, SALARY FROM EMPLOYEEBEAN WHERE EMPNO=?");
  ps.setInt(1, this.empNo.intValue());
  ps.executeQuery();
  ResultSet rs = ps.getResultSet();
  if (rs.next()) {
    this.empNo = new Integer(rs.getInt(1));
    this.empName = new String(rs.getString(2));
    this.salary = new Float(rs.getFloat(3));
  } else {
    throw new RemoteException("Failed to select this PK");
  }
  ps.close();
}
The ejbPassivate method is invoked directly before the bean instance is serialized for future use. Normally, this is invoked when the instance has not been used in a while. It will be re-activated, through the ejbActivate method, the next time the user invokes a method on this instance.
Before the bean is passivated, you should release all resources and release any static information that would be too large to be serialized. Any large, static information that can be easily regenerated within the ejbActivate method should be released in this method.
In our example, the only resource that cannot be serialized is the open database connection. It is closed in this method and reopened in the ejbActivate method.
public void ejbPassivate()
{
  // Container invokes this method on an instance before the instance
  // becomes disassociated with a specific EJB object
  conn.close();
}
As the ejbPassivate method section states, the container invokes this method when the bean instance is reactivated. That is, the user has asked to invoke a method on this instance. This method is used to open resources and rebuild static information that was released in the ejbPassivate method.
Our employee example opens the database connection where the employee information is stored.
public void ejbActivate() throws RemoteException
{
  // Container invokes this method when the instance is taken out
  // of the pool of available instances to become associated with
  // a specific EJB object
  conn = getConnection(dsName);
}
The container invokes the ejbRemove method before removing the bean instance itself or placing the instance back into the bean pool. This means that the information that was represented by this entity bean should be removed--both by destroying the instance and by removing the data from persistent storage. The employee example removes the employee and all associated information from the database before the instance is destroyed, and then closes the database connection.
public void ejbRemove() throws RemoveException, RemoteException
{
  // Container invokes this method before it removes the EJB object
  // that is currently associated with the instance
  ps = conn.prepareStatement("DELETE FROM EMPLOYEEBEAN WHERE EMPNO=?");
  ps.setInt(1, this.empNo.intValue());
  if (ps.executeUpdate() != 1) {
    throw new RemoteException("Failed to delete record");
  }
  ps.close();
  conn.close();
}
In addition to the configuration described in "Creating Entity Beans", you must modify and add the following to your ejb-jar.xml deployment descriptor:
- Specify "Bean" in the <persistence-type> element.
- Define the database that the bean uses in a <resource-ref> element.
Our employee example used the database environment element of "jdbc/OracleDS". This is configured in the <resource-ref> element as follows:
<resource-ref>
  <res-ref-name>jdbc/OracleDS</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Application</res-auth>
</resource-ref>
The database specified in the <res-ref-name> element maps to an <ejb-location> element in the data-sources.xml file. Our "jdbc/OracleDS" database is configured in the data-sources.xml file, as shown below:
<data-source
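Only the opening of the element survives here; a typical OC4J <data-source> entry whose <ejb-location> is "jdbc/OracleDS" looks roughly like the following. The driver class, credentials, and JDBC URL are placeholders for your own environment, not values from this guide.

<data-source
  class="com.evermind.sql.DriverManagerDataSource"
  name="OracleDS"
  location="jdbc/OracleCoreDS"
  xa-location="jdbc/xa/OracleXADS"
  ejb-location="jdbc/OracleDS"
  connection-driver="oracle.jdbc.driver.OracleDriver"
  username="scott"
  password="tiger"
  url="jdbc:oracle:thin:@localhost:1521:ORCL"
/>

The ejb-location value is what the <res-ref-name> in ejb-jar.xml resolves to.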
If your entity bean stores its persistent data within a database, you need to create the appropriate table with the proper columns for the entity bean. This table must be created before the bean is loaded into the database. The container will not create this table for BMP beans, but it will create it automatically for CMP beans.
In our employee example, you must create the following table in the database defined in the data-sources.xml file. The following shows the SQL commands that create these fields.
CREATE TABLE EMPLOYEEBEAN
(
  EMPNO   NUMBER        NOT NULL,
  EMPNAME VARCHAR2(255) NOT NULL,
  SALARY  FLOAT         NOT NULL,
  CONSTRAINT EMPLOYEEBEAN_PK PRIMARY KEY (EMPNO)
)
|
http://docs.oracle.com/cd/A97329_03/web.902/a95881/bmp.htm
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
This preface lists changes in the Oracle Big Data Connectors User's Guide.
The following are changes in Oracle Big Data Connectors User's Guide for Oracle Big Data Connectors Release 2 (2.0).
Oracle Big Data Connectors support Cloudera's Distribution including Apache Hadoop version 4 (CDH4). For other supported platforms, see the individual connectors in Chapter 1.
The name of Oracle Direct Connector for Hadoop Distributed File System changed to Oracle SQL Connector for Hadoop Distributed File System.
Oracle SQL Connector for Hadoop Distributed File System
Automatic creation of Oracle Database external tables from Hive tables, Data Pump files, or delimited text files.
Management of location files.
Oracle Loader for Hadoop
Support for Sockets Direct Protocol (SDP) for direct path loads
Support for secondary sort on user-specified columns
New input formats for regular expressions and Oracle NoSQL Database. The Avro record InputFormat is now provided as supported code instead of sample code.
Simplified date format specification
New reject limit threshold
Improved job reporting and diagnostics
Oracle Data Integrator Application Adapter for Hadoop
Uses Oracle SQL Connector for HDFS or Oracle Loader for Hadoop to load data from Hadoop into an Oracle database.
Oracle R Connector for Hadoop
Several analytic algorithms are now available: linear regression, neural networks for prediction, matrix completion using low rank matrix factorization, clustering, and non-negative matrix factorization.
Oracle R Connector for Hadoop supports Hive data sources in addition to HDFS files.
Oracle R Connector for Hadoop can move data between HDFS and Oracle Database. Oracle R Enterprise is not required for this basic transfer of data.
The following functions are new in this release:
as.ore.* hadoop.jobs hdfs.head hdfs.tail is.ore.* orch.connected orch.dbg.lasterr orch.evaluate orch.export.fit orch.lm orch.lmf orch.neural orch.nmf orch.nmf.NMFalgo orch.temp.path ore.* predict.orch.lm print.summary.orch.lm summary.orch.lm
The following features are deprecated in this release, and may be desupported in a future release:
Oracle SQL Connector for Hadoop Distributed File System
Location file format (version 1): Existing external tables with content published using Oracle Direct Connector for HDFS version 1 must be republished using Oracle SQL Connector for HDFS version 2, because of incompatible changes to the location file format.
When Oracle SQL Connector for HDFS creates new location files, it does not delete the old location files.
oracle.hadoop.hdfs.exttab namespace (version 1): Oracle SQL Connector for HDFS uses the following new namespaces for all configuration properties:
oracle.hadoop.connection: Oracle Database connection and wallet properties
oracle.hadoop.exttab: All other properties
HDFS_BIN_PATH directory: The preprocessor directory name is now OSCH_BIN_PATH.
See "Oracle SQL Connector for Hadoop Distributed File System Setup."
Oracle R Connector for Hadoop
keyval: Use
orch.keyval to generate key-value pairs.
orch.reconnect: Use
orch.connect to reconnect using a connection object returned by
orch.dbcon.
The following features are no longer supported by Oracle.
Oracle Loader for Hadoop
oracle.hadoop.loader.configuredCounters
The following are additional changes in the release:
Oracle Loader for Hadoop
The installation zip archive now contains two kits:
oraloader-2.0.0-2.x86_64.zip for CDH4
oraloader-2.0.0-1.x86_64.zip for Apache Hadoop 0.20.2 and CDH3
See "Oracle Loader for Hadoop Setup."
|
http://docs.oracle.com/cd/E40622_01/doc.21/e36961/release_changes.htm
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
sizeof operator
Queries size of the object or type
Used when actual size of the object must be known
Syntax
sizeof( type )	(1)
sizeof expression	(2)
Both versions return a constant of type std::size_t.
Explanation
1) returns size in bytes of the object representation of type.
2) returns size in bytes of the object representation of the type, that would be returned by expression, if evaluated.
Notes
Depending on the computer architecture, a byte may consist of 8 or more bits, the exact number being recorded in CHAR_BIT.
sizeof(char), sizeof(signed char), and sizeof(unsigned char) always return 1.
sizeof cannot be used with function types, incomplete types, or bit-field lvalues.
Keywords
sizeof
Example
The example output corresponds to a system with 64-bit pointers and 32-bit int.
#include <iostream>

struct Empty {};
struct Bit { unsigned bit:1; };

int main()
{
    Empty e;
    Bit b;
    std::cout << "size of empty class: " << sizeof e << '\n'
              << "size of pointer : " << sizeof &e << '\n'
//            << "size of function: " << sizeof(void()) << '\n' // compile error
//            << "size of incomplete type: " << sizeof(int[]) << '\n' // compile error
//            << "size of bit field: " << sizeof b.bit << '\n' // compile error
              << "size of array of 10 int: " << sizeof(int[10]) << '\n';
}
Output:
size of empty class: 1 size of pointer : 8 size of array of 10 int: 40
|
http://en.cppreference.com/mwiki/index.php?title=cpp/language/sizeof&oldid=66856
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
NAME
sys/msg.h - XSI message queue structures
SYNOPSIS
#include <sys/msg.h>
DESCRIPTION
The <sys/msg.h> header shall define the following constant as a message operation flag: MSG_NOERROR (no error if the message is too big). The msqid_ds structure shall contain members describing the queue, including the operation permission structure (msg_perm), the number of messages currently on the queue (msg_qnum), the maximum number of bytes allowed on the queue (msg_qbytes), the process IDs of the last msgsnd() and msgrcv() calls (msg_lspid and msg_lrpid), and the times of the last send, receive, and change operations (msg_stime, msg_rtime, msg_ctime). The pid_t, time_t, key_t, size_t, and ssize_t types shall be defined as described in <sys/types.h>.
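A minimal usage sketch (not part of the POSIX page): create a private queue, read its msqid_ds bookkeeping with msgctl(IPC_STAT), and remove it.

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <stdio.h>

int main(void)
{
    struct msqid_ds info;
    int id = msgget(IPC_PRIVATE, IPC_CREAT | 0600);

    if (id == -1 || msgctl(id, IPC_STAT, &info) == -1) {
        perror("msg");
        return 1;
    }
    printf("messages on queue: %lu\n", (unsigned long) info.msg_qnum);
    msgctl(id, IPC_RMID, 0);  /* clean up the queue */
    return 0;
}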
|
http://manpages.ubuntu.com/manpages/maverick/man7/sys_msg.h.7posix.html
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
There are many ways to execute a SQL Server 2005 Integration Services (SSIS) package. You can use the command line utility DTExec.exe or its window equivalent, DTExecUI.exe. A package can be executed within SQL Server Business Intelligence Studio (Visual Studio) or from a SQL Server Agent Job Step. A package can also be executed from .NET code!
For the purposes of demonstration, I created a simple package that takes some data out of a SQL Server AdventureWorks database and dumps it into a flatfile. The package also has one variable.
In any kind of "real world" scenario, your package will usually be driven by a configuration file. An SSIS configuration file is an XML file with a .dtsConfig extension that contains settings you can apply to a package (without actually changing or editing the package). In my example files, you can edit the configuration file with your SQL Server and flat file connection information and run the package. You'll never have to edit the actual package file. Here's a very good tutorial on configuration file syntax. Configurations can also be stored in a SQL Server table, but I won't cover that here.
You need to add a reference to Microsoft.SQLServer.ManagedDTS.dll. I believe that this DLL is only installed on a machine that has SQL Server components installed.
The amount of code to execute a SSIS package is surprisingly small and concise. Notice that I added a using directive for the Microsoft.SqlServer.Dts.Runtime namespace.
using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.SqlServer.Dts.Runtime;
namespace ExecuteSSIS
{
class Program
{
static void Main(string[] args)
{
Application app = new Application();
//
// Load package from file system
//
Package package = app.LoadPackage("c:\\ExamplePackage.dtsx", null);
package.ImportConfigurationFile("c:\\ExamplePackage.dtsConfig");
Variables vars = package.Variables;
vars["MyVariable"].Value = "value from c#";
DTSExecResult result = package.Execute();
Console.WriteLine("Package Execution results: {0}",result.ToString());
//
// Load package from SQL Server
//
Package package2 = app.LoadFromSqlServer(
"ExamplePackage","server_name", "sa", "your_password", null);
package2.ImportConfigurationFile("c:\\ExamplePackage.dtsConfig");
Variables vars2 = package2.Variables;
vars2["MyVariable"].Value = "value from c# again";
DTSExecResult result2 = package2.Execute();
Console.WriteLine("Package Execution results: {0}",
result2.ToString());
}
}
}
First, you create an Application object, which provides access to the DTS (Integration Services) runtime. Then you use the Application object to load a package from either the file system or from SQL Server, I've demonstrated both. Once you have the package loaded into a Package object, you call the ImportConfigurationFile() method to load and apply the configuration file to the package. The Package object also has a Variables collection that provides access to the package's variable. Finally, to actually execute a package, call the Execute() method.
This article was meant to quickly demonstrate how to load and execute a package. There is much, much more you can do with the managed DTS namespace. There are objects and additional namespaces that allow you to load and inspect packages or even create new packages from .NET code.
|
http://www.codeproject.com/Articles/14229/Execute-SQL-Server-2005-Integration-Services-packa?msg=2770700&PageFlow=FixedWidth
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
#include <ldap.h>

int ldap_modify_ext(
    LDAP *ld,
    char *dn,
    LDAPMod *mods[],
    LDAPControl **sctrls,
    LDAPControl **cctrls,
    int *msgidp );

int ldap_modify_ext_s(
    LDAP *ld,
    char *dn,
    LDAPMod *mods[],
    LDAPControl **sctrls,
    LDAPControl **cctrls );

void ldap_mods_free(
    LDAPMod **mods,
    int freemods );
The mod_op field is used to specify the type of modification to perform and should be one of LDAP_MOD_ADD, LDAP_MOD_DELETE, or LDAP_MOD_REPLACE. The mod_type and mod_values fields specify the attribute type to modify and a null-terminated array of values to add, delete, or replace respectively. The mod_next field is used only by the LDAP server and may be ignored by the client.
If you need to specify a non-string value (e.g., to add a photo or audio attribute value), you should set mod_op to the logical OR of the operation as above (e.g., LDAP_MOD_REPLACE) and the constant LDAP_MOD_BVALUES. In this case, mod_bvalues should be used instead of mod_values, and it should point to a null-terminated array of struct bervals, as defined in <lber.h>. For LDAP_MOD_ADD modifications, the given values are added to the entry's attribute, creating the attribute if necessary. For LDAP_MOD_DELETE modifications, the given values are deleted from the entry's attribute; if no values are given, the entire attribute is removed, and in that case the mod_values field should be set to NULL. For LDAP_MOD_REPLACE modifications, the attribute will have the listed values after the modification, having been created if necessary. All modifications are performed in the order in which they are listed.
ldap_mods_free() can be used to free each element of a NULL-terminated array of mod structures. If freemods is non-zero, the mods pointer itself is freed as well.
ldap_modify_ext_s() returns a code indicating success or, in the case of failure, indicating the nature of the failure. See ldap_error(3) for details
The ldap_modify_ext() operation works the same way as ldap_modify_ext_s(), except that it is asynchronous. The integer that msgidp points to is set to the message id of the modify request. The result of the operation can be obtained by calling ldap_result(3).
Both ldap_modify_ext() and ldap_modify_ext_s() allow server and client controls to be passed in via the sctrls and cctrls parameters.
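As an illustration only (not taken from this manual page), the sketch below shows how an LDAPMod array might be built and passed to ldap_modify_ext_s(); the DN, attribute name, and values are made-up placeholders, and error handling is reduced to a single check.
#include <stdio.h>
#include <ldap.h>

int replace_mail(LDAP *ld)
{
    /* hypothetical entry and attribute values */
    char *dn = "uid=jdoe,ou=people,dc=example,dc=com";
    char *mail_values[] = { "jdoe@example.com", NULL };

    LDAPMod mail_mod;
    LDAPMod *mods[2];

    mail_mod.mod_op = LDAP_MOD_REPLACE;    /* replace any existing values */
    mail_mod.mod_type = "mail";
    mail_mod.mod_values = mail_values;     /* null-terminated array of string values */

    mods[0] = &mail_mod;                   /* the mods array itself is null-terminated */
    mods[1] = NULL;

    int rc = ldap_modify_ext_s(ld, dn, mods, NULL, NULL);
    if (rc != LDAP_SUCCESS)
        fprintf(stderr, "ldap_modify_ext_s: %s\n", ldap_err2string(rc));
    return rc;
}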
|
http://www.makelinux.net/man/3/L/ldap_modify_ext_s
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
Issues
ZF-2802: Allow Zend_Loader to load classes with real namespaces
Description
PHP 5.3 introduces real namespaces (Namespace::Class). With the following small patch Zend_Loader can handle them too.
Posted by Lars Strojny (lars) on 2008-03-04T07:46:01.000+0000
Allow Zend_Loader to handle classes with real namespaces
Posted by Wil Sinclair (wil) on 2008-03-25T21:30:49.000+0000
Please categorize/fix as needed.
Posted by Matthew Weier O'Phinney (matthew) on 2008-11-22T09:15:06.000+0000
Until 5.3 has a stable API, there is no sense in changing this functionality.
Posted by Keith Pope (mute) on 2009-11-01T03:14:55.000+0000
Now that 5.3 has been released could this be applied in a 1.x release?
I am just looking at doctrine 2 integration with the 1.x series and this would be a helpful addition, though this can be achieved by adding a custom autoloader into the stack as a workaround.
Posted by Benjamin Eberlei (beberlei) on 2009-11-01T08:29:35.000+0000
@Keith I have exactly the same problem while writing a Doctrine 2 resource plugin, I'll reopen the issue and put it to very high priority.
Posted by Benjamin Eberlei (beberlei) on 2009-11-01T08:30:23.000+0000
Hm issue is not reopenable. I'll add a new one.
|
http://framework.zend.com/issues/browse/ZF-2802
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
I'm afraid that there are no great coding secrets revealed in this article. I'm posting it because although there seem to be a great number of similar tools available they either cost cold hard cash or are free but fairly inflexible in what they do and I thought (hoped) that other .NET developers would find this either immediately useful or a good leg up for developing their own (improved) version.
If you're completely new to XSLT then you may find the recursive stuff and the demonstration of how to pass global parameters to a template from code useful; otherwise it's all pretty vanilla.
All versions of C# and later versions of VB.NET allow the addition of summary comment blocks on types, methods and properties and these summary comment blocks can be output as an XML file during compilation if document file generation is requested. Document file generation can be requested from the IDE (project / properties) or by specifying the /doc flag on the command line.
The conversion of the compiler generated XML to a display format meeting local standards is left to the programmer. This document outlines the construction of a simple summary HTML document generator using the tools available in Framework 3.5.
The compiler recognises the following tags in source code:
<summary>
<remarks>
<value>
<param>
<returns>
<exception cref="...">
<exception cref="System.ArgumentException">
</exception>
<example>
<c>
<code>
<see cref="{a.n.other_member}">...</see>
<seealso cref="{a.n.other_member}">...</seealso>
<list>
<listpara>
The utility as supplied doesn't deal with the less commonly used elements listed above but it does allow you to embed simple HTML formatting in the <remarks> section as shown below.
/// <summary>
/// Apply an XSL transform to a well formed XML string
/// returning the transform output as a string.
/// </summary>
/// <param name="xmlToTransform">A string containing well formed XML.</param>
/// <param name="xslTemplatePath">Fully specified XSLT file path</param>
/// <param name="paramList">A name, value dictionary of global template parameters. Can be empty but not null.</param>
/// <returns>A well formed XML string.</returns>
/// <example>
///string template = Server.MapPath(@"~/XSL/ToTypeHierarchyXML.xsl");
///string transformOutput = Library.XML.ApplyTransform(source, template, new Dictionary(string, string));
/// </example>
/// <exception cref="System.Xml.Xsl.XsltException"></exception>
/// <exception cref="System.Xml.XmlException">
/// Method rethrows any XML or XSLT exceptions it encounters.
/// </exception>
/// <remarks>
/// <ol type="1">
/// <li>The template file must exist and the process must have read access.</li>
/// <li>This and other methods are not intended for use with large XML documents.</li>
/// <li>Not intended for use with large XML documents.</li>
/// </ol>
/// </remarks>
public static string ApplyTransform(string xmlToTransform,
string xslTemplatePath,
Dictionary<string,string> paramList)
Each component of the assembly is represented by a <member> block and the ownership of methods and properties by types is shown by the use of fully qualified names rather than in the structure of the XML. Each fully qualified member name is given a single letter prefix to indicate its classification.
Member prefixes include:
The extract below is from an XML document file produced by the compiler.
<?xml version="1.0"?>
<doc>
<assembly>
<name>Documenter</name>
</assembly>
:
<member name="T:Documenter.Library.XML">
<summary>
Group XML appropriate methods.
</summary>
</member>
<member name="M:Documenter.Library.XML.FileTransform(System.String,System.String)">
<summary>
Apply a transform to a file returning a string
</summary>
<param name="filePath"></param>
<param name="xslTemplate"></param>
<returns></returns>
</member>
:
<member name="T:Documenter.Library.ForTesting.myHandler">
<summary>
Delegate : Here to generate an event member for test purposes.
</summary>
<param name="alpha">first parameter</param>
<param name="beta">second parameter</param>
<returns>True if method invocation succeeds.</returns>
</member>
:
</doc>
You'll note from the fragment above that:
Some experimentation (and not a little swearing) showed this "flat" format generated by the compiler not to be suitable for direct generation of output using XSLT 1.0 (Framework 3.5 does not support XSLT 2.0) so the document generation process is run as a two step process:
The templates used are:
This uses a simple for-each to identify the <member> element for a type as well as any nested types it may contain and these elements are used to generate a cut-down copy of the "flat" source XML.
There are three parts to this template worth mentioning:
The flat nature of the compiler generated XML means that the easiest way to process it is using nested for-each iterators (beware; for-each is not an indexed for loop). This is almost certainly not the fastest nor most elegant solution, but it is easy to implement and understand.
A side effect of this approach is that if the owning type (class/interface/struct) doesn't have a summary block then none of its methods will be documented.
Member (method, property, event) names were initially extracted using one or more calls to the substring-after() method, but very, very occasionally the first one or two characters of the member name would be stripped. Most unsatisfactory.
There is no built in delimited string splitter in XSLT 1.0 so we have to roll our own. Complicating factors are that xsl:variables are write once read many and there is no equivalent of an indexed for loop. The standard approach is to use recursion.
This template takes a fully qualified member name, such as "M:Documenter.Library.Extensions.DefaultValue", and returns the member name without a namespace prefix. Use of the term "returns" is slightly misleading, it may be better to think of recursive templates as delaying the writing of the element or attribute to the output stream until the desired end point is reached.
This is handed a CSV list of parameter types. Unlike unadornedName we are interested in writing to the output stream at each stage not just at the end. As each parameter type is encountered a <param> node is written. If the input string is not empty then a recursive call is made to the template.
An interesting (read annoying) wrinkle was found late on.
If you have a method with a signature :
public static string ApplyTransform(string xmlToTransform,
string xslTemplate,
Dictionary<string,string> paramList)
You end up with intermediate XML of the form:
String,String,Dictionary{System.String,System.String}
So it becomes necessary to treat the "{" & "}" as escape characters in toNodes to avoid splitting the generic type's arg list. The result is a nested <xsl:choose> structure to handle this.
Having said that the toNodes and unAdorned look to me as though they are eminently reusable.
The converted output has the following general layout:
<assembly name="...">
<type name="...">
<typeHeader>
<summary> a summary comment</summary>
<!--
Delegates and other paramterised types will also have param,
value and returns elements.
-->
<param name="firstArg">The first argument.</param>
<param name="secondArg">The second argument.</param>
</typeHeader>
<!-- method comment -- >
<method name="..." paramTypes="...">
<summary>... </summary>
<paramType typeName="..." />
<paramType typeName="..." />
<param name="..." />
<param name="..." />
<returns>...</returns>
<remarks>...</remarks>
<example>...</example>
<exception cref="">...</exception>
</method>
<!-- property comment -- >
<property name="..." paramTypes="...">
<summary>... </summary>
<paramType typeName="..." />
<paramType typeName="..." />
<param name="..." />
<param name="..." />
<returns>...</returns>
<remarks>...</remarks>
<example>...</example>
<exception cref="">...</exception>
</property>
<!-- event comment -- >
<event name="..." paramTypes="...">
<summary>... </summary>
<paramType typeName="..." />
<paramType typeName="..." />
<param name="..." />
<param name="..." />
<returns>...</returns>
<remarks>...</remarks>
<example>...</example>
<exception cref="">...</exception>
</event>
<!-- T:Assembly.Namespace.Class.AType-->
<nestedType xref="Assembly.Namespace.Class.AType"
name="AType"
summary="A nested type (struct, class, enum, delegate)." />
</type>
</assembly>
<type name="Library.Extensions">
<!-- M:Documenter.Library.Extensions.DefaultValue(System.String,System.String) -->
<method name="DefaultValue" paramTypes="System.String,System.String">
<summary>Deal with null strings.</summary>
<paramType typeName="System.String" />
<paramType typeName="System.String" />
<param name="s" />
<param name="defaultValue" />
</method>
Notes:
<paramType>
This template has only one section of significance; the paramType template.
This template is responsible for matching parameter names with parameter types. Within the template the most important lines are :
<xsl:variable
<xsl:variable
The first line notes the position ( a 1 based index ) of the current <paramType> node in the sequence of <paramType> nodes for the current member. The next line retrieves the parameter name from the sequence of <param> nodes of the parent member (method etc.) of the current <paramType>. If the summary block for the member is up to date there will be a 1:1 correspondence. This correspondence means that we can detect the addition of new parameter where there has been no update of the member's <summary> block by the absence of a matching <param> node for a <paramType> node . Unfortunately it isn't possible to identify the removal of or renaming of a parameter.
Something else worth noting is the separation of the position() call from the node access. Use of a single line :
<xsl:variable
...results in the first node in the <param> sequence being retrieved for each <paramType> regardless of the <paramType> index position. This is unexpected; position() should, "...report the position of the context item in the sequence." It would seem that when used to index the param[] sequence it interprets the context as <param> rather than <paramType>. By retrieving the value as $position in the first line we ensure that the correct context is used.
<!-- Lay out parameters where we have parameter types available. -->
<xsl:template
<span class="typeName">
<!-- Mark reference types with (out) -->
<xsl:choose>
<xsl:when
<xsl:value-of
<xsl:value-of
</xsl:when>
<xsl:otherwise>
<xsl:value-of
<xsl:text </xsl:text>
</xsl:otherwise>
</xsl:choose>
</span>
<xsl:variable
<span class="parameterName">
<!-- If the summary block is up to date show the parameter name
otherwise note that the block is out of date. -->
<xsl:variable
<xsl:choose>
<xsl:when
<span class="remarks">{ Summary block needs updating. }</span>
</xsl:when>
<xsl:otherwise>
<xsl:value-of
</xsl:otherwise>
</xsl:choose>
</span>
<!-- Write out any remarks for this parameter -->
<div class="indentedRemarks">
<xsl:value-of
</div>
</xsl:template>
Once we have our templates, creating the output couldn't really be any easier ...
void onGenerate(object sender, EventArgs e)
{
string templatePath = null;
Dictionary<string, string> searchParams = new Dictionary<string,string>();
string typeName = fqClassName.Text;
// Get the content of the document file.
HttpPostedFile f = FileUpload1.PostedFile;
byte[] buffer = new byte[f.InputStream.Length];
f.InputStream.Read(buffer, 0, buffer.Length);
System.Text.Encoding enc = new System.Text.UTF8Encoding();
string documentation = enc.GetString(buffer).Trim();
if (string.IsNullOrEmpty(documentation))
Response.Write(@"Couldn't upload the XML. Try again.");
else
{
// If we're only interested in one type then extract it and its constituents
// into a mini version of the source XML.
if (!string.IsNullOrEmpty(typeName))
{
searchParams.Add("typeNameSought", typeName);
templatePath = Server.MapPath(@"~/XSL/SelectType.xsl");
documentation = Library.XML.ApplyTransform(documentation,
templatePath,
searchParams);
}
// Now turn the flattish compiler output into something
// with a bit more of a hierarchy about it then...
templatePath = Server.MapPath(@"~/XSL/ToTypeHierarchyXML.xsl");
documentation = Library.XML.ApplyTransform(documentation,
templatePath,
new Dictionary<string, string>());
// ...turn the hierarchical XML into HTML before...
templatePath = Server.MapPath(@"~/XSL/ToHTML.xsl");
documentation = Library.XML.ApplyTransform(documentation,
templatePath,
new Dictionary<string, string>());
// ...pushing it back to the user.
Response.Write(documentation);
}
Response.End();
}
If there are non-printing characters before the opening <XML ... > tag in the source XML then an invalid XML exception is thrown when the transform is attempted. VB.NET seems to be guilty of this.
The Library.XML class is just a wrapper for some standard .NET CompiledTransform calls and MSDN has a good explanation of their use
Running the transforms from a browser to a web page has a number of advantages:
The yellow title bars? Ahh BeOS. Now there was a proper operating system...
I stumbled across a couple of points that may be worth pointing out if you are new to XSL.
Don't be afraid of using <xsl:variable>; it may not be good style to do so, but variables can make things a great deal easier to read (and write), especially if you've got deeply nested string function calls.
substring-before returns an empty string if the string doesn't contain the delimiting string or character used. This is unhelpful. There are a couple of ways around this, I've used both. Either an <xsl:choose> block, see unAdorned name for an example. The choose block is a little verbose. Much more straightforward is the use of concat() to ensure the delimiter was found.
substring-before( concat( @name, '(' ) , '(' )
substring-after( concat( '(', @name ) , '(' )
[ ] Project Manager
[ ] Team Leader,
[ ] Evil Overlord
(tick all that apply)
No need to worry. You can just point the PM, TL or EO at a utility such as this. That and a few class diagrams from the IDE dropped into your favourite word processor will go a very long way to satisfying that.
|
http://www.codeproject.com/Articles/557138/Formatting-NET-Assembly-Summary-Documentation
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
Well, in my java project, I need to create using netbeans a program similar to the Amazon website. I posted the link so you can see all of it, it can help you:
Well I need to make a menu in which the user can register as a member, modify profile etc, browse for books and view their data,etc..
Here is my code from the class Member:
package amazonapplication; import java.util.Scanner; /** *This class describes a Member * @author */ public class Member { private String username; private String password; /** * * @param user name as a string * @param pass password as a string */ public Member(String user, String pass ){ username=user; password=pass; } /** * Setter of Password * @param password the password to set */ public void setPassword(String password){ this.password=password; } /** * Setter of Username * @param username the username to set */ public void setUsername(String username){ this.username=username; } /** * Getter of Username * @return the username */ public String getUsername(){ // get method for username return username; } /** * Getter of Password * @return the password */ public String getPassword(){ // get method for password return password; } /** * Method used to change current password * @param password current password to change * @param newpass new password to set * @return boolean change of password successful or not */ public boolean changePass(String password, String newpass){ if(this.getPassword().equals(password)){ this.setPassword(newpass); return (true); } else return (false); } /** * Method used to set new password if old one is lost/forgotten * @param username member's username * @param newpass memeber's new password * @return boolean: procedure successful or not */ public boolean forgotpass(String username, String newpass){ if(this.username.equals(username)){ this.setPassword(newpass); return (true); } else return (false); } /** * Method used to authenticate user * @param username name as string * @param password password as string * @return boolean: authentification successful or not */ public void authenticate (String username, String password)throws PassException{ if(this.username.equals(username) && this.password.equals(password)) System.out.println("Authentification Successful!"); else throw new PassException (); } /** * Prints member information * @return String to print */ @Override public String toString(){ return "username is:"+this.username+ "\npassword is:"+this.password; } }
And here is my code for the class MemberList (not complete yet, I don't know how to do save and load behaviors):
/* * To change this template, choose Tools | Templates * and open the template in the editor. */ package amazonapplication; import java.util.ArrayList; import java.util.Collections; import java.util.Comparator; import java.util.LinkedList; import java.util.List; import java.util.ListIterator; /** * * @author */ public class MemberList { List<Member> members; public MemberList(){ this.members=new LinkedList<>(); } public boolean addmember(Member memb){ return members.add(memb); } public boolean removeMember(Member memb) { return members.remove(memb); } }
Well now that I have coded those classes, I should use them for my main.
But I don't know how, in my main, I can manage when a user wants to register or do something else from the menu. Does it have a relation with associations ? Its said in my assignement: Finish the implementation of the class Member, a member has an association with OrderList.
Well I am lost
|
http://www.javaprogrammingforums.com/collections-generics/19435-problem-assignement-creating-collections-class-use-them.html
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
in reply to
SOAP reply with correct namespace
Am I right in what I think?
I think no.
Anyone know how to make SOAP::Transport::HTTP::Server reply with the namespace provided by the client?
Specify a namespace, use one the client gives you, basic SOAP::Data.... make some xml
BTW, the namespace seems to be an issue because Visual Studio client won't recognise the result without the namespace it provided.
That sounds backwards, but it probably isn't the case. The likely issue is your WSDL specification is wrong, in addition to SOAP::Lite not handling it correctly.
PS. I hate soap.
|
http://www.perlmonks.org/?node_id=991364
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
Implicit Block Usage.
This is where I got stuck at the previous step, that bit of code reminded me of the nightmares that I had while learning java threading nearly half a lifetime ago. So let's skip over this bit. As far as I am concerned implicit blocks exist only for the explicit purpose of creating unreadable and unmanageable code.
Classes and Objects
Here is a short and sweet definition of an empty class:
class MyClass < Object
end
Want to know if obj is an instance of some class? then obj.instance_of? someclass ought to do the trick
In both Java and Ruby, Object is at the root of all (evil). But while Java has what it calls primitives, Ruby does not, everything in Ruby is an object.
The constructor is always named ‘initialize’ and instance variables are always referenced by using an @ sign, which means @ is the equivalent of ‘this.’ in java and ‘this->’ in PHP. A bit of a quirk is that when calling another member method, you write it as self.other_method instead of @other_method. No prizes for guessing that self is the current object aka this.
In order to encourage bad programming, Ruby allows you to override methods in classes without extending that class. Just redefine the existing class and add a new method or replace an existing one.
class ExistingClass
  def dumbWayToOverride
    # dumb code here
  end
end
By default all instance variables are private. Getters and setters like in Java Beans. A getter would look like:
def param
@param
end
on the other hand a setter would look like the following and would give a small whiff of operator overloading in C++
def param=(value)
@param = value
end
But if you really want your instance variables to be public, you can make it so with the following:
attr_writer :param1, :param2
attr_reader :param2, :param2
note that read and write ability has to be granted separately. However it turns out that all this can be shortened still further with attr_accessor :param1, :param2
With methods you can use the private, protected and public keywords as a prefix (if absent, assumed public). Curiously the constructor is private whereas in java it's public.
When a Rubist says class scoped variables they are talking about what you and I call static variables. Static variables are accessed with a @@ prefix. To learn more about variables and how Ruby pays homage to Perl, visit:
Also available are static methods
class Myclass
  def Myclass.static_baby
    # some code
  end
end
These poor souls are only allowed to access class scoped things. The method call looks like Myclass.static_baby.
Modules and require
Worshippers of many ‘object oriented languages’ argue that multiple inheritance is evil, all the while twisting themselves into awkward shapes to overcome the lack of multiple inheritance. In Ruby, Modules is the workaround of lack of multiple inheritance. Modules can be ‘include’d in a class
And lastly all this is tied together with good old require which is like require_once in PHP and allows you to scatter your code through several files.
Conditionals Revisited
Oh did I tell you that if statements need to have a ‘then’
Ruby has a ternary operator
You can append an if conditional to the end of a statement exactly like you do in perl. This is called a statement modifier. Rubists say perl is impossible to read, I wonder why.
puts “i am fat” if weight > 10
you can append an if conditional to a begin end block but why anyone would want to do such a thing isn't properly explained.
The while and the until conditionals can also be appended to the end of a statement, it’s called the loop modifier construct. By placing the while or until to the end of a begin/end block the do/while behaviour can be observed. In other words you can create a loop which is executed at least once.
Switch, Case
Ruby sure has switch/case badly mixed up but then so does SQL. In Ruby case is used instead of switch, when instead of case and else instead of default, which is exactly the way it’s done in stored procedures. Also possible is the ‘target less’ mode that you use on ordinary SQL statements. That is the ‘case constant’ changes to just plain old ‘case’ and ‘when’ changes to ‘when condition’, ‘when condition2’ and so on. The advantage of the second form of course is that you are not just testing for equality which is the limitation in languages like PHP.
The break keyword in C like Switch case is not needed because one does not fall through a when. That's a good thing because fall through in C switch case wasn't a very hot idea in the first place.
Just because break is not needed with switch case doesn't mean it's absent. It exists! and is intended to be used to break out of a loop. The continue of C is replaced by next. A nicety is that an if conditional can be appended to a break (since it’s just another statement) then it’s known as a statment modifier
The for loop in ruby isn’t the for loop in C (isn’t that becoming a recurring theme?) it’s the for loop in python (not that I remember it too well) but it does taste a bit like foreach in PHP. It’s also like a substitute for ruby each (see above)
Exception Handling
Ruby replaces try and catch with rescue, that certainly makes life a lot easier. Just add rescue nameOfException => e after your code. The => e is optional. Use it if you don’t want access to an instance of the exception to play with. Here is an example from the Ruby programming guide.
You can use else to deal with situations where exceptions doesn’t occur. The documentation doesn't say whether trying to cause confusion was a secondary objective or not. You can cause more trouble for yourself by using the ‘retry’ keyword. As the name suggests, it retries to execute the code block that produced the exception.
The finally keyword of Java is replaced by ensure and throw by raise. Like the other conditionals we have seen, rescue can also be used as a statement modifier
To ensure that people familiar with other programming languages have a hard time of it, and to make sure that Ruby code is spaghetti code, they have added catch and throw, but the meaning is very different from the keyword pair in other languages. In ruby they are used to make ‘goto’ or spaghetti code. It's past 1:30 in the morning and time to catch a few winks. There is still some time left before the 24 hours are used up.
|
http://www.raditha.com/blog/archives/learn-ruby-in-24-hours.html
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
Learn how the Reversi sample uses Windows Store app features
The Reversi sample uses several common features of Windows Store apps using XAML and C#. This topic describes how the sample uses some of these features, and provides links to key feature topics.
This topic does not require you to understand the whole sample, but it does assume that you already know XAML and C#, and already understand the basics of each feature, or are willing to learn by reading the linked topics. For info on app development fundamentals, see Create your first Windows Store app using C# or Visual Basic.
For a general introduction to the sample, see Reversi, a Windows Store game in XAML, C#, and C++. To understand how the various features work together as a whole, see Understand the Reversi app structure. To learn how the original C# game engine was ported to C++, see Learn about the Reversi C++ game engine.
Download the Reversi sample app or browse the source code.
Tile and splash screen
The app tile and splash screen are the first things that a user of your app sees. You can use them to provide a compelling entry point and to display your branding. The basics are trivial, but you can also do some more complex things, as described in the documentation.
Key resources:
- Creating tiles
- Adding a splash screen
- Guidelines and checklist for tiles and badges
- Guidelines and checklist for splash screens
- App tiles and badges sample
- Splash screen sample
Reversi provides only basic tile and splash-screen support. This includes square and wide tiles and a splash screen, shown here at reduced size.
The image file names are set in the Package.appxmanifest file. You can provide images at multiple scales to support multiple screen sizes. No other implementation is required for this simple usage.
App bar
App bars provide a standard place to put app commands. By default, users can show or hide an app bar as needed, making it a good place for commands used less frequently. This helps keep your main UI focused on direct interactions with your content.
Key resources:
Reversi includes a few secondary commands that are well suited to the app bar: the ability to pause the clock and to undo or redo moves. During normal game play the app bar is hidden, but the user can swipe from the top or bottom of the screen to display or hide it.
This code, from GamePage.xaml, shows the app bar definition. Although the background and border are transparent, the Background property is set to {x:Null} to prevent the invisible app bar from blocking taps and clicks. This setting is necessary because the app bar extends across the whole screen and overlaps the bottom row of the game board.
<Page.BottomAppBar> <CommandBar x: <CommandBar.SecondaryCommands> <AppBarButton Icon="Pause" Label="Pause" Command="{Binding Clock.PauseCommand}" Click="DismissAppBar" Visibility="{Binding Clock.IsPauseButtonVisible, Converter={StaticResource BooleanToVisibilityConverter}}"/> <AppBarButton Icon="Play" Label="Play" Command="{Binding Clock.PlayCommand}" Click="DismissAppBar" Visibility="{Binding Clock.IsPauseButtonVisible, Converter={StaticResource BooleanToVisibilityConverter}, ConverterParameter=Reverse}"/> <AppBarButton Icon="Undo" Label="Undo" Command="{Binding UndoCommand}"/> <AppBarButton Icon="Redo" Label="Redo" Command="{Binding RedoCommand}"/> </CommandBar.SecondaryCommands> </CommandBar> </Page.BottomAppBar>
Reversi uses the CommandBar and AppBarButton controls to get the default behavior and style. The button behavior and its enabled state are provided by view-model commands bound to the button Command properties, as described in the Commands section.
The Play and Pause buttons work like a single toggle button. To achieve this effect, the buttons' Visibility properties are bound to the same view-model property. Both bindings use a
BooleanToVisibilityConverter, but one of them also has a ConverterParameter property setting that reverses the effect of the binding. That way, each button is visible only when the other one isn't. For more info, see the Data binding section.
Toast notifications
Toast notifications alert your users when an important event occurs in your app, even if another app is currently active.
Key resources:
- Sending toast notifications
- Guidelines and checklist for toast notifications
- Toast notifications sample
In Reversi, the computer may take a while to make its move. If you switch to another app while you wait, a toast notification will alert you when it's your turn.
Reversi uses the minimum code required for toast notifications, and sets the Toast capable field to Yes in the Package.appxmanifest designer. The toast code is easily reusable, so it's in a helper class in the Common folder.
In GameViewModel.cs:
In Toast.cs:
public static void Show(string text)
{
    const string template =
        "<toast duration='short'><visual><binding template='ToastText01'>" +
        "<text id='1'>{0}</text></binding></visual></toast>";
    var toastXml = new XmlDocument();
    toastXml.LoadXml(String.Format(template, text));
    var toast = new ToastNotification(toastXml);
    ToastNotificationManager.CreateToastNotifier().Show(toast);
}
Settings flyouts
The Settings charm provides standardized access to app settings.
Key resources:
Reversi has two Settings flyouts, one for display options, and one for new-game options.
This code from App.xaml.cs shows how Reversi handles the SettingsPane.CommandsRequested event to create SettingsCommand objects. When activated, each command creates and shows a SettingsFlyout control.
private void OnCommandsRequested(SettingsPane sender,
    SettingsPaneCommandsRequestedEventArgs args)
{
    args.Request.ApplicationCommands.Add(new SettingsCommand("Display", "Display options",
        _ => (new DisplaySettings() { DataContext = SettingsViewModel }).Show()));
    args.Request.ApplicationCommands.Add(new SettingsCommand("NewGame", "New game options",
        _ => (new NewGameSettings() { DataContext = SettingsViewModel }).Show()));
}
Sharing content
The share contract lets your app share data that users can send to other apps. For example, users can share data from your app into an email app to create a new message.
Key resources:
Windows provides built-in support for sharing an image of the app, and Reversi needs no additional functionality.
Data binding
Data binding lets you connect UI controls to the data they display so that changes in one will update the other. Data binding is common for data-entry forms, but you can also use it to drive your entire UI and to keep your UI separate from your app logic.
Key resources:
Reversi uses data bindings to connect its UI (or "view" layer) to its app logic (or "view model" layer). This layering helps separate the UI from other code, and is known as the Model-View-ViewModel (MVVM) pattern. For info about how Reversi uses this pattern, see Reversi app structure. For a short intro to MVVM, see Using the Model-View-ViewModel pattern.
Most of the bindings in Reversi are defined in XAML by means of the Binding markup extension, although code-behind is used in a few cases (for example, in the Board.xaml.cs file). Each page sets its DataContext property, which all the elements on the page use as the data source for their bindings.
UI updates
The data bindings drive the UI in Reversi. UI interactions cause changes to the data source properties, and the data bindings respond to these changes by updating the UI.
These updates work because the Reversi view-model classes inherit the
BindableBase class. This class is in the Common/BindableBase.cs file, and provides a standard INotifyPropertyChanged implementation and a few support methods. The
SetProperty method updates a property's backing value and also any bound UI by using a single method call. The
OnPropertyChanged method updates the UI that is bound to specified properties. This is useful to control the timing of the updates, and for properties that get their values from other properties.
This code from GameViewModel.cs shows the basic usage of both
SetProperty and
OnPropertyChanged.
Value conversion
You can convert any property values to a form more suitable for binding by creating calculated properties, which are properties that get their values from other properties.
This code from GameViewModel.cs shows a simple calculated property. UI that is bound to this property is updated by the matching
OnPropertyChanged call from the previous example.
Calculated properties are easy to create for any kind of conversion you might need, but they tend to clutter your code. For common conversions, it is better to put the conversion code into a reusable IValueConverter implementation. Reversi uses the
NullStateToVisibilityConverter and
BooleanToVisibilityConverter classes in the Common/Converters folder for bindings that show and hide various UI elements.
This binding from StartPage.xaml shows or hides a panel depending on whether a property has a value.
This binding, from NewGameSettings.xaml, shows or hides a panel depending on the state of a ToggleSwitch control.
For more examples, see App bar.
Commands
Button behaviors are often implemented with Click event handlers in code-behind files. Reversi does this for navigation buttons, but for other buttons, it separates the button UI from the non-UI code that the button invokes. To do this, the Button.Command properties are bound to view-model properties that return ICommand implementations.
Reversi command properties are of type
DelegateCommand or
DelegateCommand<T>. These classes are in the Common/DelegateCommand.cs file, and they provide standard, reusable ICommand implementations. You can use these classes to simplify the creation of single-use commands and to keep the necessary code confined to single property implementations.
This code from GameViewModel.cs shows the move command used by the board spaces, which are custom buttons. The ?? or "null-coalescing" operator means that the field value is returned only if it isn't null; otherwise, the field is set and the new value is returned. This means that a single command object is created the first time a property is accessed, and the same object is reused for all future accesses. The command object is initialized by calling the
DelegateCommand<ISpace>.FromAsyncHandler method with references to the
MoveAsync and
CanMove methods. These methods provide the implementation for the ICommand.Execute and CanExecute methods.
The CanExecute method is called by the data binding to update the enabled state of the button. However, command bindings rely on change notification similar to that of other bindings (discussed in UI updates). This code from GameViewModel.cs shows how the
UpdateView method synchronizes the view-model state with the model state and then calls
OnCanExecuteChanged for each command before continuing with the next move.
Custom dependency properties
Reversi uses custom dependency properties in its custom controls so that it can use data binding updates to drive visual state changes. Visual states and animated transitions are defined in XAML by using the VisualStateManager class. However, there is no way to bind a visual state directly to a view-model property. Custom dependency properties provide targets for binding to view-model properties. The dependency properties include property-changed callbacks that make the necessary VisualStateManager.GoToState method calls.
This code shows how the
PlayerStatus control uses code-behind to bind its custom dependency properties to view-model properties. Only one of the dependency properties is shown here, including its property-changed callback method. The callback and the OnApplyTemplate method override both call the update method. However, the OnApplyTemplate call initializes the control for its first appearance on screen, so it does not use animated transitions.
public PlayerStatus() { DefaultStyleKey = typeof(PlayerStatus); SetBinding(CurrentPlayerProperty, new Binding { Path = new PropertyPath("CurrentPlayer") }); SetBinding(IsClockShowingProperty, new Binding { Path = new PropertyPath("Settings.IsClockShowing") }); SetBinding(IsGameOverProperty, new Binding { Path = new PropertyPath("IsGameOver") }); SetBinding(WinnerProperty, new Binding { Path = new PropertyPath("Winner") }); } protected override void OnApplyTemplate() { base.OnApplyTemplate(); UpdatePlayerState(false); UpdateClockState(false); UpdateGameOverState(false); } public bool IsClockShowing { get { return (bool)GetValue(IsClockShowingProperty); } set { SetValue(IsClockShowingProperty, value); } } public static readonly DependencyProperty IsClockShowingProperty = DependencyProperty.Register("IsClockShowing", typeof(bool), typeof(PlayerStatus), new PropertyMetadata(true, IsClockShowingChanged)); private static void IsClockShowingChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) { (d as PlayerStatus).UpdateClockState(true); } private void UpdateClockState(bool useTransitions) { GoToState(IsClockShowing ? "ClockShowing" : "ClockHidden", useTransitions); } private void GoToState(string state, bool useTransitions) { VisualStateManager.GoToState(this, state, useTransitions); }
Asynchronous code
Asynchronous code helps your UI stay responsive while your app is busy with time-consuming operations.
Key resources:
Reversi uses asynchronous code to perform moves in a game. Each move takes at least a second to complete, including the move animation, and AI moves can take much longer. However, the UI stays responsive at all times, and user commands (such as undo) will cancel a move in progress.
This code from GameViewModel.cs shows how Reversi uses the async and await keywords, the Task class, and cancellation tokens. Note the use of AsTask to integrate with the Windows Runtime asynchronous code in the
Game class. (For more info, see the next section.)
private async Task AiMoveAsync() { var cancellationToken = GetNewCancellationToken(); // Unlike the MoveAsync method, the AiMoveAsync method requires a try/catch // block for cancellation. This is because the AI search checks for // cancellation deep within a recursive, iterative search process // that is easiest to halt by throwing an exception. try { // The WhenAll method call enables the delay and the AI search to // occur concurrently. However, in order to retrieve the return // value of the first task, both tasks must have the same signature, // thus requiring the delay task to have a (meaningless) return value. var results = await Task.WhenAll( Game.GetBestMoveAsync(CurrentPlayerAiSearchDepth) .AsTask(cancellationToken), Task.Run(async () => { await DelayAsync(MinimumTurnLength, cancellationToken); return (ISpace)null; }) ); // Perform the AI move only after both the // search and the minimum delay have passed. LastMoveAffectedSpaces = await Game.MoveAsync( results[0]).AsTask(cancellationToken); if (cancellationToken.IsCancellationRequested) return; await OnMoveCompletedAsync(cancellationToken); } catch (OperationCanceledException) { System.Diagnostics.Debug.WriteLine("cancelled with exception"); } }
Using a Windows Runtime Component
Implementing some of your code as a Windows Runtime Component enables you to reuse that code in different apps, on different platforms, or with different languages. You can also more easily replace the component with an alternate implementation in another language.
Key resource:
Reversi implements its core game logic as a Windows Runtime Component in order to fully decouple it from the app. This enables it to support future extensibility and code reuse. Reversi also includes a C++ version of the game engine as a higher-performance alternative to the original C# version. For more info, see Learn about the Reversi C++ game engine.
This code from Game.cs shows how Reversi uses Task-based asynchronous code (including the async and await keywords) but exposes the results through Windows Runtime asynchronous interfaces. It also shows how the cancellation token from the
GameViewModel code is consumed by the
Game class.
The first and third methods in the example code call the AsyncInfo.Run method to return an IAsyncOperation<T>. This wraps the return value of the task and enables cancellation. The second example calls the WindowsRuntimeSystemExtensions.AsAsyncAction method to return an IAsyncAction. This is useful for tasks that don't have return values and don't need cancellation.
public IAsyncOperation<IList<ISpace>> MoveAsync(ISpace move)
{
    // Use a lock to prevent the ResetAsync method from modifying the game
    // state at the same time that a different thread is in this method.
    lock (_lockObject)
    {
        return AsyncInfo.Run(cancellationToken => Task.Run(() =>
        {
            if (cancellationToken.IsCancellationRequested) return null;
            var changedSpaces = Move(move);
            SyncMoveStack(move);
            return changedSpaces;
        }, cancellationToken));
    }
}
public IAsyncAction AiMoveAsync(int searchDepth) { return Task.Run(async () => { // If it is the AI's turn and we're not at the end of the move stack, // just use the next move in the stack. This is necessary to preserve // the forward stack, but it also prevents the AI from having to search again. var bestMove = Moves.Count < MoveStack.Count ? MoveStack[Moves.Count] : await GetBestMoveAsync(searchDepth); await MoveAsync(bestMove); }).AsAsyncAction(); } public IAsyncOperation<ISpace> GetBestMoveAsync(int searchDepth) { if (searchDepth < 1) throw new ArgumentException( "must be 1 or greater.", "searchDepth"); return AsyncInfo.Run(cancellationToken => Task.Run(() => { return (ISpace)reversiAI.GetBestMove(Board, CurrentPlayer == State.One, searchDepth, cancellationToken); }, cancellationToken)); }
Related topics
- Reversi sample app
- Reversi, a Windows Store game in XAML, C#, and C++
- Use the Model-View-ViewModel (MVVM) pattern
- Learn how the Reversi sample uses Windows Store app features
- Understand the Reversi app structure
- Create your first Windows Store app using C# or Visual Basic
- Roadmap for Windows Runtime apps using C# or Visual Basic
- Data binding
|
https://msdn.microsoft.com/en-us/library/windows/apps/jj712233.aspx
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
Noise: Creating a Synthesizer for Retro Sound Effects - Core Engine
This is the second tutorial in this series.
If you have not already read the first tutorial, you may want to do so before continuing.
Engine Demo
By the end of this tutorial all of the core code required for the audio engine will have been completed. The following is a simple demonstration of the audio engine in action.
Only one sound is being played in that demonstration, but the frequency of the sound is being randomised along with its release time. The sound also has a modulator attached to it to produce the vibrato effect (modulate the sound's amplitude) and the frequency of the modulator is also being randomised.
AudioWaveform Class
The first class that we will create will simply hold constant values for the waveforms that the audio engine will use to generate the audible sounds.
Start by creating a new class package called
noise, and then add the following class to that package:
package noise { public final class AudioWaveform { static public const PULSE:int = 0; static public const SAWTOOTH:int = 1; static public const SINE:int = 2; static public const TRIANGLE:int = 3; } }
We will also add a static public method to the class that can be used to validate a waveform value, the method will return
true or
false to indicate whether or not the waveform value is valid.
static public function isValid( waveform:int ):Boolean {
    if( waveform == PULSE ) return true;
    if( waveform == SAWTOOTH ) return true;
    if( waveform == SINE ) return true;
    if( waveform == TRIANGLE ) return true;
    return false;
}
Finally, we should prevent the class from being instantiated because there is no reason for anyone to create instances of this class. We can do this within the class constructor:
public function AudioWaveform() { throw new Error( "AudioWaveform class cannot be instantiated" ); }
This class is now complete.
Preventing enum-style classes, all-static classes, and singleton classes from being directly instantiated is a good thing to do because these types of class should not be instantiated; there is no reason to instantiate them. Programming languages such as Java do this automatically for most of these class types but currently in ActionScript 3.0 we need to enforce this behaviour manually within the class constructor.
Audio Class
Next on the list is the
Audio class. This class in similar in nature to the native ActionScript 3.0
Sound class: every audio engine sound will be represented by an
Audio class instance.
Add the following barebones class to the
noise package:
package noise { public class Audio { public function Audio() {} } }
The first things that need to be added to the class are properties that will tell the audio engine how to generate the sound wave whenever the sound is played. These properties include the type of waveform used by the sound, the frequency and amplitude of the waveform, the duration of the sound, and its release time (how quickly it fades out). All of these properties will be private and accessed via getters/setters:
private var m_waveform:int = AudioWaveform.PULSE; private var m_frequency:Number = 100.0; private var m_amplitude:Number = 0.5; private var m_duration:Number = 0.2; private var m_release:Number = 0.2;
As you can see, we have set a sensible default value for each property. The
amplitude is a value in the range
0.0 to
1.0, the
frequency is in hertz, and the
duration and
release times are in seconds.
We also need to add two more private properties for the modulators that can be attached to the sound; again these properties will be accessed via getters/setters:
private var m_frequencyModulator:AudioModulator = null; private var m_amplitudeModulator:AudioModulator = null;
Finally, the
Audio class will contain a few internal properties that will only be accessed by the
AudioEngine class (we will create that class shortly). These properties do not need to be hidden behind getters/setters:
internal var position:Number = 0.0; internal var playing:Boolean = false; internal var releasing:Boolean = false; internal var samples:Vector.<Number> = null;
The
position is in seconds and it allows the
AudioEngine class to keep track of the sound's position while the sound is playing, this is needed to calculate the waveform sound samples for the sound. The
playing and
releasing properties tell the
AudioEngine what state the sound is in, and the
samples property is a reference to the cached waveform samples that the sound is using. The use of these properties will become clear when we create the
AudioEngine class.
To finish the
Audio class we need to add the getters/setters:
Audio.waveform
public final function get waveform():int { return m_waveform; } public final function set waveform( value:int ):void { if( AudioWaveform.isValid( value ) == false ) { return; } switch( value ) { case AudioWaveform.PULSE: samples = AudioEngine.PULSE; break; case AudioWaveform.SAWTOOTH: samples = AudioEngine.SAWTOOTH; break; case AudioWaveform.SINE: samples = AudioEngine.SINE; break; case AudioWaveform.TRIANGLE: samples = AudioEngine.TRIANGLE; break; } m_waveform = value; }
Audio.frequency
[Inline] public final function get frequency():Number { return m_frequency; } public final function set frequency( value:Number ):void { // clamp the frequency to the range 1.0 - 14080.0 m_frequency = value < 1.0 ? 1.0 : value > 14080.0 ? 14080.0 : value; }
Audio.amplitude
[Inline] public final function get amplitude():Number { return m_amplitude; } public final function set amplitude( value:Number ):void { // clamp the amplitude to the range 0.0 - 1.0 m_amplitude = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value; }
Audio.duration
[Inline] public final function get duration():Number { return m_duration; } public final function set duration( value:Number ):void { // clamp the duration to the range 0.0 - 60.0 m_duration = value < 0.0 ? 0.0 : value > 60.0 ? 60.0 : value; }
Audio.release
[Inline] public final function get release():Number { return m_release; } public function set release( value:Number ):void { // clamp the release time to the range 0.0 - 10.0 m_release = value < 0.0 ? 0.0 : value > 10.0 ? 10.0 : value; }
Audio.frequencyModulator
[Inline] public final function get frequencyModulator():AudioModulator { return m_frequencyModulator; } public final function set frequencyModulator( value:AudioModulator ):void { m_frequencyModulator = value; }
Audio.amplitudeModulator
[Inline] public final function get amplitudeModulator():AudioModulator { return m_amplitudeModulator; } public final function set amplitudeModulator( value:AudioModulator ):void { m_amplitudeModulator = value; }
You no doubt noticed the
[Inline] metadata tag bound to a few of the getter functions. That metadata tag is a shiny new feature of Adobe's latest ActionScript 3.0 Compiler and it does what it says on the tin: it inlines (expands) the contents of a function. This is extremely useful for optimisation when used sensibly, and generating dynamic audio at runtime is certainly something that requires optimisation.
AudioModulator Class
The purpose of the
AudioModulator is to allow the amplitude and frequency of
Audio instances to be modulated to create useful and crazy sound effects. Modulators are actually similar to
Audio instances, they have a waveform, an amplitude, and frequency, but they don't actually produce any audible sound they only modify audible sounds.
First thing first, create the following barebones class in the
noise package:
package noise { public class AudioModulator { public function AudioModulator() {} } }
Now let's add the private properties:
private var m_waveform:int = AudioWaveform.SINE; private var m_frequency:Number = 4.0; private var m_amplitude:Number = 1.0; private var m_shift:Number = 0.0; private var m_samples:Vector.<Number> = null;
If you are thinking this looks very similar to the
Audio class then you are correct: everything except for the
shift property is the same.
To understand what the
shift property does, think of one of the basic waveforms that the audio engine is using (pulse, sawtooth, sine, or triangle) and then imagine a vertical line running straight through the waveform at any position you like. The horizontal position of that vertical line would be the
shift value; it's a value in the range
0.0 to
1.0 that tells the modulator where to begin reading its waveform from, and in turn it can have a profound effect on the modifications the modulator makes to a sound's amplitude or frequency.
As an example, if the modulator was using a sine waveform to modulate the frequency of a sound, and the
shift was set at
0.0, the sound's frequency would first rise and then fall due to the curvature of the sine wave. However, if the
shift was set at
0.5 the sound's frequency would first fall and then rise.
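As a concrete illustration (the values below are my own, not from the original tutorial), here are two modulators that are identical except for their shift values. Once the getters/setters further down are in place, attaching the first one to a sound's frequencyModulator slot would bend the pitch up and then down, while the second would do the opposite.

var wobbleUpFirst:AudioModulator = new AudioModulator();
wobbleUpFirst.waveform  = AudioWaveform.SINE;
wobbleUpFirst.frequency = 4.0;    // four full wobbles per second
wobbleUpFirst.amplitude = 20.0;   // +/- 20 Hz around the sound's base frequency
wobbleUpFirst.shift     = 0.0;    // start at the rising part of the sine wave

var wobbleDownFirst:AudioModulator = new AudioModulator();
wobbleDownFirst.waveform  = AudioWaveform.SINE;
wobbleDownFirst.frequency = 4.0;
wobbleDownFirst.amplitude = 20.0;
wobbleDownFirst.shift     = 0.5;  // start half-way through, so it falls first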
Anyway, back to the code. The
AudioModulator contains one internal method that is only used by the
AudioEngine; the method is as follows:
[Inline]
internal final function process( time:Number ):Number
{
    var p:int = 0;
    var s:Number = 0.0;

    if( m_shift != 0.0 )
    {
        time += ( 1.0 / m_frequency ) * m_shift;
    }

    p = ( 44100 * m_frequency * time ) % 44100;
    s = m_samples[p];

    return s * m_amplitude;
}
That function is inlined because it is used a lot, and when I say "a lot" I mean 44100 times a second for each sound that is playing that has a modulator attached to it (this is where inlining becomes incredibly valuable). The function simply grabs a sound sample from the waveform the modulator is using, adjusts that sample's amplitude, and then returns the result.
To finish the
AudioModulator class we need to add the getters/setters:
AudioModulator.waveform
public function get waveform():int { return m_waveform; } public function set waveform( value:int ):void { if( AudioWaveform.isValid( value ) == false ) { return; } switch( value ) { case AudioWaveform.PULSE: m_samples = AudioEngine.PULSE; break; case AudioWaveform.SAWTOOTH: m_samples = AudioEngine.SAWTOOTH; break; case AudioWaveform.SINE: m_samples = AudioEngine.SINE; break; case AudioWaveform.TRIANGLE: m_samples = AudioEngine.TRIANGLE; break; } m_waveform = value; }
AudioModulator.frequency
public function get frequency():Number { return m_frequency; } public function set frequency( value:Number ):void { // clamp the frequency to the range 0.01 - 100.0 m_frequency = value < 0.01 ? 0.01 : value > 100.0 ? 100.0 : value; }
AudioModulator.amplitude
public function get amplitude():Number { return m_amplitude; } public function set amplitude( value:Number ):void { // clamp the amplitude to the range 0.0 - 8000.0 m_amplitude = value < 0.0 ? 0.0 : value > 8000.0 ? 8000.0 : value; }
AudioModulator.shift
public function get shift():Number { return m_shift; } public function set shift( value:Number ):void { // clamp the shift to the range 0.0 - 1.0 m_shift = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value; }
And that wraps up the
AudioModulator class.
AudioEngine Class
Now for the big one: the
AudioEngine class. This is an all-static class and manages pretty much everything related to
Audio instances and sound generation.
Let's start with a barebones class in the
noise package as usual:
package noise
{
    import flash.events.SampleDataEvent;
    import flash.media.Sound;
    import flash.media.SoundChannel;
    import flash.utils.ByteArray;
    //
    public final class AudioEngine
    {
        public function AudioEngine()
        {
            throw new Error( "AudioEngine class cannot be instantiated" );
        }
    }
}
As mentioned before, all-static classes should not be instantiated, hence the exception that is thrown in the class constructor if someone does try to instantiate the class. The class is also
final because there's no reason to extend an all-static class.
The first things that will be added to this class are internal constants. These constants will be used to cache the samples for each of the four waveforms that the audio engine is using. Each cache contains 44,100 samples, which equates to one full cycle of a 1 Hz waveform at the 44.1 kHz sample rate. This allows the audio engine to produce really clean low-frequency sound waves.
The constants are as follows:
static internal const PULSE:Vector.<Number> = new Vector.<Number>( 44100 );
static internal const SAWTOOTH:Vector.<Number> = new Vector.<Number>( 44100 );
static internal const SINE:Vector.<Number> = new Vector.<Number>( 44100 );
static internal const TRIANGLE:Vector.<Number> = new Vector.<Number>( 44100 );
There are also two private constants used by the class:
static private const BUFFER_SIZE:int = 2048;
static private const SAMPLE_TIME:Number = 1.0 / 44100.0;
The
BUFFER_SIZE is the number of sound samples that will be passed to the ActionScript 3.0 sound API whenever a request for sound samples is made. This is the smallest number of samples allowed and it results in the lowest possible sound latency. The number of samples could be increased to reduce CPU usage but that would increase the sound latency. The
SAMPLE_TIME is the duration of a single sound sample, in seconds.
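As a quick sanity check on that latency claim, here is the arithmetic (my numbers, not part of the original code):

// 2048 samples per buffer / 44100 samples per second ≈ 0.046 s,
// i.e. roughly 46 ms of audio per sample-data request.
var bufferSeconds:Number = 2048 / 44100.0;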
And now for the private variables:
static private var m_position:Number = 0.0;
static private var m_amplitude:Number = 0.5;
static private var m_soundStream:Sound = null;
static private var m_soundChannel:SoundChannel = null;
static private var m_audioList:Vector.<Audio> = new Vector.<Audio>();
static private var m_sampleList:Vector.<Number> = new Vector.<Number>( BUFFER_SIZE );
- The m_position is used to keep track of the sound stream time, in seconds.
- The m_amplitude is a global secondary amplitude for all of the Audio instances that are playing.
- The m_soundStream and m_soundChannel shouldn't need any explanation.
- The m_audioList contains references to any Audio instances that are playing.
- The m_sampleList is a temporary buffer used to store sound samples when they are requested by the ActionScript 3.0 sound API.
Now, we need to initialize the class. There are numerous ways of doing this but I prefer something nice and simple, a static class constructor:
static private function $AudioEngine():void
{
    var i:int = 0;
    var n:int = 44100;
    var p:Number = 0.0;
    //
    while( i < n )
    {
        p = i / n;
        SINE[i] = Math.sin( Math.PI * 2.0 * p );
        PULSE[i] = p < 0.5 ? 1.0 : -1.0;
        SAWTOOTH[i] = p < 0.5 ? p * 2.0 : p * 2.0 - 2.0;
        TRIANGLE[i] = p < 0.25 ? p * 4.0 : p < 0.75 ? 2.0 - p * 4.0 : p * 4.0 - 4.0;
        i++;
    }
    //
    m_soundStream = new Sound();
    m_soundStream.addEventListener( SampleDataEvent.SAMPLE_DATA, onSampleData );
    m_soundChannel = m_soundStream.play();
}
$AudioEngine();
If you have read the previous tutorial in this series then you will probably see what's happening in that code: the samples for each of the four waveforms are being generated and cached, and this only happens once. The sound stream is also being instantiated and started and will run continuously until the app is terminated.
The
AudioEngine class has three public methods that are used to play and stop
Audio instances:
AudioEngine.play()
static public function play( audio:Audio ):void { if( audio.playing == false ) { m_audioList.push( audio ); } // this allows us to know exactly when the sound was started audio.position = m_position - ( m_soundChannel.position * 0.001 ); audio.playing = true; audio.releasing = false; }
AudioEngine.stop()
static public function stop( audio:Audio, allowRelease:Boolean = true ):void { if( audio.playing == false ) { // the sound isn't playing return; } if( allowRelease ) { // skip to the end of the sound and flag it as releasing audio.position = audio.duration; audio.releasing = true; return; } audio.playing = false; audio.releasing = false; }
AudioEngine.stopAll()
static public function stopAll( allowRelease:Boolean = true ):void { var i:int = 0; var n:int = m_audioList.length; var o:Audio = null; // if( allowRelease ) { while( i < n ) { o = m_audioList[i]; o.position = o.duration; o.releasing = true; i++; } return; } while( i < n ) { o = m_audioList[i]; o.playing = false; o.releasing = false; i++; } }
And here come the main audio processing methods, both of which are private:
AudioEngine.onSampleData()
static private function onSampleData( event:SampleDataEvent ):void { var i:int = 0; var n:int = BUFFER_SIZE; var s:Number = 0.0; var b:ByteArray = event.data; // if( m_soundChannel == null ) { while( i < n ) { b.writeFloat( 0.0 ); b.writeFloat( 0.0 ); i++; } return; } // generateSamples(); // while( i < n ) { s = m_sampleList[i] * m_amplitude; b.writeFloat( s ); b.writeFloat( s ); m_sampleList[i] = 0.0; i++; } // m_position = m_soundChannel.position * 0.001; }
So, in the first
if statement we are checking if the
m_soundChannel is still null, and we need to do that because the
SAMPLE_DATA event is dispatched as soon as the
m_soundStream.play() method is invoked, and before the method gets a chance to return a
SoundChannel instance.
The
while loop rolls through the sound samples that have been requested by
m_soundStream and writes them to the provided
ByteArray instance. The sound samples are generated by the following method:
AudioEngine.generateSamples()
static private function generateSamples():void { var i:int = 0; var n:int = m_audioList.length; var j:int = 0; var k:int = BUFFER_SIZE; var p:int = 0; var f:Number = 0.0; var a:Number = 0.0; var s:Number = 0.0; var o:Audio = null; // roll through the audio instances while( i < n ) { o = m_audioList[i]; // if( o.playing == false ) { // the audio instance has stopped completely m_audioList.splice( i, 1 ); n--; continue; } // j = 0; // generate and buffer the sound samples while( j < k ) { if( o.position < 0.0 ) { // the audio instance hasn't started playing yet o.position += SAMPLE_TIME; j++; continue; } if( o.position >= o.duration ) { if( o.position >= o.duration + o.release ) { // the audio instance has stopped o.playing = false; j++; continue; } // the audio instance is releasing o.releasing = true; } // grab the audio instance's frequency and amplitude f = o.frequency; a = o.amplitude; // if( o.frequencyModulator != null ) { // modulate the frequency f += o.frequencyModulator.process( o.position ); } // if( o.amplitudeModulator != null ) { // modulate the amplitude a += o.amplitudeModulator.process( o.position ); } // calculate the position within the waveform cache p = ( 44100 * f * o.position ) % 44100; // grab the waveform sample s = o.samples[p]; // if( o.releasing ) { // calculate the fade-out amplitude for the sample s *= 1.0 - ( ( o.position - o.duration ) / o.release ); } // add the sample to the buffer m_sampleList[j] += s * a; // update the audio instance's position o.position += SAMPLE_TIME; j++; } i++; } }
Finally, to finish things off, we need to add the getter/setter for the private
m_amplitude variable:
static public function get amplitude():Number
{
    return m_amplitude;
}
static public function set amplitude( value:Number ):void
{
    // clamp the amplitude to the range 0.0 - 1.0
    m_amplitude = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value;
}
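To see how the three classes fit together, here is a small usage sketch. The values are arbitrary examples of my own; only the classes and properties defined above are assumed.

// Example values only; any waveform/frequency/duration combination works.
var beep:Audio = new Audio();
beep.waveform  = AudioWaveform.TRIANGLE;
beep.frequency = 440.0;   // concert A
beep.amplitude = 0.6;
beep.duration  = 0.25;    // a quarter of a second
beep.release   = 0.1;     // short fade-out tail

var tremolo:AudioModulator = new AudioModulator();
tremolo.waveform  = AudioWaveform.SINE;
tremolo.frequency = 8.0;  // eight pulses per second
tremolo.amplitude = 0.3;
beep.amplitudeModulator = tremolo;

AudioEngine.play( beep );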
And now I need a break!
Coming Up...
In the third and final tutorial in the series we will be adding audio processors to the audio engine. These will allow us to push all of the generated sound samples through processing units such as hard limiters and delays. We will also be taking a look at all of the code to see if anything can be optimised.
All of the source code for this tutorial series will be made available with the next tutorial.
http://gamedevelopment.tutsplus.com/tutorials/noise-creating-a-synthesizer-for-retro-sound-effects-core-engine--gamedev-1536
One of the interesting side-effects of installing ASP.NET MVC 3 is the appearance of Microsoft.Web.Infrastructure in the GAC. Inside the assembly is a DynamicModuleUtility class that will let you do the following:
using System;
using System.Web;
using Microsoft.Web.Infrastructure.DynamicModuleHelper;

[assembly: PreApplicationStartMethod(typeof(MyAppStart), "Start")]

public class CoolModule : IHttpModule
{
    // implementation not important
    // imagine something cool here
}

public static class MyAppStart
{
    public static void Start()
    {
        DynamicModuleUtility.RegisterModule(typeof(CoolModule));
    }
}
The significant line of code is the line with RegisterModule. The DynamicModuleUtility will let you install an HTTP module into the ASP.NET pipeline without making any changes to the web.config file. Registration must occur during the pre-application start phase, so you'll probably mix dynamic modules with WebActivator for maximum flexibility. The ability to dynamically register modules opens up some interesting options for plugins and infrastructure libraries.
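The module body above is left empty on purpose. Purely for illustration, here is one way it could be filled in, using nothing beyond the standard IHttpModule members; the header name is made up.

using System;
using System.Web;

public class CoolModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Stamp every response so we can see the dynamically registered module at work.
        context.BeginRequest += (sender, e) =>
        {
            var app = (HttpApplication)sender;
            app.Context.Response.AppendHeader("X-Cool-Module", "hello");
        };
    }

    public void Dispose() { }
}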
I can't find the Microsoft.Web.Infrastructure assembly anywhere. It isn't in the GAC or the Program Files\Microsoft ASP.NET\ASP.NET MVC 3 folder, and it isn't in the source files at CodePlex!
Thanks,
RP
Thanks for your quick reply! Indeed, they are installed with MVC 3, I just hadn't noticed this folder. Thanks for the post and the information!:-)
RP
Do you know if it's possible to use this type of implementation to load assemblies into a new AppDomain without restarting the application?
K
You could use stuff inside of System.AddIn to do that.
PS : great courses you have at Plural Sight. The only thing missing is how to go from VS 2010 dev environment to production environment concerning Entity Framework Code First.
That should be available soon (a deployment module).
http://odetocode.com/blogs/scott/archive/2011/02/28/dynamicmoduleutility.aspx
1.1 ! jdf 1: # The Domain Name System ! 2: ! 3: Use of the Domain Name System has been discussed in previous chapters, without ! 4: going into detail on the setup of the server providing the service. This chapter ! 5: describes setting up a simple, small domain with one Domain Name System (DNS) ! 6: nameserver on a NetBSD system. It includes a brief explanation and overview of ! 7: the DNS; further information can be obtained from the DNS Resources Directory ! 8: (DNSRD) at [](). ! 9: ! 10: ## DNS Background and Concepts ! 11: ! 12: The DNS is a widely used *naming service* on the Internet and other TCP/IP ! 13: networks. The network protocols, data and file formats, and other aspects of the ! 14: DNS are Internet Standards, specified in a number of RFC documents, and ! 15: described by a number of other reference and tutorial works. The DNS has a ! 16: distributed, client-server architecture. There are reference implementations for ! 17: the server and client, but these are not part of the standard. There are a ! 18: number of additional implementations available for many platforms. ! 19: ! 20: ### Naming Services ! 21: ! 22: Naming services are used to provide a mapping between textual names and ! 23: configuration data of some form. A *nameserver* maintains this mapping, and ! 24: clients request the nameserver to *resolve* a name into its attached data. ! 25: ! 26: The reader should have a good understanding of basic hosts to IP address mapping ! 27: and IP address class specifications, see ! 28: [[Name Service Concepts|guide/net-intro#nsconcepts]]. ! 29: ! 30: In the case of the DNS, the configuration data bound to a name is in the form of ! 31: standard *Resource Records* (RRs). These textual names conform to certain ! 32: structural conventions. ! 33: ! 34: ### The DNS namespace ! 35: ! 36: The DNS presents a hierarchical name space, much like a UNIX filesystem, ! 37: pictured as an inverted tree with the *root* at the top. ! 38: ! 39: TOP-LEVEL .org ! 40: | ! 41: MID-LEVEL .diverge.org ! 42: ______________________|________________________ ! 43: | | | ! 44: BOTTOM-LEVEL strider.diverge.org samwise.diverge.org wormtongue.diverge.org ! 45: ! 46: The system can also be logically divided even further if one wishes at different ! 47: points. The example shown above shows three nodes on the diverge.org domain, but ! 48: we could even divide diverge.org into subdomains such as ! 49: "strider.net1.diverge.org", "samwise.net2.diverge.org" and ! 50: "wormtongue.net2.diverge.org"; in this case, 2 nodes reside in ! 51: "net2.diverge.org" and one in "net1.diverge.org". ! 52: ! 53: There are directories of names, some of which may be sub-directories of further ! 54: names. These directories are sometimes called *zones*. There is provision for ! 55: symbolic links, redirecting requests for information on one name to the records ! 56: bound to another name. Each name recognised by the DNS is called a *Domain ! 57: Name*, whether it represents information about a specific host, or a directory ! 58: of subordinate Domain Names (or both, or something else). ! 59: ! 60: Unlike most filesystem naming schemes, however, Domain Names are written with ! 61: the innermost name on the left, and progressively higher-level domains to the ! 62: right, all the way up to the root directory if necessary. The separator used ! 63: when writing Domain Names is a period, ".". ! 64: ! 65: Like filesystem pathnames, Domain Names can be written in an absolute or ! 
66: relative manner, though there are some differences in detail. For instance, ! 67: there is no way to indirectly refer to the parent domain like with the UNIX `..` ! 68: directory. Many (but not all) resolvers offer a search path facility, so that ! 69: partially-specified names can be resolved relative to additional listed ! 70: sub-domains other than the client's own domain. Names that are completely ! 71: specified all the way to the root are called *Fully Qualified Domain Names* or ! 72: *FQDN*s. A defining characteristic of an FQDN is that it is written with a ! 73: terminating period. The same name, without the terminating period, may be ! 74: considered relative to some other sub-domain. It is rare for this to occur ! 75: without malicious intent, but in part because of this possibility, FQDNs are ! 76: required as configuration parameters in some circumstances. ! 77: ! 78: On the Internet, there are some established conventions for the names of the ! 79: first few levels of the tree, at which point the hierarchy reaches the level of ! 80: an individual organisation. This organisation is responsible for establishing ! 81: and maintaining conventions further down the tree, within its own domain. ! 82: ! 83: ### Resource Records ! 84: ! 85: Resource Records for a domain are stored in a standardised format in an ASCII ! 86: text file, often called a *zone file*. The following Resource Records are ! 87: commonly used (a number of others are defined but not often used, or no longer ! 88: used). In some cases, there may be multiple RR types associated with a name, and ! 89: even multiple records of the same type. ! 90: ! 91: #### Common DNS Resource Records ! 92: ! 93: * *A: Address* -- This record contains the numerical IP address associated with ! 94: the name. ! 95: ! 96: * *CNAME: Canonical Name* -- This record contains the Canonical Name (an FQDN ! 97: with an associated A record) of the host name to which this record is bound. ! 98: This record type is used to provide name aliasing, by providing a link to ! 99: another name with which other appropriate RR's are associated. If a name has ! 100: a CNAME record bound to it, it is an alias, and no other RR's are permitted ! 101: to be bound to the same name. ! 102: ! 103: It is common for these records to be used to point to hosts providing a ! 104: particular service, such as an FTP or HTTP server. If the service must be ! 105: moved to another host, the alias can be changed, and the same name will reach ! 106: the new host. ! 107: ! 108: * *PTR: Pointer* -- This record contains a textual name. These records are ! 109: bound to names built in a special way from numerical IP addresses, and are ! 110: used to provide a reverse mapping from an IP address to a textual name. This ! 111: is described in more detail in [[Reverse Resolution|guide/dns#bg-reverse]]. ! 112: ! 113: * *NS: Name Server* -- This record type is used to *delegate* a sub-tree of the ! 114: Domain Name space to another nameserver. The record contains the FQDN of a ! 115: DNS nameserver with information on the sub-domain, and is bound to the name ! 116: of the sub-domain. In this manner, the hierarchical structure of the DNS is ! 117: established. Delegation is described in more detail in ! 118: [[Delegation|guide/dns#bg-delegation]]. ! 119: ! 120: * *MX: Mail eXchange* -- This record contains the FQDN for a host that will ! 121: accept SMTP electronic mail for the named domain, together with a priority ! 122: value used to select an MX host when relaying mail. 
It is used to indicate ! 123: other servers that are willing to receive and spool mail for the domain if ! 124: the primary MX is unreachable for a time. It is also used to direct email to ! 125: a central server, if desired, rather than to each and every individual ! 126: workstation. ! 127: ! 128: * *HINFO: Host Information* -- Contains two strings, intended for use to ! 129: describe the host hardware and operating system platform. There are defined ! 130: strings to use for some systems, but their use is not enforced. Some sites, ! 131: because of security considerations, do not publicise this information. ! 132: ! 133: * *TXT: Text* -- A free-form text field, sometimes used as a comment field, ! 134: sometimes overlaid with site-specific additional meaning to be interpreted by ! 135: local conventions. ! 136: ! 137: * *SOA: Start of Authority* -- This record is required to appear for each zone ! 138: file. It lists the primary nameserver and the email address of the person ! 139: responsible for the domain, together with default values for a number of ! 140: fields associated with maintaining consistency across multiple servers and ! 141: caching of the results of DNS queries. ! 142: ! 143: ### Delegation ! 144: ! 145: Using NS records, authority for portions of the DNS namespace below a certain ! 146: point in the tree can be delegated, and further sub-parts below that delegated ! 147: again. It is at this point that the distinction between a domain and a zone ! 148: becomes important. Any name in the DNS is called a domain, and the term applies ! 149: to that name and to any subordinate names below that one in the tree. The ! 150: boundaries of a zone are narrower, and are defined by delegations. A zone starts ! 151: with a delegation (or at the root), and encompasses all names in the domain ! 152: below that point, excluding names below any subsequent delegations. ! 153: ! 154: This distinction is important for implementation - a zone is a single ! 155: administrative entity (with a single SOA record), and all data for the zone is ! 156: referred to by a single file, called a *zone file*. A zone file may contain more ! 157: than one period-separated level of the namespace tree, if desired, by including ! 158: periods in the names in that zone file. In order to simplify administration and ! 159: prevent overly-large zone files, it is quite legal for a DNS server to delegate ! 160: to itself, splitting the domain into several zones kept on the same server. ! 161: ! 162: ### Delegation to multiple servers ! 163: ! 164: For redundancy, it is common (and often administratively required) that there be ! 165: more than one nameserver providing information on a zone. It is also common that ! 166: at least one of these servers be located at some distance (in terms of network ! 167: topology) from the others, so that knowledge of that zone does not become ! 168: unavailable in case of connectivity failure. Each nameserver will be listed in ! 169: an NS record bound to the name of the zone, stored in the parent zone on the ! 170: server responsible for the parent domain. In this way, those searching the name ! 171: hierarchy from the top down can contact any one of the servers to continue ! 172: narrowing their search. This is occasionally called *walking the tree*. ! 173: ! 174: There are a number of nameservers on the Internet which are called *root ! 175: nameservers*. These servers provide information on the very top levels of the ! 176: domain namespace tree. 
These servers are special in that their addresses must be ! 177: pre-configured into nameservers as a place to start finding other servers. ! 178: Isolated networks that cannot access these servers may need to provide their own ! 179: root nameservers. ! 180: ! 181: ### Secondaries, Caching, and the SOA record ! 182: ! 183: In order to maintain consistency between these servers, one is usually ! 184: configured as the *primary* server, and all administrative changes are made on ! 185: this server. The other servers are configured as *secondaries*, and transfer the ! 186: contents of the zone from the primary. This operational model is not required, ! 187: and if external considerations require it, multiple primaries can be used ! 188: instead, but consistency must then be maintained by other means. DNS servers ! 189: that store Resource Records for a zone, whether they be primary or secondary ! 190: servers, are said to be *authoritative* for the zone. A DNS server can be ! 191: authoritative for several zones. ! 192: ! 193: When nameservers receive responses to queries, they can *cache* the results. ! 194: This has a significant beneficial impact on the speed of queries, the query load ! 195: on high-level nameservers, and network utilisation. It is also a major ! 196: contributor to the memory usage of the nameserver process. ! 197: ! 198: There are a number of parameters that are important to maintaining consistency ! 199: amongst the secondaries and caches. The values for these parameters for a ! 200: particular domain zone file are stored in the SOA record. These fields are: ! 201: ! 202: #### Fields of the SOA Record ! 203: ! 204: * *Serial* -- A serial number for the zone file. This should be incremented any ! 205: time the data in the domain is changed. When a secondary wants to check if ! 206: its data is up-to-date, it checks the serial number on the primary's SOA ! 207: record. ! 208: ! 209: * *Refresh* -- A time, in seconds, specifying how often the secondary should ! 210: check the serial number on the primary, and start a new transfer if the ! 211: primary has newer data. ! 212: ! 213: * *Retry* -- If a secondary fails to connect to the primary when the refresh ! 214: time has elapsed (for example, if the host is down), this value specifies, in ! 215: seconds, how often the connection should be retried. ! 216: ! 217: * *Expire* -- If the retries fail to reach the primary within this number of ! 218: seconds, the secondary destroys its copies of the zone data file(s), and ! 219: stops answering requests for the domain. This stops very old and potentially ! 220: inaccurate data from remaining in circulation. ! 221: ! 222: * *TTL* -- This field specifies a time, in seconds, that the resource records ! 223: in this zone should remain valid in the caches of other nameservers. If the ! 224: data is volatile, this value should be short. TTL is a commonly-used acronym, ! 225: that stands for "Time To Live". ! 226: ! 227: ### Name Resolution ! 228: ! 229: DNS clients are configured with the addresses of DNS servers. Usually, these are ! 230: servers which are authoritative for the domain of which they are a member. All ! 231: requests for name resolution start with a request to one of these local servers. ! 232: DNS queries can be of two forms: ! 233: ! 234: * A *recursive* query asks the nameserver to resolve a name completely, and ! 235: return the result. If the request cannot be satisfied directly, the ! 
236: nameserver looks in its configuration and caches for a server higher up the ! 237: domain tree which may have more information. In the worst case, this will be ! 238: a list of pre-configured servers for the root domain. These addresses are ! 239: returned in a response called a *referral*. The local nameserver must then ! 240: send its request to one of these servers. ! 241: ! 242: * Normally, this will be an *iterative* query, which asks the second nameserver ! 243: to either respond with an authoritative reply, or with the addresses of ! 244: nameservers (NS records) listed in its tables or caches as authoritative for ! 245: the relevant zone. The local nameserver then makes iterative queries, walking ! 246: the tree downwards until an authoritative answer is found (either positive or ! 247: negative) and returned to the client. ! 248: ! 249: In some configurations, such as when firewalls prevent direct IP communications ! 250: between DNS clients and external nameservers, or when a site is connected to the ! 251: rest of the world via a slow link, a nameserver can be configured with ! 252: information about a *forwarder*. This is an external nameserver to which the ! 253: local nameserver should make requests as a client would, asking the external ! 254: nameserver to perform the full recursive name lookup, and return the result in a ! 255: single query (which can then be cached), rather than reply with referrals. ! 256: ! 257: ### Reverse Resolution ! 258: ! 259: The DNS provides resolution from a textual name to a resource record, such as an ! 260: A record with an IP address. It does not provide a means, other than exhaustive ! 261: search, to match in the opposite direction; there is no mechanism to ask which ! 262: name is bound to a particular RR. ! 263: ! 264: For many RR types, this is of no real consequence, however it is often useful to ! 265: identify by name the host which owns a particular IP address. Rather than ! 266: complicate the design and implementation of the DNS database engine by providing ! 267: matching functions in both directions, the DNS utilises the existing mechanisms ! 268: and creates a special namespace, populated with PTR records, for IP address to ! 269: name resolution. Resolving in this manner is often called *reverse resolution*, ! 270: despite the inaccurate implications of the term. ! 271: ! 272: The manner in which this is achieved is as follows: ! 273: ! 274: * A normal domain name is reserved and defined to be for the purpose of mapping ! 275: IP addresses. The domain name used is `in-addr.arpa.` which shows the ! 276: historical origins of the Internet in the US Government's Defence Advanced ! 277: Research Projects Agency's funding program. ! 278: ! 279: * This domain is then subdivided and delegated according to the structure of IP ! 280: addresses. IP addresses are often written in *decimal dotted quad notation*, ! 281: where each octet of the 4-octet long address is written in decimal, separated ! 282: by dots. IP address ranges are usually delegated with more and more of the ! 283: left-most parts of the address in common as the delegation gets smaller. ! 284: Thus, to allow delegation of the reverse lookup domain to be done easily, ! 285: this is turned around when used with the hierarchical DNS namespace, which ! 286: places higher level domains on the right of the name. ! 287: ! 288: * Each byte of the IP address is written, as an ASCII text representation of ! 
289: the number expressed in decimal, with the octets in reverse order, separated ! 290: by dots and appended with the in-addr.arpa. domain name. For example, to ! 291: determine the hostname of a network device with IP address 11.22.33.44, this ! 292: algorithm would produce the string `44.33.22.11.in-addr.arpa.` which is a ! 293: legal, structured Domain Name. A normal nameservice query would then be sent ! 294: to the nameserver asking for a PTR record bound to the generated name. ! 295: ! 296: * The PTR record, if found, will contain the FQDN of a host. ! 297: ! 298: One consequence of this is that it is possible for mismatch to occur. Resolving ! 299: a name into an A record, and then resolving the name built from the address in ! 300: that A record to a PTR record, may not result in a PTR record which contains the ! 301: original name. There is no restriction within the DNS that the "reverse" mapping ! 302: must coincide with the "forward" mapping. This is a useful feature in some ! 303: circumstances, particularly when it is required that more than one name has an A ! 304: record bound to it which contains the same IP address. ! 305: ! 306: While there is no such restriction within the DNS, some application server ! 307: programs or network libraries will reject connections from hosts that do not ! 308: satisfy the following test: ! 309: ! 310: * the state information included with an incoming connection includes the IP ! 311: address of the source of the request. ! 312: ! 313: * a PTR lookup is done to obtain an FQDN of the host making the connection ! 314: ! 315: * an A lookup is then done on the returned name, and the connection rejected if ! 316: the source IP address is not listed amongst the A records that get returned. ! 317: ! 318: This is done as a security precaution, to help detect and prevent malicious ! 319: sites impersonating other sites by configuring their own PTR records to return ! 320: the names of hosts belonging to another organisation. ! 321: ! 322: ## The DNS Files ! 323: ! 324: Now let's look at actually setting up a small DNS enabled network. We will ! 325: continue to use the examples mentioned in [Chapter 24, *Setting up TCP/IP on ! 326: NetBSD in practice*](chap-net-practice.html "Chapter 24. Setting up TCP/IP on ! 327: NetBSD in practice"), i.e. we assume that: ! 328: ! 329: * Our IP networking is working correctly ! 330: * We have IPNAT working correctly ! 331: * Currently all hosts use the ISP for DNS ! 332: ! 333: Our Name Server will be the `strider` host which also runs IPNAT, and our two ! 334: clients use "strider" as a gateway. It is not really relevant as to what type of ! 335: interface is on "strider", but for argument's sake we will say a 56k dial up ! 336: connection. ! 337: ! 338: So, before going any further, let's look at our `/etc/hosts` file on "strider" ! 339: before we have made the alterations to use DNS. ! 340: ! 341: **Example strider's `/etc/hosts` file** ! 342: ! 343: 127.0.0.1 localhost ! 344: 192.168.1.1 strider ! 345: 192.168.1.2 samwise sam ! 346: 192.168.1.3 wormtongue worm ! 347: ! 348: This is not exactly a huge network, but it is worth noting that the same rules ! 349: apply for larger networks as we discuss in the context of this section. ! 350: ! 351: The other assumption we want to make is that the domain we want to set up is ! 352: `diverge.org`, and that the domain is only known on our internal network, and ! 353: not worldwide. Proper registration of the nameserver's IP address as primary ! 
354: would be needed in addition to a static IP. These are mostly administrative ! 355: issues which are left out here. ! 356: ! 357: The NetBSD operating system provides a set of config files for you to use for ! 358: setting up DNS. Along with a default `/etc/named.conf`, the following files are ! 359: stored in the `/etc/namedb` directory: ! 360: ! 361: * `localhost` ! 362: * `127` ! 363: * `loopback.v6` ! 364: * `root.cache` ! 365: ! 366: You will see modified versions of these files in this section, and I strongly ! 367: suggest making a backup copy of the original files for reference purposes. ! 368: ! 369: *Note*: The examples in this chapter refer to BIND major version 8, however, it ! 370: should be noted that format of the name database and other config files are ! 371: almost 100% compatible between version. The only difference I noticed was that ! 372: the `$TTL` information was not required. ! 373: ! 374: ### /etc/named.conf ! 375: ! 376: The first file we want to look at is `/etc/named.conf`. This file is the config ! 377: file for bind (hence the catchy name). Setting up system like the one we are ! 378: doing is relatively simple. First, here is what mine looks like: ! 379: ! 380: options { ! 381: directory "/etc/namedb"; ! 382: allow-transfer { 192.168.1.0/24; }; ! 383: allow-query { 192.168.1.0/24; }; ! 384: listen-on port 53 { 192.168.1.1; }; ! 385: }; ! 386: ! 387: zone "localhost" { ! 388: type master; ! 389: notify no; ! 390: file "localhost"; ! 391: }; ! 392: ! 393: zone "127.IN-ADDR.ARPA" { ! 394: type master; ! 395: notify no; ! 396: file "127"; ! 397: }; ! 398: ! 399: zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.int" { ! 400: type master; ! 401: file "loopback.v6"; ! 402: }; ! 403: ! 404: zone "diverge.org" { ! 405: type master; ! 406: notify no; ! 407: file "diverge.org"; ! 408: }; ! 409: ! 410: zone "1.168.192.in-addr.arpa" { ! 411: type master; ! 412: notify no; ! 413: file "1.168.192"; ! 414: }; ! 415: ! 416: zone "." in { ! 417: type hint; ! 418: file "root.cache"; ! 419: }; ! 420: ! 421: Note that in my `named.conf` the root (".") section is last, that is because ! 422: there is another domain called diverge.org on the internet (I happen to own it) ! 423: so I want the resolver to look out on the internet last. This is not normally ! 424: the case on most systems. ! 425: ! 426: Another very important thing to remember here is that if you have an internal ! 427: setup, in other words no live internet connection and/or no need to do root ! 428: server lookups, comment out the root (".") zone. It may cause lookup problems if ! 429: a particular client decides it wants to reference a domain on the internet, ! 430: which our server couldn't resolve itself. ! 431: ! 432: Looks like a pretty big mess, upon closer examination it is revealed that many ! 433: of the lines in each section are somewhat redundant. So we should only have to ! 434: explain them a few times. ! 435: ! 436: Lets go through the sections of `named.conf`: ! 437: ! 438: #### options ! 439: ! 440: This section defines some global parameters, most noticeable is the location of ! 441: the DNS tables, on this particular system, they will be put in `/etc/namedb` as ! 442: indicated by the "directory" option. ! 443: ! 444: Following are the rest of the params: ! 445: ! 446: * `allow-transfer` -- This option lists which remote DNS servers acting as ! 447: secondaries are allowed to do zone transfers, i.e. are allowed to read all ! 448: DNS data at once. 
For privacy reasons, this should be restricted to secondary ! 449: DNS servers only. ! 450: ! 451: * `allow-query` -- This option defines hosts from what network may query this ! 452: name server at all. Restricting queries only to the local network ! 453: (192.168.1.0/24) prevents queries arriving on the DNS server's external ! 454: interface, and prevent possible privacy issues. ! 455: ! 456: * `listen-on port` -- This option defined the port and associated IP addresses ! 457: this server will run ! 458: [named(8)]() ! 459: on. Again, the "external" interface is not listened here, to prevent queries ! 460: getting received from "outside". ! 461: ! 462: The rest of the `named.conf` file consists of `zone`s. A zone is an area that ! 463: can have items to resolve attached, e.g. a domain can have hostnames attached to ! 464: resolve into IP addresses, and a reverse-zone can have IP addresses attached ! 465: that get resolved back into hostnames. Each zone has a file associated with it, ! 466: and a table within that file for resolving that particular zone. As is readily ! 467: apparent, their format in `named.conf` is strikingly similar, so I will ! 468: highlight just one of their records: ! 469: ! 470: #### zone diverge.org ! 471: ! 472: * `type` -- The type of a zone is usually of type "master" in all cases except ! 473: for the root zone `.` and for zones that a secondary (backup) service is ! 474: provided - the type obviously is "secondary" in the latter case. ! 475: ! 476: * `notify` -- Do you want to send out notifications to secondaries when your ! 477: zone changes? Obviously not in this setup, so this is set to "no". ! 478: ! 479: * `file` -- This option sets the filename in our `/etc/namedb` directory where ! 480: records about this particular zone may be found. For the "diverge.org" zone, ! 481: the file `/etc/namedb/diverge.org` is used. ! 482: ! 483: ### /etc/namedb/localhost ! 484: ! 485: For the most part, the zone files look quite similar, however, each one does ! 486: have some unique properties. Here is what the `localhost` file looks like: ! 487: ! 488: 1|$TTL 3600 ! 489: 2|@ IN SOA strider.diverge.org. root.diverge.org. ( ! 490: 3| 1 ; Serial ! 491: 4| 8H ; Refresh ! 492: 5| 2H ; Retry ! 493: 6| 1W ; Expire ! 494: 7| 1D) ; Minimum TTL ! 495: 8| IN NS localhost. ! 496: 9|localhost. IN A 127.0.0.1 ! 497: 10| IN AAAA ::1 ! 498: ! 499: Line by line: ! 500: ! 501: * *Line 1*: This is the Time To Live for lookups, which defines how long other ! 502: DNS servers will cache that value before discarding it. This value is ! 503: generally the same in all the files. ! 504: ! 505: * *Line 2*: This line is generally the same in all zone files except ! 506: `root.cache`. It defines a so-called "Start Of Authority" (SOA) header, which ! 507: contains some basic information about a zone. Of specific interest on this ! 508: line are "strider.diverge.org." and "root.diverge.org." (note the trailing ! 509: dots!). Obviously one is the name of this server and the other is the contact ! 510: for this DNS server, in most cases root seems a little ambiguous, it is ! 511: preferred that a regular email account be used for the contact information, ! 512: with the "@" replaced by a "." (for example, mine would be ! 513: "jrf.diverge.org."). ! 514: ! 515: * *Line 3*: This line is the serial number identifying the "version" of the ! 516: zone's data set (file). The serial number should be incremented each time ! 517: there is a change to the file, the usual format is to either start with a ! 
518: value of "1" and increase it for every change, or use a value of "YYYYMMDDNN" ! 519: to encode year (YYYY), month (MM), day (DD) and change within one day (NN) in ! 520: the serial number. ! 521: ! 522: * *Line 4*: This is the refresh rate of the server, in this file it is set to ! 523: once every 8 hours. ! 524: ! 525: * *Line 5*: The retry rate. ! 526: ! 527: * *Line 6*: Lookup expiry. ! 528: ! 529: * *Line 7*: The minimum Time To Live. ! 530: ! 531: * *Line 8*: This is the Nameserver line, which uses a "NS" resource record to ! 532: show that "localhost" is the only DNS server handing out data for this zone ! 533: (which is "@", which indicates the zone name used in the `named.conf` file, ! 534: i.e. "diverge.org") is, well, "localhost". ! 535: ! 536: * *Line 9*: This is the localhost entry, which uses an "A" resource record to ! 537: indicate that the name "localhost" should be resolved into the IP-address ! 538: 127.0.0.1 for IPv4 queries (which specifically ask for the "A" record). ! 539: ! 540: * *Line 10*: This line is the IPv6 entry, which returns ::1 when someone asks ! 541: for an IPv6-address (by specifically asking for the AAAA record) of ! 542: "localhost.". ! 543: ! 544: ### /etc/namedb/zone.127.0.0 ! 545: ! 546: This is the reverse lookup file (or zone) to resolve the special IP address ! 547: 127.0.0.1 back to "localhost": ! 548: ! 549: 1| $TTL 3600 ! 550: 2| @ IN SOA strider.diverge.org. root.diverge.org. ( ! 551: 3| 1 ; Serial ! 552: 4| 8H ; Refresh ! 553: 5| 2H ; Retry ! 554: 6| 1W ; Expire ! 555: 7| 1D) ; Minimum TTL ! 556: 8| IN NS localhost. ! 557: 9| 1.0.0 IN PTR localhost. ! 558: ! 559: In this file, all of the lines are the same as the localhost zonefile with ! 560: exception of line 9, this is the reverse lookup (PTR) record. The zone used here ! 561: is "@" again, which got set to the value given in `named.conf`, i.e. ! 562: "127.in-addr.arpa". This is a special "domain" which is used to do ! 563: reverse-lookup of IP addresses back into hostnames. For it to work, the four ! 564: bytes of the IPv4 address are reserved, and the domain "in-addr.arpa" attached, ! 565: so to resolve the IP address "127.0.0.1", the PTR record of ! 566: "1.0.0.127.in-addr.arpa" is queried, which is what is defined in that line. ! 567: ! 568: ### /etc/namedb/diverge.org ! 569: ! 570: This zone file is populated by records for all of our hosts. Here is what it ! 571: looks like: ! 572: ! 573: 1| $TTL 3600 ! 574: 2| @ IN SOA strider.diverge.org. root.diverge.org. ( ! 575: 3| 1 ; serial ! 576: 4| 8H ; refresh ! 577: 5| 2H ; retry ! 578: 6| 1W ; expire ! 579: 7| 1D ) ; minimum seconds ! 580: 8| IN NS strider.diverge.org. ! 581: 9| IN MX 10 strider.diverge.org. ; primary mail server ! 582: 10| IN MX 20 samwise.diverge.org. ; secondary mail server ! 583: 11| strider IN A 192.168.1.1 ! 584: 12| samwise IN A 192.168.1.2 ! 585: 13| www IN CNAME samwise.diverge.org. ! 586: 14| worm IN A 192.168.1.3 ! 587: ! 588: There is a lot of new stuff here, so lets just look over each line that is new ! 589: here: ! 590: ! 591: * *Line 9*: This line shows our mail exchanger (MX), in this case it is ! 592: "strider". The number that precedes "strider.diverge.org." is the priority ! 593: number, the lower the number their higher the priority. The way we are setup ! 594: here is if "strider" cannot handle the mail, then "samwise" will. ! 595: ! 596: * *Line 11*: CNAME stands for canonical name, or an alias for an existing ! 597: hostname, which must have an A record. So we have aliased `` ! 
598: to `samwise.diverge.org`. ! 599: ! 600: The rest of the records are simply mappings of IP address to a full name (A ! 601: records). ! 602: ! 603: ### /etc/namedb/1.168.192 ! 604: ! 605: This zone file is the reverse file for all of the host records, to map their IP ! 606: numbers we use on our private network back into hostnames. The format is similar ! 607: to that of the "localhost" version with the obvious exception being the ! 608: addresses are different via the different zone given in the `named.conf` file, ! 609: i.e. "0.168.192.in-addr.arpa" here: ! 610: ! 611: 1|$TTL 3600 ! 612: 2|@ IN SOA strider.diverge.org. root.diverge.org. ( ! 613: 3| 1 ; serial ! 614: 4| 8H ; refresh ! 615: 5| 2H ; retry ! 616: 6| 1W ; expire ! 617: 7| 1D ) ; minimum seconds ! 618: 8| IN NS strider.diverge.org. ! 619: 9|1 IN PTR strider.diverge.org. ! 620: 10|2 IN PTR samwise.diverge.org. ! 621: 11|3 IN PTR worm.diverge.org. ! 622: ! 623: ### /etc/namedb/root.cache ! 624: ! 625: This file contains a list of root name servers for your server to query when it ! 626: gets requests outside of its own domain that it cannot answer itself. Here are ! 627: first few lines of a root zone file: ! 628: ! 629: ; ! 630: ; This file holds the information on root name servers needed to ! 631: ; initialize cache of Internet domain name servers ! 632: ; (e.g. reference this file in the "cache . <file>" ! 633: ; configuration file of BIND domain name servers). ! 634: ; ! 635: ; This file is made available by InterNIC ! 636: ; under anonymous FTP as ! 637: ; file /domain/db.cache ! 638: ; on server ! 639: ; -OR- RS.INTERNIC.NET ! 640: ; ! 641: ; last update: Jan 29, 2004 ! 642: ; related version of root zone: 2004012900 ! 643: ; ! 644: ; ! 645: ; formerly NS.INTERNIC.NET ! 646: ; ! 647: . 3600000 IN NS A.ROOT-SERVERS.NET. ! 648: A.ROOT-SERVERS.NET. 3600000 A 198.41.0.4 ! 649: ; ! 650: ; formerly NS1.ISI.EDU ! 651: ; ! 652: . 3600000 NS B.ROOT-SERVERS.NET. ! 653: B.ROOT-SERVERS.NET. 3600000 A 192.228.79.201 ! 654: ; ! 655: ; formerly C.PSI.NET ! 656: ; ! 657: . 3600000 NS C.ROOT-SERVERS.NET. ! 658: C.ROOT-SERVERS.NET. 3600000 A 192.33.4.12 ! 659: ; ! 660: ... ! 661: ! 662: This file can be obtained from ISC at <> and usually comes ! 663: with a distribution of BIND. A `root.cache` file is included in the NetBSD ! 664: operating system's "etc" set. ! 665: ! 666: This section has described the most important files and settings for a DNS ! 667: server. Please see the BIND documentation in `/usr/src/dist/bind/doc/bog` and ! 668: [named.conf(5)]() ! 669: for more information. ! 670: ! 671: ## Using DNS ! 672: ! 673: In this section we will look at how to get DNS going and setup "strider" to use ! 674: its own DNS services. ! 675: ! 676: Setting up named to start automatically is quite simple. In `/etc/rc.conf` ! 677: simply set `named=yes`. Additional options can be specified in `named_flags`, ! 678: for example, I like to use `-g nogroup -u nobody`, so a non-root account runs ! 679: the "named" process. ! 680: ! 681: In addition to being able to startup "named" at boot time, it can also be ! 682: controlled with the `ndc` command. In a nutshell the `ndc` command can stop, ! 683: start or restart the named server process. It can also do a great many other ! 684: things. Before use, it has to be setup to communicate with the "named" process, ! 685: see the [ndc(8)]() ! 686: and ! 687: [named.conf(5)]() ! 688: man pages for more details on setting up communication channels between "ndc" ! 689: and the "named" process. ! 690: ! 
691: Next we want to point "strider" to itself for lookups. We have two simple steps, ! 692: first, decide on our resolution order. On a network this small, it is likely ! 693: that each host has a copy of the hosts table, so we can get away with using ! 694: `/etc/hosts` first, and then DNS. However, on larger networks it is much easier ! 695: to use DNS. Either way, the file where order of name services used for ! 696: resolution is determined is `/etc/nsswitch.conf` (see ! 697: [[`nsswitch.conf`|guide/net-practice#ex-nsswitch]]. Here is part of a typical ! 698: `nsswitch.conf`: ! 699: ! 700: ... ! 701: group_compat: nis ! 702: hosts: files dns ! 703: netgroup: files [notfound=return] nis ! 704: ... ! 705: ! 706: The line we are interested in is the "hosts" line. "files" means the system uses ! 707: the `/etc/hosts` file first to determine ip to name translation, and if it can't ! 708: find an entry, it will try DNS. ! 709: ! 710: The next file to look at is `/etc/resolv.conf`, which is used to configure DNS ! 711: lookups ("resolution") on the client side. The format is pretty self explanatory ! 712: but we will go over it anyway: ! 713: ! 714: domain diverge.org ! 715: search diverge.org ! 716: nameserver 192.168.1.1 ! 717: ! 718: In a nutshell this file is telling the resolver that this machine belongs to the ! 719: "diverge.org" domain, which means that lookups that contain only a hostname ! 720: without a "." gets this domain appended to build a FQDN. If that lookup doesn't ! 721: succeed, the domains in the "search" line are tried next. Finally, the ! 722: "nameserver" line gives the IP addresses of one or more DNS servers that should ! 723: be used to resolve DNS queries. ! 724: ! 725: To test our nameserver we can use several commands, for example: ! 726: ! 727: # host sam ! 728: sam.diverge.org has address 192.168.1.2 ! 729: ! 730: As can be seen, the domain was appended automatically here, using the value from ! 731: `/etc/resolv.conf`. Here is another example, the output of running ! 732: `host`: ! 733: ! 734: $ host ! 735: is an alias for. ! 736: has address 68.142.226.38 ! 737: has address 68.142.226.39 ! 738: has address 68.142.226.46 ! 739: has address 68.142.226.50 ! 740: has address 68.142.226.51 ! 741: has address 68.142.226.54 ! 742: has address 68.142.226.55 ! 743: has address 68.142.226.32 ! 744: ! 745: Other commands for debugging DNS besides ! 746: [host(1)]() are ! 747: [nslookup(8)]() ! 748: and ! 749: [dig(1)](). Note ! 750: that ! 751: [ping(8)]() ! 752: is *not* useful for debugging DNS, as it will use whatever is configured in ! 753: `/etc/nsswitch.conf` to do the name-lookup. ! 754: ! 755: At this point the server is configured properly. The procedure for setting up ! 756: the client hosts are easier, you only need to setup `/etc/nsswitch.conf` and ! 757: `/etc/resolv.conf` to the same values as on the server. ! 758: ! 759: ## Setting up a caching only name server ! 760: ! 761: A caching only name server has no local zones; all the queries it receives are ! 762: forwarded to the root servers and the replies are accumulated in the local ! 763: cache. The next time the query is performed the answer will be faster because ! 764: the data is already in the server's cache. Since this type of server doesn't ! 765: handle local zones, to resolve the names of the local hosts it will still be ! 766: necessary to use the already known `/etc/hosts` file. ! 767: ! 768: Since NetBSD supplies defaults for all the files needed by a caching only ! 
769: server, it only needs to be enabled and started and is immediately ready for ! 770: use! To enable named, put `named=yes` into `/etc/rc.conf`, and tell the system ! 771: to use it adding the following line to the `/etc/resolv.conf` file: ! 772: ! 773: # cat /etc/resolv.conf ! 774: nameserver 127.0.0.1 ! 775: ! 776: Now we can start named: ! 777: ! 778: # sh /etc/rc.d/named restart ! 779: ! 780: ### Testing the server ! 781: ! 782: Now that the server is running we can test it using the ! 783: [nslookup(8)]() ! 784: program: ! 785: ! 786: $ nslookup ! 787: Default server: localhost ! 788: Address: 127.0.0.1 ! 789: ! 790: > ! 791: ! 792: Let's try to resolve a host name, for example "": ! 793: ! 794: > ! 795: Server: localhost ! 796: Address: 127.0.0.1 ! 797: ! 798: Name: ! 799: Address: 204.152.190.12 ! 800: ! 801: If you repeat the query a second time, the result is slightly different: ! 802: ! 803: > ! 804: Server: localhost ! 805: Address: 127.0.0.1 ! 806: ! 807: Non-authoritative answer: ! 808: Name: ! 809: Address: 204.152.190.12 ! 810: ! 811: As you've probably noticed, the address is the same, but the message ! 812: `Non-authoritative answer` has appeared. This message indicates that the answer ! 813: is not coming from an authoritative server for the domain NetBSD.org but from ! 814: the cache of our own server. ! 815: ! 816: The results of this first test confirm that the server is working correctly. ! 817: ! 818: We can also try the ! 819: [host(1)]() and ! 820: [dig(1)]() commands, ! 821: which give the following result. ! 822: ! 823: $ host ! 824: has address 204.152.190.12 ! 825: $ ! 826: $ dig ! 827: ! 828: ; <<>> DiG 8.3 <<>> ! 829: ;; res options: init recurs defnam dnsrch ! 830: ;; got answer: ! 831: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19409 ! 832: ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 0 ! 833: ;; QUERY SECTION: ! 834: ;;, type = A, class = IN ! 835: ! 836: ;; ANSWER SECTION: ! 837:. 23h32m54s IN A 204.152.190.12 ! 838: ! 839: ;; AUTHORITY SECTION: ! 840: NetBSD.org. 23h32m54s IN NS uucp-gw-1.pa.dec.com. ! 841: NetBSD.org. 23h32m54s IN NS uucp-gw-2.pa.dec.com. ! 842: NetBSD.org. 23h32m54s IN NS ns.NetBSD.org. ! 843: NetBSD.org. 23h32m54s IN NS adns1.berkeley.edu. ! 844: NetBSD.org. 23h32m54s IN NS adns2.berkeley.edu. ! 845: ! 846: ;; Total query time: 14 msec ! 847: ;; FROM: miyu to SERVER: 127.0.0.1 ! 848: ;; WHEN: Thu Nov 25 22:59:36 2004 ! 849: ;; MSG SIZE sent: 32 rcvd: 175 ! 850: ! 851: As you can see ! 852: [dig(1)]() gives ! 853: quite a bit of output, the expected answer can be found in the "ANSWER SECTION". ! 854: The other data given may be of interest when debugging DNS problems. ! 855:
https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/guide/dns.mdwn?annotate=1.1
Yahoo Gemini Reporting API - Advertiser id not available
The flow to getting data from the Yahoo Gemini reporting API is cumbersome and consists of multiple steps. Here’s a guide how to do it and avoid the common error E40000_INVALID_INPUT - Entity (publisher or advertiser id) not available.
We have previously produced an article on how to go through the Yahoo Gemini API Oauth authentication. And while it is a good resource to get started working with the API, it is easy to get stuck at the next steps as we were made aware by a recent conversation.
Specifically the
E40000_INVALID_INPUT - Entity (publisher or advertiser id) not available error code seems to be a recurring issue. Although the fix is not necessarily intuitive, it's luckily easy to get around.
Assuming that you have gone through the
Authentication steps from the previous article, you have everything you need for this guide. Let’s start the process of collecting performance reporting data from the API of Yahoo Gemini (Verizon Media Native).
Reminder, a fresh Access Token and the advertiser id
Now depending on how long ago you went throuhg the
Authentication process, your
Access Token might have expired. This should happen 60 minutes after receiving it. If this is the case, below is how to use your
Refresh Token to get a new
Access Token.
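The snippet below leans on a few things set up in the previous authentication article: the requests helpers, the OAuth base_url, and the code, refresh_token, and headers values you ended up with there. If you are starting from a fresh script, the scaffolding might look roughly like this; the credential strings are placeholders, and the Basic-auth header is just one common way to pass the client credentials, so adjust it to match whatever you used before.

from base64 import b64encode
from requests import get, post

base_url = 'https://api.login.yahoo.com/'   # Yahoo OAuth host used by the token calls

client_id = 'YOUR_CLIENT_ID'            # placeholder
client_secret = 'YOUR_CLIENT_SECRET'    # placeholder
code = 'AUTH_CODE_FROM_THE_BROWSER'     # placeholder
refresh_token = 'SAVED_REFRESH_TOKEN'   # placeholder

auth = b64encode(f'{client_id}:{client_secret}'.encode()).decode()
headers = {'Authorization': f'Basic {auth}'}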
data = {
    'grant_type': 'refresh_token',
    'redirect_uri': 'oob',
    'code': code,
    'refresh_token': refresh_token
}

response = post(base_url + 'oauth2/get_token', headers=headers, data=data)

if response.ok:
    access_token = response.json()['access_token']
And let's test that it's working and also collect our
Advertiser id which will be needed for the next steps.
headers = { 'Authorization': f'Bearer {access_token}', 'Accept': 'application/json', 'Content-Type': 'application/json' }
response = get('', headers=headers)
The
Advertiser id should now be in the response. Use the code below to save it. Keep in mind that this is assuming that you only have one
Advertiser or you want the data for your first
Advertiser. If that is not the case, you need to explore the other advertisers that are returned in the response.
if response.ok: adv_id = response.json()['response'][0]['id']
It could be a good idea to print it out to make sure you have it before moving on. Overall it should look something like the screenshot below.
Request a report and get around E40000_INVALID_INPUT
Now we will need some additional libraries for the following steps. The datetime and json modules ship with Python, so there is nothing extra to install for them; pandas may need to be installed separately if you don't already have it.
import datetime
import json
import pandas as pd
Next up is deciding the time range that we want performance data for. For this, we use the
datetime library to give us today’s date and whichever date we want. For the sake of this guide only data from yesterday will be collected.
report_date_from = (datetime.date.today() - datetime.timedelta(days=1)).strftime("%Y-%m-%d")
report_date_to = (datetime.date.today() - datetime.timedelta(days=1)).strftime("%Y-%m-%d")
In order to do more dates, just change the date in
report_date_from. For example
days=30 there will give you the data for the last 30 days. The
strftime makes sure that the date is formatted in the way that the Yahoo Gemini API reads it.
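For instance, to pull the last 30 days instead of just yesterday, the same pattern can be reused (a small sketch built directly on the code above):

report_date_from = (datetime.date.today() - datetime.timedelta(days=30)).strftime("%Y-%m-%d")
report_date_to = (datetime.date.today() - datetime.timedelta(days=1)).strftime("%Y-%m-%d")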
It is now time to define our
cube for the report request. A
cube is Yahoo terminology for a json dictionary that contains all information related to creating a performance report. You can read more about cubes on the API documentation. The cube for our example looks like this:
payload = {
    "cube": "performance_stats",
    "fields": [
        {"field": "Day"},
        {"field": "Campaign Name"},
        {"field": "Campaign ID"},
        {"field": "Ad ID"},
        {"field": "Ad Title"},
        {"field": "Impressions"},
        {"field": "Clicks"},
        {"field": "Spend"},
    ],
    "filters": [
        {"field": "Advertiser ID", "operator": "=", "value": adv_id},
        {"field": "Day", "operator": "between", "from": report_date_from, "to": report_date_to}
    ]
}
As you can see, we are including the previously defined
Advertiser id and report dates in the
cube. The
fields and
filters can be changed to include other data. For
filters the
Advertiser id and
Day are required, but you can also filter for specific campaigns or hours. The available metrics and filters can be found here in the Yahoo Gemini API documentation.
So far so good. Now let’s actually request the API to create a report for us
E40000_INVALID_INPUT - Entity (publisher or advertiser id) not available
Getting close to the issue here. First defining the
url we are making the request to and the
headers.
url = ''
headers = {
    'Authorization': f'Bearer {access_token}',
    'Accept': 'application/json',
    'Content-Type': 'application/json'
}
Then the request
response = post(url + "?reportFormat=json", data=payload, headers=headers)
And the response we are getting:
E40000_INVALID_INPUT - Solution
All that is actually needed to avoid this issue is to make sure that the
cube is really formatted in the json format. In Python this is done by using
json.dumps().
So let’s retry
json_payload = json.dumps(payload, separators=(',', ':'))
response = post(url + "?reportFormat=json", data=json_payload, headers=headers)
Looks a lot better:
How to get the reporting data from the API
You have now successfully requested the report, but more is needed for you to actually get the data. As you might recall from the last response, it had a
jobId and the
status was
submitted. This means that the API has started working on generating the report. In order to actually download the report data, you need to make additional requests to see if the report is ready, and when it is, then download it. Generally it doesn’t take long for the report to be prepared, but expect to wait a few minutes if you are requesting a lot of data.
Use the following code to check if the report is ready
jobId = response.json()['response']['jobId']
headers = {'Authorization': f'Bearer {access_token}'}

response = get(url + '/{}?advertiserId={}'.format(jobId, adv_id), headers=headers)
response.json()
If the
status is now
completed there is a url in the
jobResponse which is what you’ll use to collect the actual report. If it’s not yet
completed give it a little longer and then try again.
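If you would rather not re-run the status check by hand, a small polling loop can wait for the report instead. This is only a sketch built on the variables already defined above (url, jobId, adv_id, headers); the 'status' field path, the 10-second wait and the 30-attempt cap are assumptions you may need to adjust.

import time

jobResponse = None
for attempt in range(30):  # arbitrary cap on the number of polls
    response = get(url + '/{}?advertiserId={}'.format(jobId, adv_id), headers=headers)
    # 'status' is assumed to sit next to 'jobId'/'jobResponse' in the response body
    status = response.json()['response']['status']
    if status == 'completed':
        jobResponse = response.json()['response']['jobResponse']
        break
    time.sleep(10)  # wait a bit before asking again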
jobResponse = response.json()['response']['jobResponse']
report = get(jobResponse)

if report.ok:
    report = report.json()
This returns a json formatted response with the headers and data fields separated. There are several ways of cleaning it up. If you are used to working with
Pandas, this would directly turn it into a
dataframe
pd.DataFrame(report['rows'], columns = [i['fieldName'] for i in report['fields']])
And all of it should look something like this
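If you also want to persist the table for later use, pandas can write it straight to a file. A minimal sketch (the file name is just an example):

df = pd.DataFrame(report['rows'], columns=[i['fieldName'] for i in report['fields']])
df.to_csv('gemini_report.csv', index=False)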
Start adapting the data collection
If you made it to this point, you should have successfully retrieved performance data from the Yahoo Gemini API. Now feel free to go back and start making changes to make sure you get all the data you want. Expand the time ranges to collect more historic data and include other relevant fields.
You can find the official documentation for the Verizon Media Native API online: information about using the API for reporting is here, and documentation for campaign management is here.
For a better way of collecting Native Advertising performance data, preparing reports and getting insights, check out our Native Ads Reporting and Optimization tool. We are currently offering a free 30 day trial, get in touch for more information.
|
https://joinative.com/yahoo-gemini-api-E40000-invalid-input
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
“xticklabel interpreter latex” Code Answer
set(groot,'defaultAxesTickLabelInterpreter','latex');
set(groot,'defaulttextinterpreter','latex');
set(groot,'defaultLegendInterpreter','latex');
|
https://www.codegrepper.com/code-examples/whatever/xticklabel+interpreter+latex
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Create a C# program to ask the user for a distance in meters and the time taken (hours, minutes, seconds).
Display the speed, in meters per second, kilometers per hour and miles per hour.
1 mile = 1609 meters.
1200
0
13
45
Speed in meters/sec is 1.454545
Speed in km/h is 5.236363
Speed in miles/h is 3.254421
using System;
public class FloatSpeedUnits
{
public static void Main(string[] args)
{
float distance = Convert.ToSingle(Console.ReadLine());
float hour = Convert.ToSingle(Console.ReadLine());
float min = Convert.ToSingle(Console.ReadLine());
float sec = Convert.ToSingle(Console.ReadLine());
float timeBySeconds = (hour*3600) + (min*60) + sec;
float mps = distance/timeBySeconds;
float kph = (distance/1000.0f) / (timeBySeconds/3600.0f);
float mph = kph/1.609f;
Console.WriteLine("Speed in meters/sec is {0}", mps);
Console.WriteLine("Speed in km/h is {0}", kph);
Console.WriteLine("Speed in miles/h is {0}", mph);
}
}
|
https://www.exercisescsharp.com/data-types-a/float-value/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Chapter 3. Configuring the JBoss EAP for OpenShift Image for Your Java Application
The JBoss EAP for OpenShift image is preconfigured for basic use with your Java applications. However, you can configure the JBoss EAP instance inside the image. The recommended method is to use the OpenShift S2I process, together with application template parameters and environment variables.
Any configuration changes made on a running container will be lost when the container is restarted or terminated.
This includes any configuration changes made using scripts that are included with a traditional JBoss EAP installation, for example
add-user.sh or the management CLI.
It is strongly recommended that you use the OpenShift S2I process, together with application template parameters and environment variables, to make any configuration changes to the JBoss EAP instance inside the JBoss EAP for OpenShift image.
3.1. How the JBoss EAP for OpenShift S2I Process Works
The variable
EAP_HOME is used to denote the path to the JBoss EAP installation inside the JBoss EAP for OpenShift image.
The S2I process for JBoss EAP for OpenShift works as follows:
- If a pom.xml file is present in the source code repository, a Maven build process is triggered that uses the contents of the $MAVEN_ARGS environment variable.

  Although you can specify custom Maven arguments or options with the $MAVEN_ARGS environment variable, Red Hat recommends that you use the $MAVEN_ARGS_APPEND environment variable to do this. The $MAVEN_ARGS_APPEND variable takes the default arguments from $MAVEN_ARGS and appends the options from $MAVEN_ARGS_APPEND to it.

  By default, the OpenShift profile uses the Maven package goal, which includes system properties for skipping tests (-DskipTests) and enabling the Red Hat GA repository (-Dcom.redhat.xpaas.repo).

  Note: To use Maven behind a proxy on the JBoss EAP for OpenShift image, set the $HTTP_PROXY_HOST and $HTTP_PROXY_PORT environment variables. Optionally, you can also set the $HTTP_PROXY_USERNAME, HTTP_PROXY_PASSWORD, and HTTP_PROXY_NONPROXYHOSTS variables.

- The results of a successful Maven build are copied to the EAP_HOME/standalone/deployments/ directory inside the JBoss EAP for OpenShift image. This includes all JAR, WAR, and EAR files from the source repository specified by the $ARTIFACT_DIR environment variable. The default value of $ARTIFACT_DIR is the Maven target directory.

- All files in the configuration source repository directory are copied to the EAP_HOME/standalone/configuration/ directory inside the JBoss EAP for OpenShift image. If you want to use a custom JBoss EAP configuration file, it should be named standalone-openshift.xml.

- All files in the modules source repository directory are copied to the EAP_HOME/modules/ directory inside the JBoss EAP for OpenShift image.
See Artifact Repository Mirrors for additional guidance on how to instruct the S2I process to utilize the custom Maven artifacts repository mirror.
3.2. Configuring JBoss EAP for OpenShift Using Environment Variables
Using environment variables is the recommended method of configuring the JBoss EAP for OpenShift image. See the OpenShift documentation for instructions on specifying environment variables for application containers and build containers.
For example, you can set the JBoss EAP instance’s management username and password using environment variables when creating your OpenShift application:
oc new-app --template=eap72-basic-s2i \
 -p IMAGE_STREAM_NAMESPACE=eap-demo \
 -p SOURCE_REPOSITORY_URL= \
 -p SOURCE_REPOSITORY_REF=openshift \
 -p CONTEXT_DIR=kitchensink \
 -e ADMIN_USERNAME=myspecialuser \
 -e ADMIN_PASSWORD=myspecialp@ssw0rd
Available environment variables for the JBoss EAP for OpenShift image are listed in Reference Information.
3.3. Build Extensions and Project Artifacts
The JBoss EAP for OpenShift image extends database support in OpenShift using various artifacts. These artifacts are included in the built image through different mechanisms:
- S2I artifacts that are injected into the image during the S2I process.
- Runtime artifacts from environment files provided through the OpenShift Secret mechanism.
Support for using the Red Hat-provided internal datasource drivers with the JBoss EAP for OpenShift image is now deprecated for JDK 8 image streams. It is recommended that you use JDBC drivers obtained from your database vendor for your JBoss EAP applications.
The following internal datasources are no longer provided with the JBoss EAP for OpenShift JDK 11 image:
- MySQL
- PostgreSQL
For more information about installing drivers, see Modules, Drivers, and Generic Deployments.
For more information on configuring JDBC drivers with JBoss EAP, see JDBC drivers in the JBoss EAP Configuration Guide.
3.3.1. S2I Artifacts
The S2I artifacts include modules, drivers, and additional generic deployments that provide the necessary configuration infrastructure required for the deployment. This configuration is built into the image during the S2I process so that only the datasources and associated resource adapters need to be configured at runtime.
See Artifact Repository Mirrors for additional guidance on how to instruct the S2I process to utilize the custom Maven artifacts repository mirror.
3.3.1.1. Modules, Drivers, and Generic Deployments
There are a few options for including these S2I artifacts in the JBoss EAP for OpenShift image:
- Include the artifact in the application source deployment directory. The artifact is downloaded during the build and injected into the image. This is similar to deploying an application on the JBoss EAP for OpenShift image.
- Include the CUSTOM_INSTALL_DIRECTORIES environment variable, a comma-separated list of directories used for installation and configuration of artifacts for the image during the S2I process. There are two methods for including this information in the S2I:

  An install.sh script in the nominated installation directory. The install script executes during the S2I process and operates with impunity.

  install.sh Script Example

  #!/bin/bash
  injected_dir=$1
  source /usr/local/s2i/install-common.sh
  install_deployments ${injected_dir}/injected-deployments.war
  install_modules ${injected_dir}/modules
  configure_drivers ${injected_dir}/drivers.env
The install.sh script is responsible for customizing the base image using APIs provided by install-common.sh. install-common.sh contains functions that are used by the install.sh script to install and configure the modules, drivers, and generic deployments.
Functions contained within install-common.sh:
install_modules
configure_drivers
install_deployments
Modules
A module is a logical grouping of classes used for class loading and dependency management. Modules are defined in the EAP_HOME/modules/ directory of the application server. Each module exists as a subdirectory, for example EAP_HOME/modules/org/apache/. Each module directory then contains a slot subdirectory, which defaults to main and contains the module.xml configuration file and any required JAR files.
For more information about configuring module.xml files for MySQL and PostgreSQL JDBC drivers, see the Datasource Configuration Examples in the JBoss EAP Configuration Guide.
Example module.xml File
<?xml version="1.0" encoding="UTF-8"?> <module xmlns="urn:jboss:module:1.0" name="org.apache.derby"> <resources> <resource-root <resource-root </resources> <dependencies> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module>
Example module.xml File for PostgreSQL Datasource
<?xml version="1.0" encoding="UTF-8"?> <module xmlns="urn:jboss:module:1.0" name="org.postgresql"> <resources> <resource-root </resources> <dependencies> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module>
Example module.xml File for MySQL Connect/J 8 Datasource
<?xml version="1.0" encoding="UTF-8"?> <module xmlns="urn:jboss:module:1.0" name="com.mysql"> <resources> <resource-root </resources> <dependencies> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module>
Note: The ".Z" in mysql-connector-java-8.0.Z.jar indicates the version of the JAR file downloaded. The file can be renamed, but the name must match the name in the module.xml file.
The install_modules function in install.sh copies the respective JAR files to the modules directory in JBoss EAP, along with the module.xml.
Drivers
Drivers are installed as modules. The driver is then configured in install.sh by the configure_drivers function, the configuration properties for which are defined in a runtime artifact environment file.
Example drivers.env File
#DRIVER
DRIVERS=DERBY
DERBY_DRIVER_NAME=derby
DERBY_DRIVER_MODULE=org.apache.derby
DERBY_DRIVER_CLASS=org.apache.derby.jdbc.EmbeddedDriver
DERBY_XA_DATASOURCE_CLASS=org.apache.derby.jdbc.EmbeddedXADataSource
The MySQL and PostgreSQL datasources are no longer provided as pre-configured internal datasources. However, these drivers can still be installed as modules as described in Modules, Drivers, and Generic Deployments.
The mechanism follows the Derby driver example and uses S2I artifacts. Create a drivers.env file for each datasource to be installed.
Example drivers.env File for MySQL Datasource
#DRIVER
DRIVERS=MYSQL
MYSQL_DRIVER_NAME=mysql
MYSQL_DRIVER_MODULE=org.mysql
MYSQL_DRIVER_CLASS=com.mysql.cj.jdbc.Driver
MYSQL_XA_DATASOURCE_CLASS=com.mysql.jdbc.jdbc2.optional.MysqlXADataSource
Example drivers.env File for PostgreSQL Datasource
#DRIVER
DRIVERS=POSTGRES
POSTGRES_DRIVER_NAME=postgres
POSTGRES_DRIVER_MODULE=org.postgresql
POSTGRES_DRIVER_CLASS=org.postgresql.Driver
POSTGRES_XA_DATASOURCE_CLASS=org.postgresql.xa.PGXADataSource
For information about download locations for various drivers, such as MySQL or PostgreSQL, see JDBC Driver Download Locations in the Configuration Guide.
Generic Deployments
Deployable archive files, such as JARs, WARs, RARs, or EARs, can be deployed from an injected image using the
install_deployments function supplied by the API in
install-common.sh.
If the CUSTOM_INSTALL_DIRECTORIES environment variable has been declared but no install.sh scripts are found in the nominated installation directories, the artifact directories themselves are copied to their respective destinations in the built image. This is a simpler alternative, and requires the artifacts to be structured appropriately.
3.3.2. Runtime Artifacts
3.3.2.1. Datasources
There are three types of datasources:
- Default internal datasources. These are PostgreSQL, MySQL, and MongoDB. These datasources are available on OpenShift by default through the Red Hat Registry and do not require additional environment files to be configured for JDK 8 image streams. Set the DB_SERVICE_PREFIX_MAPPING environment variable to the name of the OpenShift service for the database to be discovered and used as a datasource.
- Other internal datasources. These are datasources not available by default through the Red Hat Registry but run on OpenShift. Configuration of these datasources is provided by environment files added to OpenShift Secrets.
- External datasources that are not run on OpenShift. Configuration of external datasources is provided by environment files added to OpenShift Secrets.
Example: Datasource Environment File
# derby datasource
ACCOUNTS_DERBY_DATABASE=accounts
ACCOUNTS_DERBY_JNDI=java:/accounts-ds
ACCOUNTS_DERBY_DRIVER=derby
ACCOUNTS_DERBY_USERNAME=derby
ACCOUNTS_DERBY_PASSWORD=derby
ACCOUNTS_DERBY_TX_ISOLATION=TRANSACTION_READ_UNCOMMITTED
ACCOUNTS_DERBY_JTA=true
# Connection info for xa datasource
ACCOUNTS_DERBY_XA_CONNECTION_PROPERTY_DatabaseName=/home/jboss/source/data/databases/derby/accounts
# _HOST and _PORT are required, but not used
ACCOUNTS_DERBY_SERVICE_HOST=dummy
ACCOUNTS_DERBY_SERVICE_PORT=1527
The
DATASOURCES property is a comma-separated list of datasource property prefixes. These prefixes are then appended to all properties for that datasource. Multiple datasources can then be included in a single environment file. Alternatively, each datasource can be provided in separate environment files.
Datasources contain two types of properties: connection pool-specific properties and database driver-specific properties. Database driver-specific properties use the generic
XA_CONNECTION_PROPERTY, because the driver itself is configured as a driver S2I artifact. The suffix of the driver property is specific to the particular driver for the datasource.
In the above example,
ACCOUNTS is the datasource prefix,
XA_CONNECTION_PROPERTY is the generic driver property, and
DatabaseName is the property specific to the driver.
The datasources environment files are added to the OpenShift Secret for the project. These environment files are then called within the template using the
ENV_FILES environment property, the value of which is a comma-separated list of fully qualified environment files as shown below.
{
    "Name": "ENV_FILES",
    "Value": "/etc/extensions/datasources1.env,/etc/extensions/datasources2.env"
}
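As a hedged sketch of how such a file typically ends up in a Secret (the secret name below is illustrative, and the way the file gets mounted at /etc/extensions depends on your template or deployment configuration), the standard oc client can create the Secret from the environment file:

oc create secret generic eap-app-extensions --from-file=datasources1.env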
3.3.2.2. Resource Adapters
Configuration of resource adapters is provided by environment files added to OpenShift Secrets.
Table 3.1. Resource Adapter Properties
The
RESOURCE_ADAPTERS property is a comma-separated list of resource adapter property prefixes. These prefixes are then appended to all properties for that resource adapter. Multiple resource adapter can then be included in a single environment file. In the example below,
MYRA is used as the prefix for a resource adapter. Alternatively, each resource adapter can be provided in separate environment files.
Example: Resource Adapter Environment File
#RESOURCE_ADAPTER
RESOURCE_ADAPTERS=MYRA
MYRA_ID=myra
MYRA_ARCHIVE=myra.rar
MYRA_CONNECTION_CLASS=org.javaee7.jca.connector.simple.connector.outbound.MyManagedConnectionFactory
MYRA_CONNECTION_JNDI=java:/eis/MySimpleMFC
The resource adapter environment files are added to the OpenShift Secret for the project namespace. These environment files are then called within the template using the
ENV_FILES environment property, the value of which is a comma-separated list of fully qualified environment files as shown below.
{ "Name": "ENV_FILES", "Value": "/etc/extensions/resourceadapter1.env,/etc/extensions/resourceadapter2.env" }
3.4. Deployment Considerations for the JBoss EAP for OpenShift Image
3.4.1. Scaling Up and Persistent Storage Partitioning
There are two methods for deploying JBoss EAP with persistent storage: single-node partitioning, and multi-node partitioning.
Single-node partitioning stores the JBoss EAP data store directory, including transaction data, in the storage volume.
Multi-node partitioning creates additional, independent
split-n directories to store the transaction data for each JBoss EAP pod, where
n is an incremental integer. This communication is not altered if a JBoss EAP pod is updated, goes down unexpectedly, or is redeployed. When the JBoss EAP pod is operational again, it reconnects to the associated split directory and continues as before. If a new JBoss EAP pod is added, a corresponding
split-n directory is created for that pod.
To enable the multi-node configuration you must set the
SPLIT_DATA parameter to
true. This results in the server creating independent
split-n directories for each instance within the persistent volume which are used as their data store.
Due to the different storage methods of single-node and multi-node partitioning, changing a deployment from single-node to multi-node results in the application losing all data previously stored in the data directory, including messages, transaction logs, and so on. This is also true if changing a deployment from multi-node to single-node, as the storage paths will not match.
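As a sketch, and keeping the warning above in mind, the multi-node configuration can be switched on for an existing deployment with the oc client; the deployment configuration name here is illustrative:

oc set env dc/eap-app SPLIT_DATA=true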
3.4.2. Scaling Down and Transaction Recovery
When the JBoss EAP for OpenShift image is deployed using a multi-node configuration, it is possible for unexpectedly terminated transactions to be left in the data directory of a terminating pod if the cluster is scaled down.
See manual transaction recovery to complete these branches.
|
https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.2/html/getting_started_with_jboss_eap_for_openshift_online/configuring_eap_openshift_image
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Web scraping is a process of retrieving data through automated means. It is essential in many scenarios, such as competitor price monitoring, real estate listing, lead generation, sentiment monitoring, news article or financial data aggregation, and more.
The first decision you want to make when writing web scraping code is deciding on the programming language you’ll use. You can employ a number of languages for this purpose, such as Python, JavaScript, Java, Ruby, or C#. All of the mentioned languages offer powerful web scraping capabilities.
In this article, we’ll explore C# and show you how to create a real-life C# public web scraper. Keep in mind that even if we’re using C#, you’ll be able to adapt this information to all languages supported by the .NET platform, including VB.NET and F#.
Navigation:
C# Web Scraping Tools
Before writing any code, the first step is choosing the suitable C# library or package. These C# libraries or packages will have the functionality to download HTML pages, parse them, and make it possible to extract the required data from these pages. Some of the most popular C# packages are as follows:
- ScrapySharp
- Puppeteer Sharp
- Html Agility Pack
Html Agility Pack is the most popular C# package, with almost 50 million downloads from Nuget alone. There are multiple reasons behind its popularity, the most significant one being the ability of this HTML parser to download web pages directly or using a browser. This package is tolerant of malformed HTML and supports XPath. Also, it can even parse local HTML files; thus, we’ll use this package further in this article.
ScrapySharp adds even more functionality to C# programming. This package supports CSS Selectors and can simulate a web browser. While ScrapySharp is considered a powerful C# package, it’s not very actively maintained among programmers.
Puppeteer Sharp is a .NET port of the famous Puppeteer project for Node.js. It uses the same Chromium browser to load the pages. Also, this package employs the async-await style of code, enabling asynchronous, promise-based behavior. Puppeteer Sharp might be a good option if you are already familiar with this C# package and need a browser to render pages.
Building a web scraper with C#
As mentioned, now we’ll demonstrate how to write a C# public web scraping code that will use Html Agility Pack. We will be employing the .NET 5 SDK with Visual Studio Code. This code has been tested with .NET Core 3 and .NET 5, and it should work with other versions of .NET.
We’ll be following the hypothetical scenario: scraping a bookstore and collecting book names and prices. Let’s set up the development environment before writing a C# web scraper.
Setup Development environment
For a C# development environment, install Visual Studio Code. Note that Visual Studio and Visual Studio Code are two completely different applications, even though both can be used for writing C# code.
Once Visual Studio Code is installed, install .NET 5.0 or newer. You can also use .NET Core 3.1. After installation is complete, open the terminal and run the following command to verify that .NET CLI or Command Line Interface is working properly:
dotnet --version
This should output the version number of the .NET installed.
Project Structure and Dependencies
The code will be a part of a .NET project. To keep it simple, create a console application. Then, make a folder where you’ll want to write the C# code. Open the terminal and navigate to that folder. Now, type in this command:
dotnet new console
The output of this command should be the confirmation that the console application has been successfully created.
Now, it’s time to install the required packages. To use C# to scrape public web pages, Html Agility Pack will be a good choice. You can install it for this project using this command:
dotnet add package HtmlAgilityPack
Install one more package so that we can easily export the scraped data to a CSV file:
dotnet add package CsvHelper
If you are using Visual Studio instead of Visual Studio Code, click File, select New Solution, and press on Console Application. To install the dependencies, follow these steps:
- Choose Project;
- Click on Manage Project Dependencies. This will open the NuGet Packages window;
- Finally, search for CsvHelper, choose it, and click on Add Packages.
Now that the packages have been installed, we can move on to writing a code for web scraping the bookstore.
Download and Parse Web Pages
The first step of any web scraping program is to download the HTML of a web page. This HTML will be a string that you’ll need to convert into an object that can be processed further. The latter part is called parsing. Html Agility Pack can read and parse files from local files, HTML strings, any URL, or even a browser.
In our case, all we need to do is get HTML from a URL. Instead of using .NET native functions, Html Agility Pack provides a convenient class –
HtmlWeb. This class offers a
Load function that can take a URL and return an instance of the
HtmlDocument class, which is also part of the package we use. With this information, we can write a function that takes a URL and returns an instance of
HtmlDocument.
Open
Program.cs file and enter this function in the class
Program:
// Parses the URL and returns HtmlDocument object
static HtmlDocument GetDocument(string url)
{
    HtmlWeb web = new HtmlWeb();
    HtmlDocument doc = web.Load(url);
    return doc;
}
With this, the first step of the code is complete. The next step is to parse the document.
Parsing the HTML: Getting Book Links
In this part of the code, we’ll be extracting the required information from the web page. At this stage, a document is now an object of type
HtmlDocument. This class exposes two functions to select the elements. Both functions accept XPath as input and return
HtmlNode or
HtmlNodeCollection. Here is the signature of these two functions:
public HtmlNodeCollection SelectNodes(string xpath);
public HtmlNode SelectSingleNode(string xpath);
Let’s discuss
SelectNodes first.
For this example – C# web scraper – we are going to scrape all the book details from this page. First, it needs to be parsed so that all the links to the books can be extracted. To do that, open this page in the browser, right-click any of the book links and click Inspect. This will open the Developer Tools.
After spending some time with the markup, your XPath to select should be something like this:
//h3/a
This XPath can now be passed to the
SelectNodes function.
HtmlDocument doc = GetDocument(url);
HtmlNodeCollection linkNodes = doc.DocumentNode.SelectNodes("//h3/a");
Note that the
SelectNodes function is being called by the
DocumentNode attribute of the
HtmlDocument.
The variable
linkNodes is a collection. We can write a
foreach loop over it and get the
href from each link one by one. There is one tiny problem that we need to take care of – the links on the page are relative. Hence, they need to be converted into an absolute URL before we can scrape these extracted links.
For converting the relative URLs, we can make use of the
Uri class. We can use this constructor to get a
Uri object with an absolute URL.
Uri(Uri baseUri, string? relativeUri);
Once we have the Uri object, we can simply check the
AbsoluteUri property to get the complete URL.
We can write all this in a function to keep the code organized.
static List<string> GetBookLinks(string url)
{
    var bookLinks = new List<string>();
    HtmlDocument doc = GetDocument(url);
    HtmlNodeCollection linkNodes = doc.DocumentNode.SelectNodes("//h3/a");
    var baseUri = new Uri(url);
    foreach (var link in linkNodes)
    {
        string href = link.Attributes["href"].Value;
        bookLinks.Add(new Uri(baseUri, href).AbsoluteUri);
    }
    return bookLinks;
}
In this function, we are starting with an empty
List<string> object. In the
foreach loop, we are adding all the links to this object and returning it.
Now, it’s time to modify the
Main() function so that we can test the C# code that we have written so far. Modify the function so that it looks like this:
static void Main(string[] args)
{
    var bookLinks = GetBookLinks("");
    Console.WriteLine("Found {0} links", bookLinks.Count);
}
To run this code, open the terminal and navigate to the directory which contains this file, and type in the following:
dotnet run
The output should be as follows:
Found 20 links
Let’s move to the next part where we will be processing all the links to get the book data.
Parsing the HTML: Getting Book Details
At this point, we have a list of strings that contain the URLs of the books. We can simply write a loop that will first get the document using the
GetDocument function that we’ve already written. After that, we’ll use the
SelectSingleNode function to extract the title and the price of the book.
To keep the data organized, let’s start with a class. This class will represent a book. This class will have two properties –
Title and
Price. It will look like this:
public class Book
{
    public string Title { get; set; }
    public string Price { get; set; }
}
Now, open a book page in the browser and create the XPath for the
Title – //h1. Creating an XPath for the price is a little trickier because the additional books at the bottom have the same class applied.
The XPath of the price will be this:
//div[contains(@class,"product_main")]/p[@class="price_color"]
Note that XPath contains double quotes. We will have to escape these characters by prefixing them with a backslash.
Now we can use the
SelectSingleNode function to get the Node, and then employ the
InnerText property to get the text contained in the element. We can organize everything in a function as follows:
static List<Book> GetBookDetails(List<string> urls)
{
    var books = new List<Book>();
    foreach (var url in urls)
    {
        HtmlDocument document = GetDocument(url);
        var titleXPath = "//h1";
        var priceXPath = "//div[contains(@class,\"product_main\")]/p[@class=\"price_color\"]";
        var book = new Book();
        book.Title = document.DocumentNode.SelectSingleNode(titleXPath).InnerText;
        book.Price = document.DocumentNode.SelectSingleNode(priceXPath).InnerText;
        books.Add(book);
    }
    return books;
}
This function will return a list of
Book objects. It’s time to update the
Main() function as well:
static void Main(string[] args)
{
    var bookLinks = GetBookLinks("");
    Console.WriteLine("Found {0} links", bookLinks.Count);
    var books = GetBookDetails(bookLinks);
}
The final part of this web scraping project is to export the data in a CSV.
Exporting Data
If you haven’t yet installed the
CsvHelper, you can do this by running the command
dotnet add package CsvHelper from within the terminal.
The export function is pretty straightforward. First, we need to create a
StreamWriter and send the CSV file name as the parameter. Next, we will use this object to create a
CsvWriter. Finally, we can use the
WriteRecords function to write all the books in just one line of code.
To ensure that all the resources are closed properly, we can use the
using block. We can also wrap everything in a function as follows:
static void exportToCSV(List<Book> books)
{
    using (var writer = new StreamWriter("./books.csv"))
    using (var csv = new CsvWriter(writer, CultureInfo.InvariantCulture))
    {
        csv.WriteRecords(books);
    }
}
Finally, we can call this function from the
Main() function:
static void Main(string[] args)
{
    var bookLinks = GetBookLinks("");
    var books = GetBookDetails(bookLinks);
    exportToCSV(books);
}
That’s it! To run this code, open the terminal and run the following command:
dotnet run
Within seconds, you will have a
books.csv file created.
Conclusion
You can use multiple packages if you want to write a web scraper with C#. In this article, we’ve shown how to employ Html Agility Pack, a powerful and easy-to-use package. This was a simple example that can be enhanced further; for instance, you can try extending this code with logic that handles multiple catalogue pages, as sketched below.
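Here is a rough sketch of what such a pagination loop could look like. It reuses the GetBookLinks and GetBookDetails functions from this article; the page URL pattern and page count are placeholders, since they depend on the site you are scraping (the article's own URL is not shown above).

static List<Book> ScrapeAllPages(string pageUrlPattern, int pageCount)
{
    var allBooks = new List<Book>();
    for (int page = 1; page <= pageCount; page++)
    {
        // e.g. pageUrlPattern = "https://example.com/catalogue/page-{0}.html" (placeholder)
        string pageUrl = string.Format(pageUrlPattern, page);
        var links = GetBookLinks(pageUrl);
        allBooks.AddRange(GetBookDetails(links));
    }
    return allBooks;
}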
If you want to know more about how web scraping works using other programming languages, you can have a look at the guide on web scraping with Python. We also have a step-by-step tutorial on how to write a web scraper using JavaScript.
People also ask
Is C# good for web scraping?
Similar to Python, C# is widely used for web scraping. When deciding on which programming language to choose, selecting the one you’re most familiar with is essential. Yet, you’ll be able to find example web scrapers in both Python and C#.
Is web scraping legal?
Web scraping may be legal if it is carried out without breaching any laws. Yet, before engaging in any scraping activity, you should obtain professional legal advice regarding your particular case. We have this topic covered in our blog post “Is web scraping legal?” if you wish to dig deeper into the topic.
|
https://oxylabs.io/blog/csharp-web-scraping
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Definition of Flask Environment Variables
Flask environment variables are defined as a set of parameters that facilitate the running of applications developed in Flask. An environment for Flask applications is essentially a directory or placeholder that contains all the prerequisites for successfully running an application; it can be thought of as the surroundings created for an application to run in. Environment variables come in handy because they let users tweak parameters and pull in the tools needed to build, run, test, and operationalize a Flask application without hampering the system-wide variables. They also allow users to categorize the type of environment for the Flask application, the details of which we will learn in the forthcoming sections!
Syntax:
In this section, we will look at the syntax involved in environment variables, so that when we later study how they work in depth, this early peek at the syntax will help us relate more closely to the working code and promote quicker understanding.
Configure environment variable one by one:
appConfig = Flask(__name__)
appConfig.config['<config variable>'] = <config variable’s value>
Configure environment variable from config.cfg:
In the python code:
appConfig = Flask(__name__)
appConfig.config.from_envvar('APP_SETTINGS')
In command window:
export APP_SETTINGS = <path to the config file>
Setting the FLASK_APP environment variable:
In command window:
export FLASK_APP=<python file name>
Setting the FLASK_ENV environment variable:
In command window:
export FLASK_ENV=<environment name>
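To tie the config-from-file syntax above together, here is a minimal sketch. The file name settings.cfg and the DEBUG/SECRET_KEY keys are illustrative assumptions, not fixed names; the config file uses plain Python syntax with uppercase keys.

# settings.cfg (illustrative file name)
# DEBUG = True
# SECRET_KEY = "27eduCBA09"

from flask import Flask

appConfig = Flask(__name__)
# APP_SETTINGS must hold the path to the config file,
# e.g. export APP_SETTINGS=/path/to/settings.cfg
appConfig.config.from_envvar('APP_SETTINGS')

@appConfig.route("/")
def index():
    return "Debug mode is {}".format(appConfig.config["DEBUG"])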
How Environment Variables work in Flask?
Before we start learning about environment variables, it is necessary for us to know about the environments which Flask supports. It is the availability of different environments that is the reason for having multiple configurations, because every configuration file is environment-related. The categories into which the environment is segregated are:
- Development: This environment consists of a set of processes and programming tools that facilitate the developer to build the program or the software product. This environment acts like an experimental playground for the developer so that one can test, debug, experiment, and then finalize on the tools that will go into the final program.
- Testing: This environment relates to the system having configurations built for enabling developers to run test cases enhancing the confidence in the actual product. The development environment might consist of various tools as a result of experimentation, and we don’t know the dependencies, if applicable, on any of the tools. As a result, the testing environment helps create an identical environment every time one needs to test the product so that the integration in the production environment is seamless.
- Production: This environment consists of a set of processes and programming tools that facilitate the end-user for their operations. The software developed in the development environment and after getting tested in the testing environment gets actually put into the production environment for operation of the end-users. This is a real-time setting that has all the required hardware installed and relied on for any commercial purposes.
Now that we have a fair idea of all the environments applicable in the context of Flask, let us look at what environment variables are and how they work. In the introduction section, we went through the definition of environment variables extensively; in short, they are a set of parameters that facilitate running the application in the environment we intend it to run in.
While we use these parameters to ease off some repetitive coding for our Flask application, we should keep in mind that their usage is totally voluntary and one can easily switch to either of the ways as per the requirements of the application being developed. To understand the working, let us take an example and go through it step by step. From one of our earlier articles on Flask sessions, we had the following code:
Python code:
from flask import Flask, redirect, url_for, render_template, request, session
from datetime import timedelta

appFlask = Flask(__name__)
appFlask.secret_key = "27eduCBA09"

@appFlask.route("/login")
def login():
    session["user"] = "user1"
    return '''<h1>The session value is: {}</h1>'''.format(session["user"])

if __name__ == "__main__":
    appFlask.run(debug=True)
Command-line:
python <PYTHON FILE NAME>
We see here, in the last 2 lines, that the main function consists of code that we can easily factor out, so that we can run the application through the command line using the FLASK_APP environment variable. The modified code after removing the last 2 lines will look like:
Python code:
from flask import Flask, redirect, url_for, render_template, request, session
from datetime import timedelta

appFlask = Flask(__name__)
appFlask.secret_key = "27eduCBA09"
appFlask.permanent_session_lifetime = timedelta(minutes=5)

@appFlask.route("/login")
def login():
    session["user"] = "user1"
    return '''<h1>The session value is: {}</h1>'''.format(session["user"])
Command-line:
set FLASK_APP=<PYTHON FILE NAME>
flask run
Using the above command-line environment variable, we can run the Python code from the command line and, in addition, drop the two extra lines of boilerplate code. You can think of the FLASK_APP variable as telling Flask which Python file to turn into the Flask application, while the flask run part of the command then starts that application. In effect, the FLASK_APP variable incorporates the two lines of code that we avoided.
Similarly, the FLASK_ENV variable sets the environment on which we want our Flask application to run. For example, if we put FLASK_ENV=development, the environment will be switched to development. By default, the environment is set to production. Now let us look at some examples so that one can easily understand "what happens in reality"!
Examples
Let us discuss examples of Flask Environment Variables.
Example #1
Setting the FLASK_APP environment variable
Syntax
Before FLASK_APP variable (The python file name is flaskSession.py):
python flaskSession.py
After FLASK_APP variable (The python file name is flaskSessionAppVariable.py):
set FLASK_APP=flaskSessionAppVariable.py
flask run
Output:
Example #2
Set the session to development instead of production:
Syntax:
set FLASK_ENV=development
Output:
Conclusion
In this article, we have learned about the environments that Flask facilitates, along with the variables that not only allow developers to write smaller and neater code but also reduce many errors that might creep in due to negligence. The rest is on the readers to start experimenting with environment variables in their daily development!
Recommended Articles
This is a guide to Flask Environment Variables. Here we discuss the definition, How Environment Variables work in Flask? examples with code implementation respectively. You may also have a look at the following articles to learn more –
|
https://www.educba.com/flask-environment-variables/?source=leftnav
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
A trace based streamer. More...
#include <introspected-doxygen.h>
ns3::UdpTraceClient is accessible through the following paths with Config::Set and Config::Connect:
No TraceSources are defined for this type.
Sends UDP packets based on a trace file of an MPEG4 stream. Trace files could be downloaded from: (the first 2 lines of the file should be removed). A valid trace file is a file with 4 columns:
-1- the first one represents the frame index
-2- the second one indicates the type of the frame: I, P or B
-3- the third one indicates the time at which the frame was generated by the encoder
-4- the fourth one indicates the frame size in bytes
If no valid MPEG4 trace file is provided to the application, the trace from the g_defaultEntries array will be loaded.
Definition at line 43 of file udp-trace-client.cc.
creates a traceBasedStreamer application
Definition at line 96 of file udp-trace-client.cc.
References NS_LOG_FUNCTION, and SetTraceFile().
Definition at line 176 of file udp-trace-client.cc.
References ns3::Application::DoDispose(), and NS_LOG_FUNCTION.
Definition at line 168 of file udp-trace-client.cc.
References NS_LOG_FUNCTION.
Definition at line 161 of file udp-trace-client.cc.
References NS_LOG_FUNCTION.
set the destination IP address and port
Definition at line 120 of file udp-trace-client.cc.
References NS_LOG_FUNCTION.
set the trace file to be used by the application
Definition at line 147 of file udp-trace-client.cc.
References NS_LOG_FUNCTION.
Referenced by UdpTraceClient().
Application specific startup code.
The StartApplication method is called at the start time specified by Start This method should be overridden by all or most application subclasses.
Reimplemented from ns3::Application.
Definition at line 241 of file udp-trace-client.cc.
References ns3::Socket::Bind(), ns3::Socket::Bind6(), ns3::Socket::Connect(), ns3::Ipv4Address::ConvertFrom(), ns3::Ipv6Address::ConvertFrom(), ns3::Socket::CreateSocket(), ns3::Application::GetNode(), ns3::Ipv4Address::IsMatchingType(), ns3::Ipv6Address::IsMatchingType(), ns3::TypeId::LookupByName(), ns3::MakeNullCallback(), NS_LOG_FUNCTION, ns3::Simulator::Schedule(), ns3::Seconds(), and ns3::Socket::SetRecvCallback().
Application specific shutdown code.
The StopApplication method is called at the stop time specified by Stop This method should be overridden by all or most application subclasses.
Reimplemented from ns3::Application.
Definition at line 265 of file udp-trace-client.cc.
References ns3::Simulator::Cancel(), and NS_LOG_FUNCTION.
|
https://coe.northeastern.edu/research/krclab/crens3-doc/structns3_1_1_udp_trace_client.html
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Copy the current viewing file's path to clipboard
Arunprasad Rajkumar 9 years ago • updated by Alexis Sa 8 years ago • 5
It would be nice if we were able to get the full path of viewing file. Suppose if user right clicking on the file-tab it should show option to copy the current viewing file's path to clipboard :)
import sublime, sublime_plugin, os

class PathToClipboardCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        sublime.set_clipboard(self.view.file_name())

class FilenameToClipboardCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        sublime.set_clipboard(os.path.basename(self.view.file_name()))

class FiledirToClipboardCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        branch, leaf = os.path.split(self.view.file_name())
        sublime.set_clipboard(branch)
Hi, it is a plugin right? Great work. Thanks for that :D
BTW, do I need to map this to certain key combination? how to make this as usable one? Please help me
Find the class - strip off the "Command" - then convert the CamelCase by separating the words with underscores where you see capital letters. Like following:
Add the following to Key bindings - User:
{ "keys": ["ctrl+alt+c"], "command": "filename_to_clipboard" },
Or to add the commands to the right-click menu, you can create a file named Context.sublime-menu in your User folder containing the following:
[
{ "command": "filename_to_clipboard", "caption": "Filename to Clipboard" },
{ "command": "filedir_to_clipboard", "caption": "Filedir to Clipboard" },
]
It is built into core ST.
Right click on the body of the open file (not the tab, the bit with the text in it).
What you want is right there on the menu, and it has been there for years.
No need for all the argy-bargy in this thread - it is already supplied.
|
https://sublimetext.userecho.com/en/communities/1/topics/4429-copy-the-current-viewing-files-path-to-clipboard
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
In Computer vision we often deal with several tasks like Image classification, segmentation, and object detection. While building a deep learning model for image classification over a very large volume of the database of images we make use of transfer learning to save the training time and increase the performance of the model. Transfer learning is the process where we can use the pre-trained model weights of the model like VGG16, ResNet50, Inception, etc that were trained on the ImageNet dataset.
Also, when we train an image classification model we always want a model that does not get overfitted. Overfitted models are those models that perform good in training but poorly while prediction on testing data is computed. This is the reason we make use of regularization to avoid overfitting situations like Dropouts and Batch Normalization.
Through this article, we will explore the usage of dropouts with the Resnet pre-trained model. We will build two different models one without making use of dropout and one with dropout. At last, we will compare both the models with graphs and performance. For this experiment, we are going to use the CIFAR10 Dataset that is available in Keras and can also be found on Kaggle.
What we will learn from this article?
- What is ResNet 50 Architecture?
- How to use ResNet 50 for Transfer Learning?
- How to build a model with and without dropout using ResNet?
- Comparison of both the built models
What is ResNet50 Architecture?
ResNet was a model built for the ImageNet competition. It was the first model that was a very deep network, having more than 100 layers. It reduced the error rate down to 3.57% from the 7.32% shown by VGG. The main idea behind this network is the use of residual connections: instead of learning the original mapping directly, each block learns the residual. Read more about ResNet architecture here and also check the full Keras documentation.
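To make the residual idea concrete, here is a minimal sketch of a residual block using the Keras functional API. It is a simplified illustration only (the real ResNet50 uses bottleneck blocks with 1x1 convolutions and batch normalization), and it assumes the input tensor already has the same number of channels as filters so that the addition is valid.

from tensorflow.keras import layers

def residual_block(x, filters):
    shortcut = x                                               # identity (skip) connection
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)           # this path learns the residual F(x)
    y = layers.Add()([y, shortcut])                            # output is F(x) + x
    return layers.Activation("relu")(y)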
Dropout
Dropout is a regularization technique for reducing overfitting in neural networks by preventing complex co-adaptations on training data. It is an efficient way of performing model averaging with neural networks. The term dilution refers to the thinning of the weights. The term dropout refers to randomly "dropping out", or omitting, units (both hidden and visible) during the training process of a network.
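As a quick illustration of what dropout does (a hedged sketch, independent of the models built below), the Keras Dropout layer zeroes a random fraction of its inputs during training and scales the remaining values:

import numpy as np
import tensorflow as tf

layer = tf.keras.layers.Dropout(0.5)          # drop roughly half of the units
data = np.ones((1, 10), dtype="float32")
print(layer(data, training=True))             # random entries become 0, the rest are scaled by 1/(1-0.5)
print(layer(data, training=False))            # at inference time the input passes through unchanged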
Model Without Dropout
Now we will build the image classification model using ResNet without making use of dropouts. First, we will define all the required libraries and packages. Use the below code to import the same.
import tensorflow as tf
from keras import applications
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Dense, Dropout, Flatten
from keras import Model
from keras.applications.resnet50 import ResNet50
from keras.applications.resnet50 import preprocess_input
from tensorflow import keras
Now we will load the data. We are directly loading it from Keras, whereas you can also read the data downloaded from Kaggle. After loading, we will transform the labels, followed by defining the base model, that is, ResNet50. Use the below code for the same.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
y_train=to_categorical(y_train)
y_test=to_categorical(y_test)
base_model = ResNet50(include_top=False,weights='imagenet',input_shape=(32,32,3),classes=y_train.shape[1])
Now we will add the flatten and fully connected layers over this base model and will define the total number of classes as outputs. Use the below code for the same.
model_1 = Sequential()
model_1.add(base_model)
model_1.add(Flatten())
model_1.add(Dense(512, activation=('relu'), input_dim=2048))
model_1.add(Dense(256, activation=('relu')))
model_1.add(Dense(512, activation=('relu')))
model_1.add(Dense(128, activation=('relu')))
model_1.add(Dense(10, activation=('softmax')))
Now we will check the model summary followed by compiling and training the model. Use the below code for the same. We will be training the model for 15 epochs with batch size of 100.
model_1.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
model_1.fit(x_train,y_train,validation_data=(x_test,y_test),epochs=15,batch_size=100)
Now we will evaluate the model performance on the testing data. Use the below code for the same.
print("\nTraining Loss and Accuracy: ",model_1.evaluate(x_train,y_train))
print("\nTesting Loss and Accuracy: ",model_1.evaluate(x_test,y_test))
Model With Dropout
Now we will build the image classification model using ResNet, this time making use of dropouts. We will follow the same steps: first define the base model, then add flatten, fully connected, and dropout layers to it. Use the below code for the same.
model_2 = Sequential()
model_2.add(base_model)
model_2.add(Flatten())
model_2.add(Dense(512, activation=('relu'), input_dim=2048))
model_2.add(Dropout(0.3))   # dropout rate of 0.3 is an illustrative choice
model_2.add(Dense(256, activation=('relu')))
model_2.add(Dropout(0.3))
model_2.add(Dense(512, activation=('relu')))
model_2.add(Dropout(0.3))
model_2.add(Dense(128, activation=('relu')))
model_2.add(Dense(10, activation=('softmax')))
We will now compile the model and will train the model over the training data. Use the below code for the same.
model_2.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
history = model_2.fit(x_train,y_train,validation_data=(x_test,y_test),epochs=15,batch_size=100)
Now we will evaluate this model performance on the testing data. Use the below code for the same.
print("\nTraining Loss and Accuracy: ",model_2.evaluate(x_train,y_train))
print("\nTesting Loss and Accuracy: ",model_2.evaluate(x_test,y_test))
Now we will see the graph for model loss and accuracy for both the models; a plotting sketch is given below.
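The original plotting code is not reproduced here, so the following is a minimal matplotlib sketch. It assumes the history object returned by model_2.fit() above; recent TensorFlow/Keras versions expose the keys "accuracy"/"val_accuracy", while older ones use "acc"/"val_acc".

import matplotlib.pyplot as plt

plt.plot(history.history["accuracy"], label="training accuracy")
plt.plot(history.history["val_accuracy"], label="validation accuracy")
plt.title("Model accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.legend()
plt.show()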
Comparison of Training and Testing Accuracy and Loss
Conclusion
Through this article, we explored practically how using dropout increases the accuracy of the model built using ResNet architecture. Training the model for only 10 epochs gave the accuracy of 74% whereas without dropout it only went up to 62%. We did not make use of any preprocessing techniques. If we make use of such techniques like Data Augmentation we can get better performance of the model. Regularization techniques have also shown good results as the learning of the model gets improved.
|
https://analyticsindiamag.com/guide-to-building-a-resnet-model-with-without-dropout/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Product Information
Intelligent ERP Update: SAP S/4HANA Cloud 1908 – Technology Topics
Welcome to this new blogs series illustrating Technology Topic highlights for SAP S/4HANA Cloud starting with the latest release, SAP S/4HANA Cloud 1908.
As Technology Topics comprise many different aspects of SAP S/4HANA Cloud, this blog focuses on a selection of innovations which I consider most important for 1908. Please note that for one of these topics, there this an already well-perceived blog series in place. For this reason, I will not cover the topic in detail in my blog but instead point you to the respective offering.
For 1908, I will cover the following topics:
- Extensibility Templates
- Localization
- Situation Handling
- Analytics
- Intelligent Robotic Process Automation
- Hybrid Cloud ERP (link to separate blog)
Extensibility Templates
I would like to start off with the extensibility templates. This innovation is especially important for our SAP S/4HANA Cloud implementation partners as it allows them to significantly speed up their implementation projects by distributing in-app extensibility content across system landscapes. In order to use this new functionality, you have to activate the scope item for in-app extensibility which is called 1N9. The new functionality comes with three new apps:
- Export Extensibility Template
- Import Extensibility Template
- Extensibility Settings
The extended items, such as custom fields, custom objects, or business logic, are created in a separate name space in the quality system of the template provider (partner). In the ‘Export Extensibility Template’ app, these items can be added to an extensibility template and exported from the template provider system by means of a human readable .json file. To guarantee consistency, SAP S/4HANA Cloud checks during the saving process whether there are any dependencies to other extension items which need to also be included into the file.
Figure 1 – With the Export Extensibility App, you can download your extended items
Using the 'Import Extensibility Template' app, the exported template can be imported into the system of the template consumer (customer). There, it can be adapted according to the requirements of the respective template consumer. This, of course, leads to large efficiency gains: Template providers don't have to manually create the same in-app extensions in each and every template consumer system, but instead you can easily import them from the same template. As you can imagine, this significantly speeds up your projects and ensures consistency across your customer solutions.
The ‘Extensibility Settings’ app allows you to make the necessary settings for namespaces. As a template provider, you can register your own namespace and assign users to it to use the in-app extensibility tools to develop extensions in your own name space. You can create and download an installation file so that a template consumer can import your development objects. As a template consumer, you can register a template provider’s namespace to be able to import and publish extensions developed in the respective namespace. What is important to note is that a template consumer won’t be able to create items in the template provider’s namespace.
Disclaimer:
SAP is not responsible for any extensibility template that a customer uploads into his SAP S/4HANA cloud system. The use of an extensibility template is at the customer’s sole risk, and SAP shall not be responsible for any changes or modifications of customer data by or through an extensibility template (see SAP Help Portal for full disclaimer).
For more information, you can check out the following sources:
- Extensibility Templates on SAP Help Portal
- SAP Best Practice Explorer
- For information on how to obtain unique namespaces refer to SAP Note 2787809
Localization
Also for localization, we have good news to spread for 1908: With Czechia and Thailand, we offer two new country versions along with the corresponding languages Czech and Thai. Thanks to this, we now provide 42 country versions in 24 languages. Regarding the Treasury and Risk Management (TRM) localization, we have Turkey and South Korea as new kids on the block.
For more information, you can check out the following sources:
- SAP Best Practice Explorer
- Country/Region-Specific Functions on SAP Help Portal
- What’s New Viewer – SAP S/4HANA Cloud on SAP Help Portal
- Localization for Thailand on SAP Innovation Discovery
- Localization for Czechia on SAP Innovation Discovery
Situation Handling
For Situation Handling, there are several innovations with 1908 that I would like to draw your attention to.
‘My Situations’ App
The first one is the new ‘My Situations’ app which provides an overview of all open situations in a user‘s area of responsibility and serves as an additional channel informing the user about situations that need to be solved. The app allows you to directly navigate to the corresponding apps with the situation details. In addition, it even contains those situations where the notification option was not enabled.The relevant scope item is 31N.
Figure 2 – The ‘My Situations’ app displays all open situations within a user’s area of responsibility
In addition to this, there is also a new functionality for e-mail notifications available which you can configure in the ‘Manage Situation Types’ app. It allows users to be informed about important and urgent situations by e-mail and can be enabled per notification type in the user’s notification settings.
Figure 3 – The ‘Manage Situation Types’ app allows you to configure e-mail notifications for new situations
With SAP S/4HANA Cloud 1908, we now deliver 27 situation templates. Situation templates predefine business situations about which you want to inform specific members in your organization by using the Situation Handling Framework. Standard situation templates are preconfigured by SAP and intended for specific use cases.
Examples for situation templates are:
- Finance: House Bank Account to Be Checked (scope item BFA)
- Sourcing & Procurement: Low Number of Quotations Received (scope item 1XF)
For more information on Situation Handling, you can check out the following sources:
- Situation Handling on SAP Help Portal
- ‘My Situations’ app on SAP Help Portal
- New My Situations App on What’s New Viewer – SAP S/4HANA Cloud
- ‘Manage Situation Types’ app on SAP Help Portal
- E-Mail Notifications on What’s New Viewer – SAP S/4HANA Cloud
- SAP Best Practice Explorer
Analytics
Scope item 41E called ‘HR Analytics with SAP Analytics Cloud‘ supports HR and finance business users with key insights into the workforce and financial situation of the company. Key workforce metrics based on SAP SuccessFactors together with financial metrics based on S/4HANA Cloud have been curated into four dashboards. These dashboards allow for a unique integrated HR and finance view on the company’s current situation and history.
Figure 4 – The ‘Workforce Overview’ dashboard provides an overview on headcount, FTE, and employee turnover
The available charts bring financial aspects to HR and vice versa in dashboards for
- Workforce Overview (e.g. headcount, FTE, turnover)
- Diversity (e.g.gender, age groups)
- Performance (e.g. total workforce cost ratio, operating expense per FTE)
- Finance (e.g. balance sheet)
Figure 5 – The ‘Workforce Performance’ dashboard provides an overview on headcount, FTE, and employee turnover
For more information, check out the following sources:
- Scope item ‘SAP Best Practices for HR analytics with SAP Analytics Cloud’ on SAP Best Practices Explorer
SAP Intelligent Robotic Process Automation
The next topic that I'd like to mention briefly is SAP Intelligent Robotic Process Automation. SAP Intelligent RPA is a complete automation suite from SAP Leonardo and enables customers to achieve a high degree of automation by delivering robotic process automation, machine learning, and conversational artificial intelligence (AI) in an integrated way to automate business processes. Intelligent RPA accelerates digital transformation of business processes by automatically replicating tedious actions that have no added value. AI bots mimic users by learning user actions in the application's graphical user interface (GUI) and then execute them automatically by repeating those tasks directly in the GUI.
Figure 6 – SAP Best Practice provides prebuilt bots for SAP Intelligent RPA
To enable the automation within your SAP S/4HANA or SAP S/4HANA Cloud system, you activate the corresponding SAP Best Practices package. First, install the required components for SAP Intelligent Robotic Process Automation. Then, search for the correct or available version of the template bots within the package. If you're interested in seeing a practical example of how Intelligent RPA can be used with S/4HANA and S/4HANA Cloud, you can check out the blog from Ulrich Hauke that discusses a practical example from the Finance area: The Automated Upload of Manual Entries via API (scope item 4CA).
For more information, you can check out the following sources:
- SAP Best Practices for SAP Intelligent Robotic Process Automation integration with SAP S/4HANA on SAP Best Practice Explorer
- OpenSAP Course starting on September 17, 2019: SAP Intelligent Robotic Process Automation in a Nutshell
- Practical example from the Finance area: Intelligent ERP Update: SAP S/4HANA Cloud 1908 Release for Finance by Ulrich Hauke
Hybrid Cloud ERP (aka Two-Tier ERP)
Last but not least, I would like to quickly mention Hybrid Clour ERP, where we have the following four key innovations with 1908:
- Centralised management of pricing master
- Centralised financial closing
- Procurement of Project Services in HQ-Subsidiary model
- Monitoring of integrations across Hybrid Solutions with SAP Cloud ALM
For more information, you can check out the blog Intelligent ERP Update: Hybrid Cloud Deployment in SAP S/4HANA Cloud 1908 bySrivatsan Santhanam.
|
https://blogs.sap.com/2019/08/20/intelligent-erp-update-sap-s4hana-cloud-1908-cross-topics/?preview_id=829993
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
In this article, we will study what searching algorithms are and the types of searching algorithms i.e linear search and binary search in detail. We will learn their algorithm along with the python code and examples of the searching algorithms in detail. Lastly, we will understand the time complexity and application of the searching algorithm. So, let's get started!
What is a searching algorithm?
There is not even a single day when we don’t want to find something in our daily life. The same thing happens with the computer system. When the data is stored in it and after a certain amount of time the same data is to be retrieved by the user, the computer uses the searching algorithm to find the data. The system jumps into its memory, processes the search of data using the searching algorithm technique, and returns the data that the user requires. Therefore, the searching algorithm is the set of procedures used to locate the specific data from the collection of data. The searching algorithm is always considered to be the fundamental procedure of computing. And hence it is always said that the difference between the fast application and slower application is often decided by the searching algorithm used by the application.
There are many types of searching algorithms possible like linear search, binary search, jump search, exponential search, Fibonacci search, etc. In this article, we will learn linear search and binary search in detail with algorithms, examples, and python code.
What is Linear Search?
Linear search, also known as sequential search, is an algorithm to find an element within a collection of data. The algorithm begins from the first element of the list and checks every element until the expected element is found. If the element is not present in the list, the algorithm traverses the whole list and returns "element not found". Therefore, it is just a simple searching algorithm.
Example:
Consider the below array of elements. Now we have to find element a = 1 in the array given below.
We will start with the first element of the array, compare the first element with the element to be found. If the match is not found, we will jump to the next element of the array and compare it with the element to be searched i.e ‘a’.
If the element is found, we will return the index of that element else, we will return 'element not found'.
Linear Search Algorithm
LinearSearch(array, key)
    for each element in the array
        if element == value
            return its index
Python Program for Linear Search
def LinearSearch(array, n, k):
    for j in range(0, n):
        if (array[j] == k):
            return j
    return -1

array = [1, 3, 5, 7, 9]
k = 7
n = len(array)
result = LinearSearch(array, n, k)
if(result == -1):
    print("Element not found")
else:
    print("Element found at index: ", result)
Output
Element found at index: 3
Time Complexity of Linear Search
The running time complexity of the linear search algorithm is O(n) for N number of elements in the list as the algorithm has to travel through each and every element to find the desired element.
Applications of Linear Search
- Used to find the desired element from the collection of data when the dataset is small
- When the number of items to search through is small (fewer than about 100 items)
What is Binary Search?
Binary search is used with a similar concept, i.e to find the element from the list of elements. Binary search algorithms are fast and effective in comparison to linear search algorithms. The most important thing to note about binary search is that it works only on sorted lists of elements. If the list is not sorted, then the algorithm first sorts the elements using the sorting algorithm and then runs the binary search function to find the desired output. There are two methods by which we can run the binary search algorithm i.e, iterative method or recursive method. The steps of the process are general for both the methods, the difference is only found in the function calling.
Algorithm for Binary Search (Iterative Method)
do until the pointers low and high are equal
    mid = (low + high)/2
    if (k == arr[mid])
        return mid
    else if (k > arr[mid])    // k is on right side of mid
        low = mid + 1
    else                      // k is on left side of mid
        high = mid - 1
Algorithm for Binary Search (Recursive Method)
BinarySearch(array, k, low, high)
    if low > high
        return False
    else
        mid = (low + high) / 2
        if k == array[mid]
            return mid
        else if k > array[mid]        // k is on the right side
            return BinarySearch(array, k, mid + 1, high)
        else                          // k is on the left side
            return BinarySearch(array, k, low, mid - 1)
Example
Consider the following array on which the search is performed. Let the element to be found be k = 0.
Now, we will set two pointers pointing the low to the lowest position in the array and high to the highest position in the array.
Now, we will find the middle element of the array using the algorithm and set the mid pointer to it.
We will compare the mid element with the element to be searched and if it matches, we will return the mid element.
If the element to be searched is greater than the mid, we will set the low pointer to the "mid+1" element and run the algorithm again.
If the element to be searched is lower than the mid element, we will set the high pointer to the "mid-1" element and run the algorithm again.
We will repeat the same steps until the low pointer meets the high pointer and we find the desired element.
Python Code for Binary Search (Iterative Method)
def binarySearch(arr, k, low, high):
    while low <= high:
        mid = low + (high - low)//2
        if arr[mid] == k:
            return mid
        elif arr[mid] < k:
            low = mid + 1
        else:
            high = mid - 1
    return -1

arr = [1, 3, 5, 7, 9]
k = 5
result = binarySearch(arr, k, 0, len(arr)-1)
if result != -1:
    print("Element is present at index " + str(result))
else:
    print("Not found")
Output
Element is present at index 2
Python Code for Binary Search (Recursive Method)
def BinarySearch(arr, k, low, high):
    if high >= low:
        mid = low + (high - low)//2
        if arr[mid] == k:
            return mid
        elif arr[mid] > k:
            return BinarySearch(arr, k, low, mid-1)
        else:
            return BinarySearch(arr, k, mid + 1, high)
    else:
        return -1

arr = [1, 3, 5, 7, 9]
k = 5
result = BinarySearch(arr, k, 0, len(arr)-1)
if result != -1:
    print("Element is present at index " + str(result))
else:
    print("Not found")
Output
Element is present at index 2
Time complexity of Binary Search
The running time complexity for binary search is different for each scenario. The best-case time complexity is O(1) which means the element is located at the mid-pointer. The Average and Worst-Case time complexity is O(log n) which means the element to be found is located either on the left side or on the right side of the mid pointer. Here, n indicates the number of elements in the list.
The space complexity of the binary search algorithm is O(1).
Applications of Binary Search
- The binary search algorithm is used in the libraries of Java, C++, etc
- It is used in another additional program like finding the smallest element or largest element in the array
- It is used to implement a dictionary
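As an aside on the first point above, Python's standard library also ships a ready-made binary search in the bisect module; the following sketch finds the same element as the hand-written versions.

import bisect

arr = [1, 3, 5, 7, 9]
k = 5
i = bisect.bisect_left(arr, k)    # index of the first element >= k
if i < len(arr) and arr[i] == k:
    print("Element is present at index", i)
else:
    print("Not found")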
Difference Between Linear Search and Binary Search
Linear search works on both sorted and unsorted lists, checks the elements one by one, and runs in O(n) time. Binary search works only on sorted lists, halves the search space at every step, and runs in O(log n) time, which makes it much faster for large datasets.
Conclusion
As studied, linear and binary search algorithms have their own importance depending on the application. We often need to find a particular item of data amongst hundreds, thousands, or millions of records, and linear and binary search help us do exactly that.
|
https://favtutor.com/blogs/searching-algorithms
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Note: This is part 3 of the four-part Learning GatsbyJs series. This learning post is still in active development and updated regularly.
In the previous part 2 of the four-part series, step-by-step procedures to extend & modify the 'hello world' site components to add new pages, style the site, and add global shared components like <Layout /> with header, footer and other components were discussed. In this part, installing & configuring Gatsby plugins to extend the site to get content data & transform functionality will be over-viewed with a working case example.
Part 1: Learning GatsbyJS – Setup & Installation
Part 2: Understanding GatsbyJS Building Blocks
Part 3: An Overview of Gatsby Plugins & GraphQL (this post)
Part 4: Learning to Programmatically Create Pages in Gatsby
The objective of this learning-note post series is to explore GatsbyJs to build a simple SPA blog site and document step-by-step procedures to install & setup Gatsby and get an overview of its data layer.
Goal
The main goal of this learning-note post is to get an overview of the Gatsby data layer. How data in React components is fetched from local or external APIs is described in a previous post. Gatsby, however, makes use of GraphQL, a query language, to fetch data from local & external APIs together with Gatsby plugins. To quote the Gatsby tutorial doc: "Gatsby uses GraphQL to enable components to declare the data they need."
GraphQL Use in Gatsby
GraphQL, a data query language, was developed by Facebook to solve their need to fetch only the needed data from external APIs. Gatsby uses this GraphQL, a popular & powerful technology, to load desired data into its React components.
Send a GraphQL query to your API and get exactly what you need, nothing more and nothing less. GraphQL queries always return predictable results. Apps using GraphQL are fast and stable because they control the data they get, not the server. Source: GraphQL
In Gatsby, GraphQL is used to query data using a special syntax from its internal files as well as from external third party APIs. GraphQL serves as an interface between Gatsby and its data sources.
To quote from Gatsby docs “GraphiQL is the GraphQL integrated development environment (IDE). It’s a powerful (and all-around awesome) tool you’ll use often while building Gatsby websites.”
In the next section, some use case examples of GraphQL are discussed while using a few Gatsby plugins.
Note: More detail discussion on GraphQL use in Gatsby will be discussed in a separate post Deep Diving into GraphQL use in Gatsby (in preparation).
Gatsby Plugins
Gatsby plugins are Node.js packages and are installed using NPM. Gatsby utilizes JAMstack (JavaScript, APIs & Markup) to build projects.
Gatsby plugins library is robust to extend & modify functionality of GatsbyJs core. There are three categories of plugins:
- Source Plugins: Data in Gatsby sites can be from internal local files and/or external databases, APIs, CMSs, etc. The source plugins fetch data from their sources. This type of plugin adds nodes in the Gatsby data system and thus permits data transformation with transformer plugins into a usable format. For example, if we would like to fetch data from local files, the gatsby-source-filesystem plugin knows how to fetch data from the file system. Similarly, the gatsby-source-wordpress plugin fetches data from the WordPress API.
- Transformer Plugins: The data fetched from source plugins are in various file formats. The transformer plugins transform the raw data fetched by the source plugins into a usable format to build Gatsby sites. For example, the gatsby-transformer-remark plugin transforms data in markdown files into HTML to display in a browser.
- Functional Plugins: These groups of plugins extend the functionality of Gatsby. For example, gatsby-plugin-react-helmet enhances Gatsby functionality with react-helmet to manipulate the head of the Gatsby document (e.g., to add SEO etc.).
In Gatsby sites, most content is created with markdown, GraphQL is used to query the markdown files (like an API), and delivery happens through React components.
In this learning-note post, the following two Gatsby source and Gatsby transformer plugins will be installed and explored to extend the Gatsby site project. To get started, the following two plugins will be installed with npm in the project folder:
#! install plugin in project folder
npm install --save gatsby-transformer-remark gatsby-source-filesystem
Gatsby transformer remark: This plugin uses the Remark library & ecosystem to parse markdown files in .md format into HTML.
Gatsby source filesystem: This Gatsby source plugin parses files within a directory for further parsing by other plugins & helps to read the markdown files.
Configuring Plugins
Most plugins have their default configuration but can be customized by overwriting with options.
Installing Working Example Site
To start fresh, a new site named ‘hello-plugin‘ was setup with the Gatsby default starter hello-world as described previously.
Starting Point
As a starting point, the site was setup similar to the one described in the previous post, with a shared <Layout /> component and three page components. The commonly used sections of the site (header, main navigation, footer) and global styles are included in the layout.js component as shown below:
layout.js (middle) and header.js components as described in the previous post.
The <Layout /> & <Header /> Components
To refresh, the <Layout /> and refactored <Header /> components from the previous posts are shown in the middle-panel & right-panel, respectively (above). A browser display of the starting point of the project site is shown below:
header.js, footer.js and layout.js components with basic styling.
Extending With Plugins & GraphQL Query
Some common site data like site title, description, author etc., can be added as a siteMetadata object at one location in gatsby-config.js, located at the root of the Gatsby site project. These data can be accessed from other components by referencing that location.
1: Editing gatsby-config.js File
Some basic site data like title (for the entire site), pagetitle (for the page), and description are added in the gatsby-config.js file as shown below.
// gatsby-config.js
module.exports = {
  siteMetadata: {
    title: 'Hello siteMetadata',
    pagetitle: `Title from siteMetadata`,
    description: 'Gatsby - Learning by Doing!',
  },
  plugins: [
  ],
}
After restarting the development server, the above information stored in the siteMetadata object is available for reference from Gatsby page components with a GraphQL query.
2: Using Page Query
In the example below, the pagetitle data stored in the siteMetadata object in gatsby-config.js is available for reference using a GraphQL query, and the query results can be mapped in a page (e.g., about.js) component.
// page query example
import React from "react"
import { graphql } from "gatsby"
import Layout from "../components/layout"

export default ( {data} ) => (
  <Layout>
    <h1>This {data.site.siteMetadata.pagetitle}</h1>
    <p>Content for about page. A react way!</p>
    <img src="" alt="" />
  </Layout>
)

// page query
export const query = graphql`
  query {
    site {
      siteMetadata {
        pagetitle
      }
    }
  }`
In the example above, the pagetitle data location in siteMetadata is referenced in the page heading of the page component (e.g., about.js). The graphql tag is imported at the top of the component, and the graphql query for pagetitle is appended at the bottom of the component. Its output in the browser is shown in the figure below.
Additional Information: Use a page query | Gatsby Tutorial
3: Using StaticQuery
Gatsby's StaticQuery was introduced as a new API in Gatsby V2 to allow non-page components to retrieve data with a GraphQL query. In the example below, its hook version – useStaticQuery – is used to reference the site title data stored in the siteMetadata object by referencing it in the src/components/header.js component, as described in the Gatsby tutorial.
//src/components/header.js
import React from "react"
import { useStaticQuery, Link, graphql } from "gatsby"
import "../styles/header.css"

export default ( ) => {
  const data = useStaticQuery(
    graphql`
      query {
        site {
          siteMetadata {
            title
          }
        }
      }
    `
  )
  return (
    <section className="header">
      <div className="site-title wrapper">
        <Link to="/">
          <h3>{data.site.siteMetadata.title}</h3>
        </Link>
        <ul className="menu">
          <Link to="/">Home</Link>
          <Link to="/about/">About</Link>
          <Link to="/contact/">Contact</Link>
        </ul>
      </div>
    </section>
  )
}
In the example above, useStaticQuery was imported from Gatsby at the top of the file. Then, using GraphQL, the site title query was defined inside the header.js component, and the title data from siteMetadata was referenced in the <h3> element. Its output displayed in the browser is shown in the figure below.
Additional Information: Use a StaticQuery | Gatsby Tutorial
4: Viewing in a Browser
Restart the development server, open up the browser at localhost:8000, and view the output from the above queries.
Browser display of data queried from gatsby-config.js siteMetadata.
Extending with Source Plugins
In this section, we will explore using the gatsby-source-filesystem plugin & GraphQL to pull data from a Gatsby site.
Step 1: Plugin Installation
Install the source plugin gatsby-source-filesystem at the root of the project with npm or yarn, as shown below:
#! install with npm
sudo npm install --save gatsby-source-filesystem

#! install with yarn
sudo yarn add gatsby-source-filesystem
Depending on the permission settings of the machine, use of sudo might not be necessary.
Step 2: Add & Configure the Plugin
Add the gatsby-source-filesystem plugin into the gatsby-config.js file and configure it as shown below:
// gatsby-config.js
module.exports = {
  siteMetadata: {
    // ..
  },
  plugins: [
    //add source plugin
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `src`,
        path: `${__dirname}/src/`,
      },
    },
    // add other plugins
  ],
}
Step 3: Explore GraphQL Query in the Browser
Restart the development server, open up the browser, and explore GraphQL at localhost:8000/___graphql.
allFileadded by the
gatsby-source-filesystemplugin & basic GraphQl query (left panel) & queried data output (right panel).
In the screenshot above, the allFile node in the left panel is available through the gatsby-source-filesystem plugin. Next, play around and make a simple query as shown above and run the query, which shows the available data from the /src folder in the right panel.
The Gatsby source plugins bring data into Gatsby's site from various external sources. Next we will use the above GraphQL query to build a simple page to list all the project files from the /src folder in a browser, as demonstrated in the Gatsby tutorial.
Step 4: Build Page Using GraphQL query
Using the GraphQL query from the previous section, a React page component src/pages/my-files.js was built as described in the Gatsby tutorial.
my-files.js component with appended GraphQL query (left panel) and displayed list of the site's files in the browser (right panel).
In the my-files.js component above, the GraphQL query created in the previous section was appended to the src/pages/my-files.js React page component (left panel) to print the list of files in a browser (right panel).
In the example above, the gatsby-source-filesystem plugin was demonstrated to fetch data stored in the project's /src folder. However, there are many other Gatsby source plugins to fetch data from external APIs like WordPress, Drupal, Instagram, Shopify, Google Sheets and many others.
Extending with Transformer Plugins
In the previous section, it was discussed how Gatsby source plugins can fetch data from local and external resources. The data fetched by Gatsby source plugins come in different file formats, and the Gatsby transformer plugins transform those raw data into Gatsby's data system to be usable in Gatsby sites. For example, files written in the popular markdown format to create posts in Gatsby can be transformed with the gatsby-transformer-remark plugin, which transforms markdown files (with YAML frontmatter) into HTML suitable to display in a browser. Likewise, the gatsby-source-wordpress plugin fetches data from the WordPress REST API and makes it available to use in Gatsby sites.
In this section, we will discuss how the gatsby-transformer-remark plugin & GraphQL are used to transform markdown files from the /src folder into HTML to display posts in our Gatsby site, as explained in the Gatsby tutorial.
Step 1: Create Posts with Markdown syntax
A few dummy posts are created in the src/posts/ folder in markdown format as shown below.
---
title: Hello World From Gatsby
date: "2019-7-11"
author: Mary Jones
---

This is my second test gatsby post. It should be written in markdown.

Link from Unsplash
<img src="" alt="" />

Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Additional information on markdown files: Basic Syntax | Markdown Guide
Step 2: Add gatsby-transformer-remark plugin
Add the gatsby-transformer-remark plugin and configure it in the gatsby-config.js file at the root of the project folder.
#! install plugin with npm
npm install --save gatsby-transformer-remark

#! install with yarn
yarn add gatsby-transformer-remark
Next, add the plugin to the gatsby-config.js file, just below the source plugin added earlier, and configure it as shown below.
// gatsby-config.js
module.exports = {
  siteMetadata: {
    // ..
  },
  plugins: [
    //add source plugin
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `src`,
        path: `${__dirname}/src/`,
      },
    },
    // add transformer plugins
    `gatsby-transformer-remark`,
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `posts`,
        path: `${__dirname}/src/posts/`,
      },
    },
  ],
}
In the example above, the resolve & options section of the transformer entry can be omitted too.
Step 3: View GraphQL Query in the Browser
Start the development server, open up the browser, and explore GraphQL at localhost:8000/___graphql.
allMarkdownRemark, frontmatter & nodes objects added by the gatsby-transformer-remark plugin (left-panel), an example query of the allMarkdownRemark node (middle-panel) and queried data output (right-panel).
In the example above, the allMarkdownRemark (left panel, above) and markdownRemark (not visible) nodes are added by the gatsby-transformer-remark plugin and are available for markdownRemark node queries. As done previously with the source plugin (above), some basic queries were made (middle panel) to display title, date, and author from the markdownRemark node (right panel).
Step 4: Create a list of Markdown files in the src/posts/ folder
For creating a list of posts (in markdown format) and displaying them on the front (home) page of our site, the code in the src/pages/index.js component requires refactoring (as shown below): the markdownRemark nodes are mapped, their content is listed, and the GraphQL query is then appended to the component. The list of three markdown posts is listed, as expected, in the home page (right panel, below).
The above list of markdown posts on the front (home) page of the project site looks great with title, date and excerpt displayed, but there is further work to do. This does not allow viewing the entire post content, nor is there a link to an individual post. To view separate pages for each blog post, we would need to create new post components to query each post individually. That is a hassle! But new pages can be created programmatically with Gatsby's Node createPages API.
Wrapping Up
In this learning-note post, we only scratched the surface of using Gatsby source & transformer plugins together with GraphQL queries to create simple page components. In the next learning-note post, how Gatsby can programmatically create pages from Gatsby data by mapping GraphQL query results using the Node createPages API will be discussed.
Next: Learning to Programmatically Create Pages in Gatsby
Useful resources
While preparing this post, I have referred to the following references extensively. Please refer to the original posts for more detailed information.
|
https://tinjurewp.com/jsblog/an-overview-of-gatsby-plugins-graphql/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
“imbade image to jupyter notebook” Code Answer’s
how to get image in jupyter notebook
python by M.U on Jul 22 2021
from IPython import display
display.Image("./image.png")
Source: ealizadeh.com
imbade image to jupyter notebook
whatever by Adventurous Addax on Nov 25 2021
<img src="images/grad_summary.png" style="width:600px;height:300px;">
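Another common way to show an image in a notebook cell (added here as a hedged sketch, not one of the answers above) is to load it with matplotlib:

import matplotlib.pyplot as plt
import matplotlib.image as mpimg

img = mpimg.imread("./image.png")   # path is an assumption; point it at your own file
plt.imshow(img)
plt.axis("off")
plt.show()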
how to attach a image in jupyter notebook
display image jupyter notebook
adding image to jupyter notebook
insert picture to jupyter noteobok
how to upload images on jupyter
insert image in notebook jupyter
internet image on jupyter notebook
image show jupyter
image show in jupyter notebook
image jupyter notebook documentation
images into jupyter notebook
pillow show image in jupyter
image jupyter notebook show
how to import image data in jupyter notebook
how to display image on pc on jupyter notebook
show image in notebook
extract images from jupyter notebook
insert image into jupyter notebook
show pil image in jupyter
show image in ipython notebook
disable images selenium python
how to change the scale of a picture in pygame
transform size of picture pygame
pygame scale image python
how to capture a single photo with webcam opencv
how to capture an image with web cam open cv
cv2 grayscale
cv2.cvtcolor grayscale
rotate picture in opencv2 python
python download image
download image python
cv2 add text
how to save image opencv
save an image in python as grayscale cv2
save thing in pickle python
python selenium get image src
add picture to jupyter notebook
how to get image in jupyter notebook
mp4 get all images frame by frame python
resize imshow opencv python
opencv show image jupyter
how to open webcam with python
webcam cv2
open cv read webcam
how to rezize image in python tkinter
cv2.rectangle
python download image from url
python saving a screentshot with PIL
pytorch plt.imshow
install imageio
window size cv2
python delete saved image
ndarray to pil image
rgb to grayscale python opencv
convert opencv image to pil image
how to return PIL image from opencv
cv2 crop image
selenium python activate new tab
show image in python
python resize image
python pdf to image
draw a single pixel using pygame
image to text python
add image to jupyter notebook in markdown
put text on image python
imshow grayscale
cv2 draw box
color to black and white cv2
how to add a image in tkinter
add text toimage cv2
plt.imshow grayscale
opencv grayscale to rgb
convert grayscale to rgb python
copy image from one folder to another in python
open image in numpy
how to read video in opencv python
how to play a video in cv2
display np array as image
from array to image show
python pil resize image
images from opencv displayed in blue
display cv2 image in jupyter notebook
extract images from mp4 python
Convert a Video in python to individual Frames
numpy to csv
python cv2 screen capture
extract frames from video python
pygame scale image
how to save matplotlib figure to png
python get all images in directory
show image jupyter notebook
draw bounding box on image python cv2
python bounding box on image
python turtle background image
bgr2gray opencv
convert image to grayscale opencv
show image in tkinter pillow
pil get image size
python get image dimensions
python ffmpeg
python convert png to jpg
module 'cv2.cv2' has no attribute 'imWrite'
cap.release() not working
cv2.cv2' has no attribute 'face_lbphfacerecognizer'
cv2.cv2 has no attribute face
opencv contrib pillow
load images pygame
check if image is empty opencv python
negative cv2
cv2 reverse contrast
open image from link python
Extract images from html page based on src attribute using beatutiful soup
plt.imshow not showing
how to place image in tkinter
python image black and white
get text from image python
cv show image python
cv2 show image
how to convert an image to ascii art using python
save image requests python
pickle save
cv2 videocapture nth frame
get all type of image in folder python
show jpg in jupyter notebook
how to trim mp4 with moviepy
read video with opencv
height width image opencv
how to change opencv capture resolution
how to draw image in tkinter
image in tkinter
tkinter load image
tkinter image
cv2 add circle to image
python clipboard to image
img read
python image read
skimage image read
read image python
click js selenium python
cv2 resize
jupyter notebook attach image
print image python
python insert image
how to convert img to gray python
camera lags when using with opencv
delete image with python
matplotlib savefig not working
pil image from numpy
python windows take screenshot pil
bgr to gray opencv
read binary image python
th2=cv2.adaptiveThreshold(img, 255 ,cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11 # no of block size , 2 #c)
pil save image
subplots matplotlib examples
edge detection opencv python
Embed picture in email using smtplib
how to locate image using pyautogui
split imagedatagenerator into x_train and y_train
OPENCV GET CONTOURS
cv2 load image
clibboard to png
save video cv2
images subplot python
python cv2 get image shape
fill pixels with zeros python opencv
open tiff image pyt
how to watermark a video using python
take pictures from webcam python
python pil get pixel
python link to jpg
get video duration opencv python
finding the format of an image in cv2
opencv python shrink image
pygame flip image
python pil invert image color
build image from dockerfile
width and height of pil image
import Image
pip install ffmpeg
python save figure as pdf
python cv2.Canny()
image capture from camera python
extract image from pdf python
how to set background color of an image to transparent in pygame
cv2 gaussian blur
python image to grayscale
library for converting text into image in python
how to get RGB value from pixel in screen live python
flipping an image with cv2
getting image from path python
how to blit image in pygame
python convert png to ico
python rotate pdf pages
python opencv create new image
pil img to pdf
python save image to pdf
python plot jpg image
python show png
pyperclip
pyperclip copy paste
how to import image in python
merge all mp4 video files into one file python
python resize image keep aspect ratio
import all images from folder python
python pil to greyscale
how to show webcam in opencv
opencv python convert rgb to hsv
read live video from usb opencv python
save image url to png python
python cv2 resize keep aspect ratio
crop image python
add images in readme github file
python pillow resize image
change image resolution pillow
pygame rotate image
python get pixel color
pil get image type
opencv waitkey example
convert files from jpg to png and save in a new directory python
find width and height of imported video frame opencv2
convert url to base64 image py
load img cv2
cv2.imwrite save to folder
im save to a bytes io python
pil crop image
cv2 save video mp4
how to sharpen image in python using cv2
python pillow convert jpg to png
python imread multiple images
glob read multiple images
cut part of video ffmpeg
find all color in image python
python check if image is corrupted
search google images python
download images python google
rotate image by specific angle opencv
cv2 yellow color range
ffmpeg python video from images
convert video to text python
plt imshow python
get difference of images python
download image from url python 3
couldn't recognize data in image file
save a file as a pickle
cvtcoloer opencv
adding text cv2
how to save an image with the same name after editing in python pillow module
getting pi in python
draw bounding box on image python opencv
resize image array python
write data to using pickle
how to read frame width of video in cv2
blender python save file
PIL image example
increase contrast cv2
image from wikipedia module in python
python send image in post request with json data
create pdf from images python
clahe opencv
how to read hdf5 file in python
get video width and height cv2
savefig resolution
export high resolution .png matplotlib
matploltib increase resolution
python code for where to save the figures
import pil pycharm
mutable and immutable in python
resize multiple images to same size python
how to convert into grayscale opencv
python tkinter get image size
python how to convert csv to array
rotate image python
pillow image from array
python image library
compress image pillow
python inspect source code
how to use cv2.COLOR_BGR2GRAY
save image from jupyter notebook
how to save a pickle file
python logo png
how to change color of image in opencv
pyqt5 image change size
save_img keras
how to make images in python
make white image numpy
make a white image numpy
plt.imread python
save object pickle python
python opencv imresize
cv2 read rgb image
how to write a script to display an image in python
cv2 get framerete video
python png library
how to convert csv into list
python docx extract image
feature to determine image too dark opencv
get image from url python
draw picture in python libraries
cv2 check if image is grayscale
displaying cv2.imshow on specific window position
rotate image pyqt5
export an excel table to image with python
convert matplotlib figure to cv2 image
skimage python
pillow create image
import ImageGrab
draw pixel by pixel python
opencv set window size
pyqt5 image
flask decoding base 64 image
how to play mp3 file form pygame module
upload image to s3 python
how to convert an image to matrix in python
pil resize image
pygame size of image
how to cut image python
cv2 rgb to bgr
numpy savetext
Play Video in Google Colab
how to draw a single pixel in pyglet
get resolution of image python
how to resize windows in python
add metadata png PIL
numpy empty image
pillow rgb to grayscale
import numpy import cv2 cv2.imshow('image',img) cv2.waitKey(0)
add image pptx python
python draw rectangle on image
pil python image
make a gif with images python
smtplib send pdf
pil.jpegimageplugin.jpegimagefile to image
cv2 blue color range
python image to video
generate binay image python
cv2.namedwindow
Cast image to float32
set pixel pygame
pyhton image resize
read image and resize
make gif from images in python
OpenCV: FFMPEG: tag 0x4745504d/'MPEG' is not supported with codec id 2 and format 'mp4 / MP4 (MPEG-4 Part 14)'
savefig matplotlib python
imread real color cv2
pil image resize not working
csv manipulation python
converting parquet to csv python
python get pixel
cropping image google colab
rgb to grayscale python
plot path in pillow python
opencv resize image
pygame image get height
cv2.videocapture python set frame rate
get height of image in pygame
discord embed add image
soup.find_all attr
transform image to rgb python
text to image python
éliminer le background image python
pil saves blue images
python cv2 convert image to binary
opencv save image rgb
wav file to array python
optimize images using pillow
telethon send image
python cv2 read image grayscale
resize cmd using python
cv2.imwrite path
opencv export image
rasperry pi camera
python start process in background and get pid
python pil image flip
Video to text convertor in python
get image image memeory size in url inpyton requests
pyplot rectangle over image
destory image in pygame
how to terminate a program cv2 python
replace transparent pixels python
get coordinates of an image from a pdf python
pil normalize image
beautifulsoup get img alt
size pilimage
python vim auto indent on paste
image hashing
python opencv check image read
example images
module 'cv2.cv2' has no attribute 'videowriter'
size.width>0 && size.height>0 in function 'cv::imshow'
relative text size put text cv2
take screenshot of video python
load png to python
auto py to exe with multiple images
download video to from pytube with a special name
convert image to binary python
Plotting multiple images
blender change text during animation
opencv documentation
histogram image processing python
python 3.9 beautifulsoup kurulumu
cv2 videowriter python not working
how to standardize the image data to have values between 0 and 1
python how to make a png
convert rgb image to binary in pillow
how to save frames in form of video in opencv python
ocr image table to excel python
how to find left top width and height on an image using python
python png to svg
cv2 pink color range
saving 3D plots on python as png
Optimize images in python using pillow
How to track hands python opencv/mediapipe
compute slice distance from image position
python savefig full screen
save artist animation puython
background subtraction params opencv python
earthpy
api to find information about superheros
image resolution extracting python
arabic text recognition from pdf using python
how to do downsampling in python
picture as background of seaborn plot python
python visualize fft of an image
create animation from sequence of image python
python pillow cut image in half
download button image streamlit
python get image's text from web request
Convert MATLAB to Python Reddit
how to capture video in google colab with python
programme phyton pour realiser un programme qui transforme une image en niveau de gris
get random superhero images
python look for image on screen
python numpy + opencv + overlay image
make image fit to size tkinter
how to change pi hostname in python file
play video in colab
load image metadata with pil
python cv2 blob detection seg fault
remove bg from pic using pthon
pyfcm image
python image layers
Circular heatmap python
Django forms I cannot save picture file
fouier transformation in python open cv
rotate an image python keras
how to open pickle file
sending image over ros
features and image recongnition
how to load images from folder in python
scikit image 0.16.2
save gif python
how to resize image with pillow in django
cv2 leave only the biggest blob
video steganography using python
find the middle of the document in the image opencv
add values to add value in a matplotlib image
correlation between images python
how to flatten the image dataset
pil format multiline text
how a 16 mp camera looks like
cv2 opencv-python imshow while loop
how to add picture in phyton
add text to jpg python
json payload python function
Python turtle (built in shape) image size
hwo to download video using pytube highest resolution
how to insert image in python
OpenCV(3.4.11) Error: Assertion failed (_img.rows * _img.cols == vecSize) in CvCascadeImageReader::PosReader::get
opening & creating hdf5 file
Python file write all the bounding box coordinates using OpenCV
wsl python image
images download
opencv cartoonizer script
python convert images to pdf
convert an image to matrix in python
how to open camre aopencv
Qt convert image to base64
download image from url python
cv2 warpaffine rotate
description of imdb dataset python
python cv2 imwrite
read image from a folder python
python cv2 canny overlay on image
picobot python
python cv2 unblur
how to get image object from array in python
récupérer texte d'une image python
python loading image file requires absolute path
plt.imshow typeerror invalid dimensions for image data
medium how to interact with jupyter
python opendatasets
combine picture and audio python
how to make turtle shape image smaller
Opencv Webcam
pil
convert .tiff image stack to unit8 format
transform jpg image into array for conv2d
scipy get frequencies of image
captcha.image install in python
Image loader RGB transform
Reason: "broken data stream when reading image file" in jupyter notebook
python immutable dataclass
cv2 open blank window
How to hyperlink image in blender
list images in directory python
py urllib download foto
python image resize
image_file = models.ImageField(upload_to='images') image_url = models.URLField()
python plot outline imdbpy
jpg image in tkinter title
pyqt5 image center
python declare immutable variable
pyqt stretch image
crop image using opencv with height and width
ope pickle file
io.imsave 16 bit
python , cv2 change font type
resize interpolation cv2
pytesseract.image_to_data into pandas dataframe
dataset analysis in python photo photoelectric effect
pygame mirror image
pixmap in python
picture plot
Ipython.display latex in the IDE like spyder
opencv loop video
conversion un type image en array python
can't use "pyimage2" as iconphoto
pillow python update image
cv2 put font on center
change size of image and fir it into numpy array opencv
data exfiltration icmp
is complex datatype immutable in python
convert fisheye video to normal python
how to pull images from android device from usb in python
how to add watermark in mp4 video using python
convert numpy array to HSV cv
wand image resize
markdown image embed url
Insert Multiple Images to Excel with Python
how to download multiple googel images using python
python save base64 temp file
color to black and white opencv
how to capture multiple screens with ImageGrab
can i save additional information with png file
how to import pil in spyder
python cv2 how to update image
make image to string to use in tkinter
how to detect the body with cv2
cx_freeze include images in specific path
appending hdf5 files
save image to file from URL
Lcd screen 3.5 inch to pi
generate 3 pages pdf reportlab
reportlab drawimage issues with png transparency background
cv2.filter2D(image, -2, kernel_3x3)
python ffmpeg get video fps
Convert files to JPEG
ffmpeg python
python web scraping project
python image crop
opencv python rgb to hsv
how to connect ip camera to opencv python
csv reader url
draw circle opencv
how to transcode a video in python using ffmpeg
PyPDF2 Python PDF
text detection from image using opencv python
Convert .tif images files to .jpeg in python
how to reduce the image files size in python
detect grayscale image in python opencv
image analysis python
converting multipage tiff to pdf python
python pillow crop image
img_sm = pygame.transform.scale(img, (32, 32))
image completion inpainting with python
bad resolution in the exported RDKit images
Python Script to check how many images are broken
extract x y coordinates from image in pdf python
ModuleNotFoundError: No module named 'pip._internal'
python write to file
python iterate dictionary key value
python virtual environment
dataframe create
pandas dataframe
python random
python read file
install opencv python
python read json file
how to check python version
rename columns pandas
rename columns in python
python input
how to add a column to a pandas df
random number python
python check if file exists
datetime python
reverse list python
python loops
python read file line by line
csv python write
string to date python
time format conversion in python
string to datetime convert
convert date string to date time string python
python read csv
update tensorflow pip
python logging to file
pandas loop through rows
df iterrows pandas
convert string to list python
'utf-8' codec can't decode byte 0x85 in position 715: invalid start byte
code how pandas save csv file
how to find item in list python without indexnig
python find item in list
index in list
python find index by value
flask app example
flask app starter
simple flask app
how to reverse a string in python
python loop through list
get current date in python
python current date
get current date datetime
python to uppercase
how can I sort a dictionary in python according to its values?
get text from txt file python
python merge list into string
what is join use for in python
Concatenate Item in list to strings
python create directory
matplotlib histogram
django rest
pip install django rest framework
django rest framework
execute command in python script
pickle.load python
sort dataframe by column
matplotlib install
No module named 'matplotlib'
python pip install matplotlib
install matplotlib
python install matplotlib
Import "matplotlib" could not be resolved django
list to dict python
seaborn figure size
plt figsize
increase figure size in matplotlib
django admin create superuser
how to create a superuser in django
django createsuperuser
selenium webdriver python
selenium getting started
selenium python
python main
how to define main.py function example
Python main file with class
pandas dataframe from dict
matplotlib plot
reset index pandas
python async await
iterate over rows dataframe
for row in column pandas
python loop through dictionary
'pip' is not recognized as an internal or external command, operable program or batch file.
open text file in python
merge two dataframes based on column
if substring not in string python
python string contains substring
how to replace na values in python
how to replace null values in pandas
replace nan in pandas
save dataframe as csv
télécharger dataframe python extract
save pandas into csv
export dataset from python to csv
saving a pandas dataframe as a csv
python join list to string
pyautogui install
round to two decimal places python
send email python
python gmail
matplotlib legend
sleep in py
py sleep function
python sleep
python get command line arguments
Write python program to take command line arguments (word count).
pd.to_datetime python
python super
run python file using python code
scikit learn linear regression
python floor
how to check the type of a variable in python
print type(x) in python
python new line
how to update python
python make txt file
python remove last character from string
how to import matplotlib in python
import matplotlib.pyplot as plt
flask how to run app
train test split sklearn
python iterate through dictionary
replace column values pandas
dotenv python
import python module from another directory
python requirements.txt
how to run requirements.txt in python
installing django
string palindrome in python
palindrome python
what is palindrome number in python
export a dataframe to excel pandas
creating a new enviroment in conda
anaconda create new environment
anaconda create environment python version
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 6148: character maps to <undefined>
python append to file
get request python datetime from string
python string to datetime
python convert string to date
install pip on windows 10 python 3.9
import csv file in python
No module named env.__main__; 'env' is a package and cannot be directly executed
how to read csv from local files
load csv file using pandas
how to install python libraries
error tokenizing data. c error
ParserError: Error tokenizing data. C error: Expected 1 fields in line 87, saw 2
python get filename from path
install python on ubuntu
count values pandas
value_counts pandas
value_counts() in pandas
python shuffle list
python find in list
python version
convert into date python
python rename file
save dataframe to csv
python calculate factorial
ModuleNotFoundError: No module named 'tkinter'
sudo apt-get install python3-tk not working
how to install tkinter
install python in mac
python download for mac
check tensorflow version
how to check current version of tensorflow
python find the key with max value
python round up
how to define main in python
python main function
sort list of dictionaries by key python
sort list of dictionaries python by value
get date and time python
python list files in current directory
python date and time
check if anything in a list is in a string python
sort a dataframe by a column valuepython
sort_values
order pandas dataframe by column csv reader new env in anaconda
how to create virtual environment in anaconda
panda dataframe to list
new env in conda
How to create an env to use in vs code using anaconda
how to convert dataframe to list in python
create an environment in conda
how to create miniconda environment
pandas to list
conda env
conda create environment
dataframe to list
how to install pyaudio in python
how to install pyaudio
python pip install pandas
ImportError: No module named pandas
python install pandas
pip install pandas
install pandas
pandas python install
pip pandas
replacing values in pandas dataframe
set index to column pandas
convert array to dataframe python
which python mac
Instead, it is recommended that you transition to using 'python3' from within Terminal.
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe7 in position 5: invalid continuation byte
how to convert a list into a dataframe in python
bubble sort python
change dataframe column type
python requests header
how to print error in try except python
Curl in python
convert a dictionary into dataframe python
pip install specific version
sklearn random forest
print no new line python
logistic regression algorithm in python
count nan pandas
dataframe find nan rows
python check if folder exists
python removing \n from string
python create environment variable
python reduce()
python read json
how to use timeit in python 3
how to delete the last item in a list python
all permutations python
permutations python
django create app command
create new django app
python if __main__
Pandas groupby aggregate multiple columns
dataframe groupby multiple columns
python square root
install nltk in python
nltk pip
nltk in python
convert column in pandas to datetime
if __name__ == '__main__'
main in python
python how to read a xlsx file
list files in directory python
python os remove file
flask install
virtual environment python
pip install virtualenv windows
create virtual env
python create uuid
how to use virtual environment python
print key of dictionary python
python cheat sheet
python replace letters in string
pipenv
pandas read excel
numpy merge arrays
install flask on linux mint for python3
how to install flask
ImportError: No module named flask
Flask – Environment
install flask
python index of max value in list
convert list of strings to ints python
pip install python
python pip install
install pip python 3.9
how to check datatype of column in dataframe python
python install command in linux
install python 3.9 ubuntu
python time delay
python clear console
create a dataframe python
pandas replace values in column based on condition
change pandas column value based on condition
how to execute a cmd command in python
code for test and train split
virtual env in python
create new thread python
python ignore runtimewarning
python suppress warnings in function
how to avoid deprecation warning in python
ignore warnings python
ignore warnings
turn off warnings
how to open csv file in python
drop null rows pandas
how to create progress bar python
how to run a .exe through python
open an exe file using python
urllib python
decode base64 python
take off character in python string
get files in directory python
how to remove all characters from a string in python
wait function python
how to wait in python
install numpy
installing python packages in visual studio code
python datetime string
python play audio snippet
play music from python
play videos in python
mean of a column pandas
python get actual timestamp
python print timestamp
rename column name pandas dataframe
static dirs django
add static file in django
django new static files directory
matplotlib title
registering static files in jango
declare numpy zeros matrix python
python sort list in reverse
how to sort list in descending order in python
ndarray to list
python directory contains file
python os if file exists
datetime python timezone
hypixel main ip
making log files in python
creating venv python3
pyvenv.cfg file download
french to english
traduttore
google traduttore
group by count dataframe
sklearn plot confusion matrix
main function python\
create folder python
create data dir in python
timestamp to date python
python get numbers from string
python only numbers in string
python remove letters from string
json dump to file
ModuleNotFoundError: No module named 'pandas'
get list of folders in directory python
how to use random in python
list to json python
python format datetime
python wait 1 sec
create dataframe with column names pandas
pandas dataframe creation column names
python how to use input
how to check if datapoint is in pandas column
find nan value in dataframe python
determine if number is prime python
python primality test
export pandas dataframe as excel
list comprehension python if else
python sort list in reverse order
python library to make qr codes
empty dataframe
update python ubuntu
remove punctuation from string python
python move file
how to create migrations in django
Function to a button in tkinter
change list to int in python
python-binance
get list of unique values in pandas column
get working directory python
find root directory of jupyter notebook
how to create dataframe in python
copy text to clipboard python
calculating mean for pandas column
how to automatically copy an output to clipboard in python
copy to clipboard python
get IP address python
drop columns pandas
how to check if a number is odd python
comparing two dataframe columns
convert a data frame column values to list
how to get a dataframe column as a list
python read yaml
qdate to date
how to set class attributes with kwargs python
Write a Python program to count total number of notes in given amount.
how to create a datetime object in python
online kivy
how to print alternate numbers in python
lambda in python
python pandas apply to one column
python function to check list element ratio with total data
how to set up pygame
python find index by value
Square of numbers in non-decreasing order
colorbar remove tick lines and border
retrieve row by index pandas
python regex for a url
stack overflow python ip check
>>> import numpy Illegal instruction (core dumped)
animal quiz game in python
enumerate in django templte
grouped box plot in python
python import could not be resolved
dict python
torch mse loss
django clodinarystorage
re.match python
datetime date of 10 years ago python
pandas query on datetime
python get screen size
python how to get the last element in a list
how to get latitude and longitude from address in python
create folders in python
identify null values
get key from value dictionary py
how to run requirements.txt in python
install django rest framework
change markersize in legend matplotlib
Tuple: Tuple cannot change
colours seaborn
only keep few key value from dict
python copy sequence
what ide or code editor should i use for python
Removing punctuation with NLTK in Python
pygame alien example
python better while loop that count up
create new thread python
raise keyerror(key) from none os.environ
drop row pandas column value not a number
python convert to percentage
dict to array of string python
how to set required drf serialzier
python sleep
A0 = dict(zip(('a','b','c','d','e'),(1,2,3,4,5)))
when to use finally python
python float to fraction
horizontal bar plot matplotlib
make a script run itself again python
how to divide string in python
extract specific key values from python dictionary
resample and replace with mean in python
initialize dictionary to zero in python
how to add multiple items in a list in python
factors of a number with memoization
how to go up levels in path python
dictionary from two list
scikit learn linear regression
python dictionary dot product
python script that executes at time
python Decompress gzip File
extract minutes from timedelta python
kivy bind when text changes
plotly dash datatable column width
List comprehension - list files with extension in a directory
extend stack python
which type of programming does python support?
subplots whitespace
compile python to pyc
insert blank row in data frame
select rows in python
how to install modules to a specific version of python
creating venv python3
how to capitalize first letter in python in list using list comprehension
python sort
print all unique values in a dictionary
No module named 'libtorrent'
command errored out with exit status 1:
pandas ta quick start example
Mat.at(row,col) Opencv
train test split pandas
import pandas
choice without replacement python
wie printe ich in python
smtplib login
classification cross validation
hwo to except every error in python try statemen
python dict ffrom lists
-- python
convert integer unix to timestamp python
python fill table wiget
matplotlib display axis in scientific notation
sentiment analysis french python
convert arrary to int
mechanize python #4
flask blueprints
hash table in python
mayeutica
pandas filter rows that are in a list
how to convert integer to binary string python
how to print multiple integers in python in different line
python Ordered dict to dict
no module named pyplot
fichier python pour brython
p-norm of a vector python
create new column with length of old column value python
python repeating scheduler
django admin image
how to read a csv file in python
[] python
how to check which submit button is clicked in flask wtf
docstrinfs pyt
how to dynamically search for a class variable in python
os.execl
pytest local modules
access kwargs in template django
add widget in pyqt
registering static files in jango
torch concat matrix
python print
write geopands into postgres python
how to know the length of a dataset tensorflow
fix python code
list reverse method in python
python type hint array of objects
easiest way to position labels in tkinter
This branch is for urllib3 v1.22 and later
turtle meaning
create np nan array
the python libraries to master for machine learning
Difference between loc and iloc
list comprehension python one line
operation that returns True if all values are equal
python adding values to existing key
python copy file create intermediate directories
qspinbox disable wheel python
remove item from list if it exists python
how to add up a list in python
python string remove whitespace
pandas rename index values
python fibbonacci
how to check if character in string python
python console ending multiline input
how to refresh windows 10 with python
number of rows or columns in numpy ndarray python
how to access item in list private in python
code to calculate dice score
python remove all except numbers
html to docx python
pandas index dataframe with integers
ipython.display clear_output
mode with group by in python
how to download nltk in python
créer fonction python
python define propery by null
get false positives from confusoin matrix
gpt2 simple restore_from
generate random string with special characters and numbers in python
convert python pandas series dtype to datetime
pandas create a new column based on condition of two columns
How to colour a specific cell in pandas dataframe
multiple assessment in python
how to make a multiple choice quiz in python with tkinter
ascii tables in python
system calls in python
count occurrences of value in array python
python sympy solve equation equal to 0
ipywidget datepicker
accessing python dictionary values with dot
BusyIndicator Import
pickle a dictionary
rabbitmq pika username password
consider a string note: "welcome" statment will rais error
calculate quartil python
python youtube_dl custom path
pandas reading each xlsx file in folder
python index of lowest value in list
no connection could be made because the target machine actively refused it python
null=true django
print a random word from list python
python loops
choromap = go.Figure(data=[data], layout = layout)
webbrowser.google.open python
python comment header
how to decrease size of graph in plt.scatter
python argparse option groups
how to select python 3 interpreter in linux
python min length list of strings
Return an RDD with the values of each tuple
how to save date in cookie Python
increase colorbar ticksize
what if discord.py python add-in does not work
hex to string
.
|
https://www.codegrepper.com/code-examples/python/imbade+image+to+jupyter+notebook
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
simplevfs 0.2.1
Minimalist virtual file system for game development
To use this package, run the following command in your project's root directory:
Manual usage
Put the following dependency into your project's dependencies section:
SimpleVFS
Introduction
SimpleVFS is a fork of
D-GameVFS: <>_ that updates the code to the latest changes of the
language, and attempts to polish and finish the previous work.
D:GameVFS, and by extension SimpleVFS, is a minimalist open source virtual file system library for the D programming language oriented at game developers.
Provided functionality is very basic - files and directories can be created, read and written, but not deleted. There are no security features - e.g. SimpleVFS can't handle a situation when a file it's working with is deleted outside the program. Only files in a physical file system are supported at the moment. There is no archive support right now.
Features
- File system independent, easy to use API for file/directory manipulation.
- No external dependencies.
- Seamless access to multiple directories as if they were a single directory.
- Easy to extend with custom file system backend.
- There is no support for ZIP or similar archive formats at the moment.
- There is no support for deleting files/directories, and none is planned.
- There are no security features and none are planned.
Directory structure
=============== =======================================================================
Directory Contents
=============== =======================================================================
./ This README file, utility scripts.
./docs API documentation
./source Source code.
./examples Code examples.
=============== =======================================================================
Getting started
Install the DMD compiler
Digital Mars D compiler, or DMD, is the most commonly used D compiler. You can find its
newest version
here <>_. Download the version of DMD
for your operating system and install it.
.. note::
   Other D compilers exist, such as GDC <>_ and LDC <>_.
Simple SimpleVFS project
Create a directory for your project. To have something for D:GameVFS to work with,
create subdirectories
main_data and
user_data in the project directory. In these
directories, create some random files or subdirectories. Create a file called
main.d in your project directory. Paste the following code into the file:
.. code-block:: d
    import std.stdio;
    import std.typecons;
    import dgamevfs;

    void main()
    {
        // Two filesystem directories, one read-only and the other read-write.
        auto main = new FSDir("main", "main_data/", No.writable);
        auto user = new FSDir("user", "user_data/", Yes.writable);

        // Stack directory where "user" overrides "main".
        auto stack = new StackDir("root");
        stack.mount(main);
        stack.mount(user);

        // Iterate over all files recursively, printing their VFS paths.
        foreach(file; stack.files(Yes.deep))
        {
            writeln(file.path);
        }

        VFSFile file = stack.file("new_file.txt");

        // Creates "new_file" in "user" (which is on top of "main" in the stack).
        file.output.write(cast(const void[])"Hello World!");

        // Read what we've written.
        auto buffer = new char[file.bytes];
        file.input.read(cast(void[]) buffer);
        writeln(buffer);
    }
Code for this example can be found in the
examples/getting_started directory.
See the API documentation for more code examples.
Explanation of the code
We start by importing dgamevfs._ which imports all needed D:GameVFS modules.
D:GameVFS uses the Flag template instead of booleans for more descriptive parameters
(such as
Yes.writable instead of
true). You need to import std.typecons to use
Flag.
We create two FSDirs - physical file system directory objects, which will be called
main and
user in the VFS and will represent the
main_data and
user_data
directories which we've created in our project directory. We construct
main as
a non-writable directory - it's read-only for the VFS.
Next, we create a StackDir and mount() our directories to it. StackDir works with
mounted directories as if they were a single directory - for instance, reading
file.txt from the StackDir, will first try to read
user_data/file.txt, and if
that file does not exist,
main_data/file.txt. Files in directories mounted later
take precedence over those mounted earlier.
StackDir makes it possible, for example, to have a main game directory with common files and a mod directory overriding some of those files.
Then we iterate over all files in the StackDir recursively (using the
Yes.deep
argument) - including files in subdirectories. Path of each file in the VFS is printed.
You should see in the output that the files' paths specify
stack as their parent
since
main and
user are mounted to
stack. (Note that the paths will refer to
stack as parent even if iterating over
main and
user - as those are now
mounted to
stack.)
Then we get a VFSFile - D:GameVFS file object - from the
stack directory. This
file does not exist yet (unless you created it). It will be created when we write to it.
To obtain writing access, we get the VFSFileOutput struct using the VFSFile.output() method. VFSFileOutput provides basic output functionality. It uses reference counting to automatically close the file when you are done with it. Since we just want to write some simple text, we call its write() method directly. VFSFileOutput.write() writes a raw buffer of data to the file, similarly to fwrite() from the C standard library.
Note that we're working on a file from a StackDir. StackDir decides where to
actually write the data. In our case, the newest mounted directory is
user, which is
also writable, so the data is written to
user_data/new_file.txt.
In the end, we read the data back using the VFSFileInput class - input analog of VFSFileOutput - which we get with the VFSFile.input() method. We read with the VFSFileInput.read() method, which reads data to provided buffer, up to the buffer length. We determine how large buffer we need to read the entire file with the VFSFile.bytes() method. The buffer might also be larger than the file - read() reads as much data as available and returns the part of the buffer containing the read data.
For more details about SimpleVFS API, see the
documentation <>_.
Compiling
We're going to use dub, which we installed at the beginning, to compile our project.
Create a file called
dub.json with the following contents:
.. code-block:: json
    {
        "name": "getting-started",
        "targetType": "executable",
        "sourceFiles": ["main.d"],
        "mainSourceFile": "main.d",
        "dependencies": {
            "simplevfs": { "version": "~>0.2.1" }
        }
    }
This file tells dub that we're building an executable called
getting-started from
a D source file
main.d, and that our project depends on SimpleVFS 0.2.1 or any
newer, bugfix release of SimpleVFS 0.2. DUB will automatically find and download the
correct version of SimpleVFS when the project is built.
Now run the following command in your project's directory::
dub build
dub will automatically download SimpleVFS and compile it, and then it will compile
our program. This will generate an executable called
getting-started or
getting-started.exe in your directory.
License
D:GameVFS was created by Ferdinand Majerech aka Kiith-Sa kiithsacmp[AT]gmail.com .
SimpleVFS is a fork created by Luis Panadero Guardeño aka Zardoz luis.panadero[AT]gmail.com .
The API was inspired by the VFS API of the
Tango library <>_.
D:GameVFS was created using Vim and DMD on Debian, Ubuntu and Linux Mint as a VFS
library in the
D programming language <>_.
- Registered by Luis Panadero Guardeño
- 0.2.1 released 2 years ago
- Zardoz89/SimpleVFS
- github.com/Zardoz89/SimpleVFS
- Boost 1.0
- Authors:
-
- Sub packages:
- simplevfs:getting-started, simplevfs:vfsfile
- Dependencies:
- none
- Download Stats:
0 downloads today
0 downloads this week
0 downloads this month
2 downloads total
- Score:
- 0.0
- Short URL:
- simplevfs.dub.pm
|
https://code.dlang.org/packages/simplevfs
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
NAME
nda — NVMe Direct Access device driver
SYNOPSIS
device nvme
device scbus
DESCRIPTION
The nda driver provides support for direct access devices, implementing the NVMe command protocol, that are attached to the system through a host adapter supported by the CAM subsystem.
SYSCTL VARIABLES
The following variables are available as both sysctl(8) variables and loader(8) tunables:
- hw.nvme.use_nvd
- The nvme(4) driver will create nda device nodes for block storage when set to 0, and nvd(4) device nodes for block storage when set to 1. See nvd(4).
- kern.cam.nda.nvd_compat
- When set to 1, nvd(4) aliases will be created for all
nda devices, including partitions and other geom(4) providers that take their names from the disk's name. nvd devices will not, however, be reported in the kern.disks sysctl(8).
- kern.cam.nda.sort_io_queue
- This variable determines whether the software queued entries are sorted in LBA order or not. Sorting is almost always a waste of time. The default is to not sort.
- kern.cam.nda.enable_biospeedup
- This variable determines if the
nda devices participate in the speedup protocol. When the device participates in the speedup, then when the upper layers send a BIO_SPEEDUP, all current BIO_DELETE requests not yet sent to the hardware are completed successfully immediately, without sending them to the hardware. Used in low disk space scenarios when the filesystem encounters a critical shortage and needs blocks immediately. Since trims have maximum benefit when the LBA is unused for a long time, skipping the trim when space is needed for immediate writes results in little to no excess wear. When participation is disabled, BIO_SPEEDUP requests are ignored.
- kern.cam.nda.max_trim
- The maximum number of LBA ranges to be collected together for each DSM trims send to the hardware. Defaults to 256, which is the maximum number of ranges the protocol supports. Sometimes poor trim performance can be mitigated by limiting the number of ranges sent to the device. This value must be between 1 and 256 inclusive.
The following variables report per-device settings, and are read-only unless otherwise indicated. Replace N with the device unit number.
- kern.cam.nda.N.rotating
- This variable reports whether the storage volume is spinning or flash. Its value is hard coded to 0 indicating flash.
- kern.cam.nda.N.unmapped_io
- This variable reports whether the
nda driver accepts unmapped I/O for this unit.
- kern.cam.nda.N.flags
- This variable reports the current flags.
- kern.cam.nda.N.sort_io_queue
- Same as the kern.cam.nda.sort_io_queue tunable.
- kern.cam.nda.N.trim_ticks
- Writable. When greater than zero, hold trims for up to this many ticks before sending to the drive. Sometimes waiting a little bit to collect more trims to send at one time improves trim performance. When 0, no delaying of trims is done.
- kern.cam.nda.N.trim_goal
- Writable. When delaying a bit to collect multiple trims, send the accumulated DSM TRIM to the drive.
- kern.cam.nda.N.trim_lbas
- Total number of LBAs that have been trimmed.
- kern.cam.nda.N.trim_ranges
- Total number of LBA ranges that have been trimmed.
- kern.cam.nda.N.trim_count
- Total number of trims sent to the hardware.
- kern.cam.nda.N.deletes
- Total number of BIO_DELETE requests queued to the device.
NAMESPACE MAPPING
Each nvme(4) drive has one or more namespaces associated with it. One instance of the nda driver will be created for each of the namespaces on the drive. All the nda nodes for a nvme(4) device are at target 0. However, the namespace ID maps to the CAM lun, as reported in kernel messages and in the devlist sub command of camcontrol(8).
Namespaces are managed with the ns sub command of nvmecontrol(8). Not all drives support namespace management, but all drives support at least one namespace. Device nodes for nda will be created and destroyed dynamically as namespaces are activated or detached.
FILES
- /dev/nda*
- NVMe storage device nodes
SEE ALSO
cam(4), geom(4), nvd(4), nvme(4), gpart(8)
HISTORY
The nda driver first appeared in FreeBSD 12.0.
AUTHORS
Warner Losh <imp@FreeBSD.org>
|
https://manpages.debian.org/testing/freebsd-manpages/nda.4freebsd.en.html
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
PEP 680 -- tomllib: Support for Parsing TOML in the Standard Library
Contents
- Abstract
- Motivation
- Rationale
- Specification
- Maintenance Implications
- Backwards Compatibility
- Security Implications
- How to Teach This
- Reference Implementation
- Rejected Ideas
- Basing on another TOML implementation
- Including an API for writing TOML
- Assorted API details
- Controlling the type of mappings returned by tomllib.load[s]
- Removing support for parse_float in tomllib.load[s]
- Alternative names for the module
- Previous Discussion
- Appendix A: Differences between proposed API and toml
Abstract
This PEP proposes adding the tomllib module to the standard library for parsing TOML (Tom's Obvious Minimal Language).
Motivation
TOML is the format of choice for Python packaging, as evidenced by PEP 517, PEP 518 and PEP 621. This creates a bootstrapping problem for Python build tools, forcing them to vendor a TOML parsing package or employ other undesirable workarounds, and causes serious issues for repackagers and other downstream consumers. Including TOML support in the standard library would neatly solve all of these issues.
Further, many Python tools are now configurable via TOML, such as black, mypy, pytest, tox, pylint and isort. Many that are not, such as flake8, cite the lack of standard library support as a main reason why. Given the special place TOML already has in the Python ecosystem, it makes sense for it to be an included battery.
Finally, TOML as a format is increasingly popular (for the reasons outlined in PEP 518), with various Python TOML libraries having about 2000 reverse dependencies on PyPI (for comparison, requests has about 28000 reverse dependencies). Hence, this is likely to be a generally useful addition, even looking beyond the needs of Python packaging and related tools.
Rationale
This PEP proposes basing the standard library support for reading TOML on the third-party library tomli (github.com/hukkin/tomli).
Many projects have recently switched to using tomli, such as pip, build, pytest, mypy, black, flit, coverage, setuptools-scm and cibuildwheel.
tomli is actively maintained and well-tested. It is about 800 lines of code with 100% test coverage, and passes all tests in the proposed official TOML compliance test suite, as well as the more established BurntSushi/toml-test suite.
Specification
A new module tomllib will be added to the Python standard library, exposing the following public functions:
def load(
    fp: SupportsRead[bytes],
    /,
    *,
    parse_float: Callable[[str], Any] = ...,
) -> dict[str, Any]: ...

def loads(
    s: str,
    /,
    *,
    parse_float: Callable[[str], Any] = ...,
) -> dict[str, Any]: ...
tomllib.load deserializes a binary file-like object containing a TOML document to a Python dict. The fp argument must have a read() method with the same API as io.RawIOBase.read().
tomllib.loads deserializes a str instance containing a TOML document to a Python dict.
The parse_float argument is a callable object that takes as input the original string representation of a TOML float, and returns a corresponding Python object (similar to parse_float in json.load). For example, the user may pass a function returning a decimal.Decimal, for use cases where exact precision is important. By default, TOML floats are parsed as instances of the Python float type.
The returned object contains only basic Python objects (str, int, bool, float, datetime.{datetime,date,time}, list, dict with string keys), and the results of parse_float.
tomllib.TOMLDecodeError is raised in the case of invalid TOML.
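To make the proposed API concrete, a minimal usage sketch follows; the TOML content and the pyproject.toml file name are illustrative, not part of the specification:

import tomllib
from decimal import Decimal

document = """
[project]
name = "example"
pi = 3.14
"""

# Parse from a string; TOML floats become Python floats by default.
data = tomllib.loads(document)
assert data["project"]["name"] == "example"

# Parse from a binary file-like object, using Decimal for exact float precision.
with open("pyproject.toml", "rb") as fp:
    try:
        config = tomllib.load(fp, parse_float=Decimal)
    except tomllib.TOMLDecodeError as exc:
        print(f"Invalid TOML: {exc}")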
Note that this PEP does not propose tomllib.dump or tomllib.dumps functions; see Including an API for writing TOML for details.
Maintenance Implications
Stability of TOML
The release of TOML 1.0.0 in January 2021 indicates the TOML format should now be officially considered stable. Empirically, TOML has proven to be a stable format even prior to the release of TOML 1.0.0. From the changelog, we can see that TOML has had no major changes since April 2020, and has had two releases in the past five years (2017-2021).
In the event of changes to the TOML specification, we can treat minor revisions as bug fixes and update the implementation in place. In the event of major breaking changes, we should preserve support for TOML 1.x.
Maintainability of proposed implementation
The proposed implementation (tomli) is pure Python, well tested and weighs in at under 1000 lines of code. It is minimalist, offering a smaller API surface area than other TOML implementations.
The author of tomli is willing to help integrate tomli into the standard library and help maintain it, as per this post. Furthermore, Python core developer Petr Viktorin has indicated a willingness to maintain a read API, as per this post.
Rewriting the parser in C is not deemed necessary at this time. It is rare for TOML parsing to be a bottleneck in applications, and users with higher performance needs can use a third-party library (as is already often the case with JSON, despite Python offering a standard library C-extension module).
TOML support a slippery slope for other things
As discussed in the Motivation section, TOML holds a special place in the Python ecosystem, for reading PEP 518 pyproject.toml packaging and tool configuration files. This chief reason to include TOML in the standard library does not apply to other formats, such as YAML or MessagePack.
In addition, the simplicity of TOML distinguishes it from other formats like YAML, which are highly complicated to construct and parse.
An API for writing TOML may, however, be added in a future PEP.
Backwards Compatibility
This proposal has no backwards compatibility issues within the standard library, as it describes a new module. Any existing third-party module named tomllib will break, as import tomllib will import the standard library module. However, tomllib is not registered on PyPI, so it is unlikely that any module with this name is widely used.
Note that we avoid using the more straightforward name toml to avoid backwards compatibility implications for users who have pinned versions of the current toml PyPI package. For more details, see the Alternative names for the module section.
Security Implications
Errors in the implementation could cause potential security issues. However, the parser's output is limited to simple data types; inability to load arbitrary classes avoids security issues common in more "powerful" formats like pickle and YAML. Also, the implementation will be in pure Python, which reduces security issues endemic to C, such as buffer overflows.
How to Teach This
The API of tomllib mimics that of other well-established file format libraries, such as json and pickle. The lack of a dump function will be explained in the documentation, with a link to relevant third-party libraries (e.g. tomlkit, tomli-w, pytomlpp).
Reference Implementation
The proposed implementation can be found at
Rejected Ideas
Basing on another TOML implementation
Several potential alternative implementations exist:
- tomlkit is well established, actively maintained and supports TOML 1.0.0. An important difference is that tomlkit supports style roundtripping. As a result, it has a more complex API and implementation (about 5x as much code as tomli). Its author does not believe that tomlkit is a good choice for the standard library.
- toml is a very widely used library. However, it is not actively maintained, does not support TOML 1.0.0 and has a number of known bugs. Its API is more complex than that of tomli. It allows customising output style through a complicated encoder API, and some very limited and mostly unused functionality to preserve input style through an undocumented decoder API. For more details on its API differences from this PEP, refer to Appendix A.
- pytomlpp is a Python wrapper for the C++ project toml++. Pure Python libraries are easier to maintain than extension modules.
- rtoml is a Python wrapper for the Rust project toml-rs and hence has similar shortcomings to pytomlpp. In addition, it does not support TOML 1.0.0.
- Writing an implementation from scratch. It's unclear what we would get from this; tomli meets our needs and the author is willing to help with its inclusion in the standard library.
Including an API for writing TOML
There are several reasons to not include an API for writing TOML.
The ability to write TOML is not needed for the use cases that motivate this PEP: core Python packaging tools, and projects that need to read TOML configuration files.
Use cases that involve editing an existing TOML file (as opposed to writing a brand new one) are better served by a style preserving library. TOML is intended as a human-readable and -editable configuration format, so it's important to preserve comments, formatting and other markup. This requires a parser whose output includes style-related metadata, making it impractical to output plain Python types like str and dict. Furthermore, it substantially complicates the design of the API.
Even without considering style preservation, there are too many degrees of freedom in how to design a write API. For example, what default style (indentation, vertical and horizontal spacing, quotes, etc) should the library use for the output, and how much control should users be given over it? How should the library handle input and output validation? Should it support serialization of custom types, and if so, how? While there are reasonable options for resolving these issues, the nature of the standard library is such that we only get "one chance to get it right".
Currently, no CPython core developers have expressed willingness to maintain a write API, or sponsor a PEP that includes one. Since it is hard to change or remove something in the standard library, it is safer to err on the side of exclusion for now, and potentially revisit this later.
Therefore, writing TOML is left to third-party libraries. If a good API and relevant use cases for it are found later, write support can be added in a future PEP.
Assorted API details
Types accepted as the first argument of tomllib.load
The toml library on PyPI allows passing paths (and lists of path-like objects, ignoring missing files and merging the documents into a single object) to its load function. However, allowing this here would be inconsistent with the behavior of json.load, pickle.load and other standard library functions. If we agree that consistency here is desirable, allowing paths is out of scope for this PEP. This can easily and explicitly be worked around in user code, or by using a third-party library.
The proposed API takes a binary file, while toml.load takes a text file and json.load takes either. Using a binary file allows us to ensure UTF-8 is the encoding used (ensuring correct parsing on platforms with other default encodings, such as Windows), and avoid incorrectly parsing files containing single carriage returns as valid TOML due to universal newlines in text mode.
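For instance (an illustrative sketch only; the file name is an arbitrary example), reading a TOML document from disk with the proposed API looks like:

import tomllib

with open("pyproject.toml", "rb") as f:
    data = tomllib.load(f)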
Type accepted as the first argument of tomllib.loads
While tomllib.load takes a binary file, tomllib.loads takes a text string. This may seem inconsistent at first.
Quoting the TOML v1.0.0 specification:
A TOML file must be a valid UTF-8 encoded Unicode document.
tomllib.loads does not intend to load a TOML file, but rather the document that the file stores. The most natural representation of a Unicode document in Python is str, not bytes.
It is possible to add bytes support in the future if needed, but we are not aware of any use cases for it.
Controlling the type of mappings returned by tomllib.load[s]
The toml library on PyPI accepts a _dict argument in its load[s] functions, which works similarly to the object_hook argument in json.load[s]. There are several uses of _dict found on; however, almost all of them are passing _dict=OrderedDict, which should be unnecessary as of Python 3.7. We found two instances of relevant use: in one case, a custom class was passed for friendlier KeyErrors; in the other, the custom class had several additional lookup and mutation methods (e.g. to help resolve dotted keys).
Such a parameter is not necessary for the core use cases outlined in the Motivation section. The absence of this can be pretty easily worked around using a wrapper class, transformer function, or a third-party library. Finally, support could be added later in a backward-compatible way.
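As one possible workaround (illustrative only, not prescribed by this PEP; the class and function names are made up), a small transformer function can convert the returned plain dicts into a custom mapping type:

import tomllib

class FriendlyDict(dict):
    def __missing__(self, key):
        raise KeyError(f"TOML document has no key {key!r}")

def convert(obj):
    if isinstance(obj, dict):
        return FriendlyDict({k: convert(v) for k, v in obj.items()})
    if isinstance(obj, list):
        return [convert(v) for v in obj]
    return obj

data = convert(tomllib.loads('title = "example"'))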
Removing support for parse_float in tomllib.load[s]
This option is not strictly necessary, since TOML floats should be implemented as "IEEE 754 binary64 values", which is equivalent to a Python float on most architectures.
The TOML specification uses the word "SHOULD", however, implying a recommendation that can be ignored for valid reasons. Parsing floats differently, such as to decimal.Decimal, allows users extra precision beyond that promised by the TOML format. In the author of tomli's experience, this is particularly useful in scientific and financial applications. This is also useful for other cases that need greater precision, or where end-users include non-developers who may not be aware of the limits of binary64 floats.
There are also niche architectures where the Python float is not a IEEE 754 binary64 value. The parse_float argument allows users to achieve correct TOML semantics even on such architectures.
Alternative names for the module
Ideally, we would be able to use the toml module name.
However, the toml package on PyPI is widely used, so there are backward compatibility concerns. Since the standard library takes precedence over third-party packages, libraries and applications that currently depend on the toml package would likely break when upgrading Python versions due to the many API incompatibilities listed in Appendix A, even if they pin their dependency versions.
To further clarify, applications with pinned dependencies are of greatest concern here. Even if we were able to obtain control of the toml PyPI package name and repurpose it for a backport of the proposed new module, we would still break users on new Python versions that included it in the standard library, regardless of whether they have pinned an older version of the existing toml package. This is unfortunate, since pinning would likely be a common response to breaking changes introduced by repurposing the toml package as a backport (that is incompatible with today's toml).
Finally, the toml package on PyPI is not actively maintained, but as of yet, efforts to request that the author add other maintainers have been unsuccessful, so action here would likely have to be taken without the author's consent.
Instead, this PEP proposes the name tomllib. This mirrors plistlib and xdrlib, two other file format modules in the standard library, as well as other modules, such as pathlib, contextlib and graphlib.
Other names considered but rejected include:
- tomlparser. This mirrors configparser, but is perhaps somewhat less appropriate if we include a write API in the future.
- tomli. This assumes we use tomli as the basis for implementation.
- toml under some namespace, such as parser.toml. However, this is awkward, especially so since existing parsing libraries like json, pickle, xml, html etc. would not be included in the namespace.
Previous Discussion
Appendix A: Differences between proposed API and toml
This appendix covers the differences between the API proposed in this PEP and that of the third-party package toml. These differences are relevant to understanding the amount of breakage we could expect if we used the toml name for the standard library module, as well as to better understand the design space. Note that this list might not be exhaustive.
No proposed inclusion of a write API (no toml.dump[s])
This PEP currently proposes not including a write API; that is, there will be no equivalent of toml.dump or toml.dumps, as discussed at Including an API for writing TOML.
If we included a write API, it would be relatively straightforward to convert most code that uses toml to the new standard library module (acknowledging that this is very different from a compatible API, as it would still require code changes).
A significant fraction of toml users rely on this, based on comparing occurrences of "toml.load" to occurrences of "toml.dump".
Different first argument of toml.load
toml.load has the following signature:
def load(
    f: Union[SupportsRead[str], str, bytes, list[PathLike | str | bytes]],
    _dict: Type[MutableMapping[str, Any]] = ...,
    decoder: TomlDecoder = ...,
) -> MutableMapping[str, Any]: ...
This is quite different from the first argument proposed in this PEP: SupportsRead[bytes].
Recapping the reasons for this, previously mentioned at Types accepted as the first argument of tomllib.load:
- Allowing paths (and even lists of paths) as arguments is inconsistent with other similar functions in the standard library.
- Using SupportsRead[bytes] allows us to ensure UTF-8 is the encoding used, and avoid incorrectly parsing single carriage returns as valid TOML.
A significant fraction of toml users rely on this, based on manual inspection of occurrences of "toml.load".
Errors
toml raises TomlDecodeError, vs. the proposed PEP 8-compliant TOMLDecodeError.
A significant fraction of toml users rely on this, based on occurrences of "TomlDecodeError".
toml.load[s] accepts a _dict argument
Discussed at Controlling the type of mappings returned by tomllib.load[s].
As mentioned there, almost all usage consists of _dict=OrderedDict, which is not necessary in Python 3.7 and later.
toml.load[s] support an undocumented decoder argument
It seems the intended use case is for an implementation of comment preservation. The information recorded is not sufficient to roundtrip the TOML document preserving style, the implementation has known bugs, the feature is undocumented and we could only find one instance of its use on.
The toml.TomlDecoder interface exposed is far from simple, containing nine methods.
Users are likely better served by a more complete implementation of style-preserving parsing and writing.
toml.dump[s] support an encoder argument
Note that we currently propose to not include a write API; however, if that were to change, these differences would likely become relevant.
The encoder argument enables two use cases:
- control over how custom types should be serialized, and
- control over how output should be formatted.
The first is reasonable; however, we could only find two instances of this on. One of these two used this ability to add support for dumping decimal.Decimal, which a potential standard library implementation would support out of the box. If needed for other types, this use case could be well served by the equivalent of the default argument in json.dump.
The second use case is enabled by allowing users to specify subclasses of toml.TomlEncoder and overriding methods to specify parts of the TOML writing process. The API consists of five methods and exposes substantial implementation detail.
There is some usage of the encoder API on; however, it appears to account for a tiny fraction of the overall usage of toml.
Timezones
toml uses and exposes custom toml.tz.TomlTz timezone objects. The proposed implementation uses datetime.timezone objects from the standard library.
This document is placed in the public domain or under the CC0-1.0-Universal license, whichever is more permissive.
|
https://www.python.org/dev/peps/pep-0680/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Red Hat Training
A Red Hat training course is available for OpenShift Container Platform. tweak your configuration later.
The following example
BuildConfig results in a new build every time a container image tag or the source code changes:
BuildConfig Object Definition
kind: "BuildConfig" apiVersion: Container Platform instance by configuring
env/BUILD_LOGLEVEL for the
BuildDefaults admission controller.:.
Whenever a build is run:
- assemble script. This means any input content that resides outside the
contextDirwill.
8.3.4. Git Source
When specified, source code will be fetched from the location supplied.
If an inline Dockerfile is supplied, it will overwrite the Dockerfile (if any) will use).
8.3.4.1. Using a Proxy.
Your source URI must use the HTTP or HTTPS protocol for this to work.
source: git: uri: "" httpProxy: httpsProxy: noProxy: somedomain.com, otherdomain.com./*'
If multiple
Secrets match the Git URI of a particular
BuildConfig, OpenShift Container Platform will select/*'.crt=<.crt=<, will above cases:
- If your
BuildConfigalready has a
Binarysource type defined, it will effectively be will. more information on how to control which assemble and run script is used by a Source build, see Overriding Builder Image Scripts.-artifacts script.
See the S2I Requirements topic for information on how to create a builder image supporting incremental builds.
8.5.1.3. Overriding Builder Image Scripts
You can override the assemble, run, and save-artifacts S2I scripts provided by the builder image in one of two ways. Either:
- Provide an assemble, run, and..
8.5.1.
8.5.4.2. Environment Variables.
8.7.2.1. GitHub Webhooksheader.
To configure a GitHub Webhook:
After creating a
BuildConfigfromto via.
8.7.2.2. configuration via.
8.7.2.3. Bitbucket Webhooks configuration via.
8.7.2.4. Generic Webhooks via webhook are ignored. Set the
allowEnv field to true on the webhook definition to enable this behavior.
OpenShift Container Platform permits builds to be triggered: field in the build strategy to point to the image stream: image streams:.
8.7.4.1. Setting Triggers Manually
Triggers can be added to and removed from build configurations with
oc set triggers. For example,
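A sketch of what this can look like (the BuildConfig name sample-build is only an example; adjust it to your own configuration):

oc set triggers bc/sample-build --from-github
oc set triggers bc/sample-build --from-github --remove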
By default, builds are completed by pods using unbound resources, such as memory and CPU. These resources can be limited by specifying resource limits in a project’s default container limits.
You can also.
8.10.2. Setting Maximum Duration.
The following example shows the part of a
BuildConfig specifying
completionDeadlineSeconds field for 30 minutes:
spec: completionDeadlineSeconds: 1800
This setting is not supported with the Pipeline Strategy option.
8.
apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: nodeSelector:1 key1: value1 key2: value2.
See Configuring Global Build Defaults and Overrides for more information.
If the specified NodeSelector cannot be matched to a node with those labels, the build will stay in the Pending state indefinitely.
8.10.4. Chaining Builds
For compiled languages (Go, C, C++, Java, etc.),.
Although this example chains a Source-to-Image build and a Docker build, the first build can use any strategy that will produce an image containing the desired artifacts, and the second build can use any strategy that can consume input content from an image.
The first build takes the application source and produces an image containing a WAR file. The image is pushed to the
artifact-image image stream.: type: Git strategy: sourceStrategy: from: kind: ImageStreamTag name: wildfly:10.1 namespace: openshift type: Source: type: Dockerfile type: Docker need.
8.10.5. Build Pruning
By default, builds that have completed their lifecycle are persisted indefinitely. You can limit the number of previous builds that are retained by supplying a positive integer value for
successfulBuildsHistoryLimit or
failedBuildsHistoryLimit as shown in the following sample build configuration.
apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: successfulBuildsHistoryLimit: 2 1 failedBuildsHistoryLimit: 2 2
Build pruning is triggered by the following actions:
- Updating a build configuration.
- A build completes its lifecycle.
Builds are sorted by their creation timestamp with the oldest builds being pruned first.
|
https://access.redhat.com/documentation/en-us/openshift_container_platform/3.10/html/developer_guide/builds
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Make an auto-layout box.
This will make a box that contains static text or images but not other GUI controls. If you want to make a rectangular container for a set of GUI controls, use one of the grouping functions (BeginHorizontal, BeginVertical, BeginArea, etc...).
Boxes in the Game View.
// Draws a texture and a label inside 2 different boxes
var tex : Texture;

function OnGUI() {
    if(!tex) {
        Debug.LogError("Missing texture, assign a texture in the inspector");
    }
    GUILayout.Box(tex);
    GUILayout.Box("This is an sized label");
}
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {
    public Texture tex;

    void OnGUI() {
        if (!tex)
            Debug.LogError("Missing texture, assign a texture in the inspector");
        GUILayout.Box(tex);
        GUILayout.Box("This is an sized label");
    }
}
|
https://docs.unity3d.com/2017.2/Documentation/ScriptReference/GUILayout.Box.html
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
How to Connect to the Outbrain API - Authenticate using Python
There are thousands of reasons why you would want to connect to the Outbrain Amplify API. Regardless of yours, you will first need to go through the authentication. Here's our guide on how to do it.

This is a part in our series on how to leverage the APIs of Native Advertising Platforms for smarter campaign management and reporting. A while back we wrote an article on the authentication process and how to connect to the Taboola API using Python. Make sure to also check that out if you are looking to work with their API. Similarly to then, we will be using Python through Jupyter Notebook. For reference, here's the API documentation for Outbrain.

Whether you're annoyed by the manual effort of uploading piles of images and headline combinations or frustrated by not being able to structure reporting data the way you need it, the Outbrain Amplify API will help you maintain your sanity. How do you get started with it though?
Access to the Outbrain API - Do you have it?
In order to be able to use the Outbrain Amplify API as an advertiser you first need to get approval from Outbrain. If you don't yet have it but already have an advertiser account for Outbrain, you can request Outbrain API access here. However, it might be faster and simpler to just directly ask your Outbrain account manager for access instead.

If you are unsure whether you have already been approved for using the API or not, there is one easy way to check. Log into your account on my.outbrain.com. In the top right corner it should say: Logged in as name@email.com. Click on it and check if Amplify API Token appears in the drop down menu. See screenshot below.

If it's there you have already been approved for using the API and can move to the next step. Otherwise get in touch with your account manager or use the link above to request it.

You can actually use the link from this drop down to generate a Token directly in the interface that can be used for the API. Most likely, though, you would want to avoid manually logging in to the interface and collecting the Token when using the Outbrain API. We are therefore going to explain how to generate one directly in the API.
The Outbrain Amplify API Token
What is the Outbrain API Token
The Outbrain Amplify API Token needs to be included in every request that you make to their API. It is what tells the API which user the request is coming from and that you actually are allowed to make a request on behalf of a certain marketer account. Understanding how users and marketers are structured for Outbrain is not relevant for the scope of this guide. But if you are interested, we have a more detailed explanation of Outbrain users and marketers in our article on the Outbrain Marketer ID.

The Token then represents your credentials as a logged-in user. There are a few things to keep track of regarding the Outbrain Token:
- Once you generate a new Token, it is valid for 30 days.
- You can keep generating new Tokens with the previous ones still working.
- Important - only two new Tokens can be created per hour. In other words, keep track of them and try to avoid unnecessarily generating new Tokens. It is frustrating to wait for an hour before you can continue working.
- The Outbrain API is restricted to 30 requests per second per Token. This is more than enough for almost all intents and purposes.
How to generate an Outbrain API Token
We already covered one of the two ways of creating an Outbrain Token: manually from the user interface at my.outbrain.com. Now, more interestingly, how do you do it directly from the API?

Start up Jupyter Notebook, or whichever IDE you are using to run Python, and let's import the same packages we used for the guide on how to access the Taboola API.
import requests
import json
Run it by holding shift and clicking enter. If there are no errors it should look something like the following screenshot:

We will now need to define the Outbrain API url, and the username and password that are used to log in to your Outbrain account.

The Outbrain API always uses the same base url with a different extension for each purpose. The relevant one for authenticating and generating a Token, and thus for us now, is the /login resource.
url = ''
username = 'name@email.com'
password = 'xxxxxxxxxxxx'
Then we are going to make a GET request to the /login resource using our username and password. These are passed using HTTP Basic Authentication. No worries if you don't know what this is; you don't need to for now.
The request will then look like this:
response = requests.get(url + '/login', auth=requests.auth.HTTPBasicAuth(username, password))
Let's check whether the request succeeded and what the response was.
response.ok
response.json()
You should now have the following output.
The True from testing response.ok indicates we got a 200 status code (working as expected). In case it returns False, there will likely be a message in the response from response.json() that says what went wrong.

We have now successfully authenticated and generated a Token. It is part of the response and called OB-TOKEN-V1 (Outbrain Token version 1). This should be used for all subsequent requests to the Outbrain Amplify API. But first, let's save it.
token = response.json()['OB-TOKEN-V1']
Now let’s see it in action!
Testing the Outbrain Amplify Token
With the Outbrain Token in hand, you can now make requests to the API for your user. Let's see how the Token is used in a request and confirm that it works.

We will test listing the 'marketers' connected to this specific user that we just authenticated. To do that we use the /marketers resource.
The request will look like this:
response = requests.get(url + '/marketers', headers={'OB-TOKEN-V1': token})
The Token saved from previously is now included in the headers of the request.
Again checking if the request was okay and what the response is:
response.ok
response.json()
This should give you that response.ok returns True again and one or more marketers listed in the response from response.json(). Then it is working like it should and your results look something like:
What’s your next step?
If you got through the previous step, you now have access to Outbrain's API and can start playing around, regardless of whether you want to use it for checking the approval status of new creatives or collecting reporting data. Just keep including the Token in the header of all future requests, and generate a new one whenever needed.
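As a small illustrative sketch (the class and variable names here are our own, and the refresh margin is an assumption rather than something from Outbrain's documentation), the pattern can be wrapped in a helper that reuses the Token and fetches a new one before it expires:

import time
import requests

class OutbrainClient:
    def __init__(self, base_url, username, password):
        self.base_url = base_url
        self.username = username
        self.password = password
        self.token = None
        self.token_created = 0

    def _ensure_token(self):
        # Tokens are valid for 30 days; refresh a day early to be safe
        if self.token is None or time.time() - self.token_created > 29 * 24 * 3600:
            r = requests.get(self.base_url + '/login',
                             auth=requests.auth.HTTPBasicAuth(self.username, self.password))
            r.raise_for_status()
            self.token = r.json()['OB-TOKEN-V1']
            self.token_created = time.time()

    def get(self, resource):
        self._ensure_token()
        return requests.get(self.base_url + resource,
                            headers={'OB-TOKEN-V1': self.token})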
What do you want to use the Outbrain Amplify API for? Share with us on Linkedin or Twitter, and let us know if there's something you would like us to focus on in a future article.
Thanks for reading our guide on how to connect to the Outbrain API. We will keep releasing similar articles on how to take advantage of the APIs of Native Platforms. Check out the Outbrain API documentation for more information on specific requests.
If you are looking for a better way to work with, manage and optimize campaigns across native advertising platforms, check out our native advertising management platform. We integrate with the largest networks and offer on top features and insights.
|
https://joinative.com/outbrain-amplify-api-python-connect
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
The Java Tutorials have been written for JDK 8. Examples and practices described in this page don't take advantage of improvements introduced in later releases.
The SpringLayout class was added in JDK version 1.4 to support layout in GUI builders.
SpringLayout is a very flexible layout manager that can emulate many of the features of other layout managers.
SpringLayout is, however, very low-level and as such you really should only use it with a GUI builder, rather than attempting to code a spring layout manager by hand.
This section begins with a simple example showing all the things you need to remember to create your first spring layout and what happens when you forget them! Later it presents utility methods that let you lay out components in a couple of different types of grids.
Here are pictures of some of the layouts we will cover:
Spring layouts do their job by defining directional relationships, or constraints, between the edges of components. For example, you might define that the left edge of one component is a fixed distance (5 pixels, say) from the right edge of another component.
In a SpringLayout, the position of each edge is dependent on the position of just one other edge. If a constraint is subsequently added to create a new binding for an edge, the previous binding is discarded and the edge remains dependent on a single edge.
Unlike many layout managers, SpringLayout does not automatically set the location of the components it manages.
Components define edge properties, which are connected by Spring instances. Each spring has four properties: its minimum, preferred, and maximum values, and its actual (current) value. The springs associated with each component are collected into a SpringLayout.Constraints object. The difference between its maximum and preferred values indicates the ease with which the Spring can be extended.
Based on this, a SpringLayout can be visualized as a set of objects that are connected by a set of springs on their edges.

This section takes you through the typical steps of specifying the constraints for a container that uses SpringLayout. The first example, SpringDemo1.java, is an extremely simple application that features a label and a text field in a content pane controlled by a spring layout. Here is the relevant code:
public class SpringDemo1 { public static void main(String[] args) { ... Container contentPane = frame.getContentPane(); SpringLayout layout = new SpringLayout(); contentPane.setLayout(layout); contentPane.add(new JLabel("Label: ")); contentPane.add(new JTextField("Text field", 15)); ... frame.pack(); frame.setVisible(true); } }
Click the Launch button to run SpringDemo1 using Java™ Web Start (download JDK 7 or later). Alternatively, to compile and run the example yourself, consult the example index.
Here is what the GUI looks like when it first comes up:
Here is what it looks like when it is resized to be bigger:
Obviously, we have some problems. Not only does the frame come up way too small, but even when it is resized the components are all located at (0,0). This happens because we have set no springs specifying the components' positions and the width of the container. One small consolation is that at least the components are at their preferred sizes; we get that for free from the default springs created by SpringLayout for each component.

Our next example, SpringDemo2.java, improves the situation a bit by specifying locations for each component. Click the Launch button to run SpringDemo2 using Java™ Web Start (download JDK 7 or later). Alternatively, to compile and run the example yourself, consult the example index.
In this example, we will specify that the components should appear in a single row, with 5 pixels between them. The following code specifies the location of the label:
//Adjust constraints for the label so it's at (5,5). layout.putConstraint(SpringLayout.WEST, label, 5, SpringLayout.WEST, contentPane); layout.putConstraint(SpringLayout.NORTH, label, 5, SpringLayout.NORTH, contentPane);
The first putConstraint call specifies that the label's left (west) edge should be 5 pixels from its container's left edge. This translates to an x coordinate of 5. The second putConstraint call sets up a similar relationship between the top (north) edges of the label and its container, resulting in a y coordinate of 5.
Here is the code that sets up the location of the text field:
//Adjust constraints for the text field so it's at //(<label's right edge> + 5, 5). layout.putConstraint(SpringLayout.WEST, textField, 5, SpringLayout.EAST, label); layout.putConstraint(SpringLayout.NORTH, textField, 5, SpringLayout.NORTH, contentPane);
The first putConstraint call makes the text field's left (west) edge be 5 pixels away from the label's right (east) edge. The second putConstraint call is just like the second call in the first snippet, and has the same effect of setting the component's y coordinate to 5.
The previous example still has the problem of the container coming up too small. But when we resize the window, the components are in the right place:
To make the container initially appear at the right size, we need to set the springs that define the right (east) and bottom (south) edges of the container itself. No constraints for the right and bottom container edges are set by default. The size of the container is defined by setting these constraints.
SpringDemo3.java shows how to do this. Click the Launch button to run SpringDemo3 using Java™ Web Start (download JDK 7 or later). Alternatively, to compile and run the example yourself, consult the example index.
Here is the code that sets the container's springs:
layout.putConstraint(SpringLayout.EAST, contentPane, 5, SpringLayout.EAST, textField); layout.putConstraint(SpringLayout.SOUTH, contentPane, 5, SpringLayout.SOUTH, textField);
The first putConstraint call makes the container's right edge be 5 pixels to the right of the text field's right edge. The second one makes its bottom edge be 5 pixels beyond the bottom edge of the tallest component (which, for simplicity's sake, we've assumed is the text field).
Finally, the window comes up at the right size:
When we make the window larger we can see the spring layout in action, distributing the extra space between the available components.
In this case the spring layout has chosen to give all the extra space to the text field. Although it seems like the spring layout treats labels and text fields differently, spring layout has no special knowledge of any Swing or AWT components. It relies on the values of a component's minimum, preferred, and maximum size properties. The next section discusses how spring layout uses these properties, and why they can cause uneven space distribution.
A SpringLayout object automatically installs Springs for the height and width of each component that the SpringLayout controls. These springs are essentially covers for the component's getMinimumSize, getPreferredSize, and getMaximumSize methods. By "covers" we mean that not only are the springs initialized with the appropriate values from these methods, but also that the springs track those values. For example, the Spring object that represents the width of a component is a special kind of spring that simply delegates its implementation to the relevant size methods of the component. That way the spring stays in sync with the size methods as the characteristics of the component change.

When a component's getMaximumSize and getPreferredSize methods return the same value, SpringLayout interprets this as meaning that the component should not be stretched. JLabel and JButton are examples of components implemented this way. For this reason, the label in the SpringDemo3 example does not stretch.

The getMaximumSize method of some components, such as JTextField, returns the value Integer.MAX_VALUE for the width and height of its maximum size, indicating that the component can grow to any size. For this reason, when the SpringDemo3 window is enlarged, SpringLayout distributes all the extra space to the only springs that can grow: those determining the size of the text field.
The SpringDemo examples used the SpringLayout method putConstraint to set the springs associated with each component. The putConstraint method is a convenience method that lets you modify a component's constraints without needing to use the full spring layout API. Here, again, is the code from SpringDemo3 that sets the location of the label:
layout.putConstraint(SpringLayout.WEST, label, 5, SpringLayout.WEST, contentPane); layout.putConstraint(SpringLayout.NORTH, label, 5, SpringLayout.NORTH, contentPane);
Here is equivalent code that uses the SpringLayout.Constraints and Spring classes directly:
SpringLayout.Constraints contentPaneCons = layout.getConstraints(contentPane); contentPaneCons.setX(Spring.sum(Spring.constant(5), contentPaneCons .getConstraint(SpringLayout.WEST))); contentPaneCons.setY(Spring.sum(Spring.constant(5), contentPaneCons .getConstraint(SpringLayout.NORTH)));
To see the entire demo converted to use this API, look at SpringDemo4.java. That file also includes a more polished (and much longer) version of the code that sets the container's size. Click the Launch button to run SpringDemo4 using Java™ Web Start (download JDK 7 or later). Alternatively, to compile and run the example yourself, consult the example index.

As the preceding snippets imply, SpringLayout and SpringLayout.Constraints tend to use different conventions for describing springs. The SpringLayout API uses edges to define its constraints. Springs connect edges to establish linear relations between them. Edges are defined by components, using the following constants:
SpringLayout.NORTH specifies the top edge of a component's bounding rectangle.
SpringLayout.SOUTH specifies the bottom edge of a component's bounding rectangle.
SpringLayout.EAST specifies the right edge of a component's bounding rectangle.
SpringLayout.WEST specifies the left edge of a component's bounding rectangle.
SpringLayout.BASELINE specifies the baseline of a component.
SpringLayout.HORIZONTAL_CENTER specifies the horizontal center of a component's bounding rectangle.
SpringLayout.VERTICAL_CENTER specifies the vertical center of a component's bounding rectangle.
Edges differ from Spring objects. The SpringLayout.Constraints class knows about edges, but only has Spring objects for the following properties: x, y, width, and height.

Each Constraints object maintains the following relationships between its springs and the edges they represent:
west = x north = y east = x + width south = y + height
If you are confused, do not worry. The next section presents utility methods you can use to accomplish some common layout tasks without knowing anything about the spring layout API.
Because the SpringLayout class was created for GUI builders, setting up individual springs for a layout can be cumbersome to code by hand. This section presents a couple of methods you can use to install all the springs needed to lay out a group of components in a grid. These methods emulate some of the features of the GridLayout, GridBagLayout, and BoxLayout classes.

The two methods, called makeGrid and makeCompactGrid, are defined in SpringUtilities.java. Both methods work by grouping the components together into rows and columns and using the Spring.max method to make a width or height spring that makes a row or column big enough for all the components in it. In the makeCompactGrid method the same width or height spring is used for all components in a particular column or row, respectively. In the makeGrid method, by contrast, the width and height springs are shared by every component in the container, forcing them all to be the same size. Furthermore, factory methods are provided by Spring for creating different kinds of springs, including springs that depend on other springs.
Let us see these methods in action. Our first example, implemented in the source file SpringGrid.java, displays a bunch of numbers in text fields. The center text field is much wider than the others. Just as with GridLayout, having one large cell forces all the cells to be equally large. Click the Launch button to run SpringGrid using Java™ Web Start (download JDK 7 or later). Alternatively, to compile and run the example yourself, consult the example index.
Here is the code that creates and lays out the text fields in SpringGrid:
JPanel panel = new JPanel(new SpringLayout()); for (int i = 0; i < 9; i++) { JTextField textField = new JTextField(Integer.toString(i)); ...//when i==4, put long text in the text field... panel.add(textField); } ... SpringUtilities.makeGrid(panel, 3, 3, //rows, cols 5, 5, //initialX, initialY 5, 5);//xPad, yPad
Now let us look at an example, in the source file SpringCompactGrid.java, that uses the makeCompactGrid method instead of makeGrid. This example displays lots of numbers to show off spring layout's ability to minimize the space required. Click the Launch button to run SpringCompactGrid using Java™ Web Start (download JDK 7 or later). Alternatively, to compile and run the example yourself, consult the example index.
Here is what the SpringCompactGrid GUI looks like:
Here is the code that creates and lays out the text fields in SpringCompactGrid:
JPanel panel = new JPanel(new SpringLayout()); int rows = 10; int cols = 10; for (int r = 0; r < rows; r++) { for (int c = 0; c < cols; c++) { int anInt = (int) Math.pow(r, c); JTextField textField = new JTextField(Integer.toString(anInt)); panel.add(textField); } } //Lay out the panel. SpringUtilities.makeCompactGrid(panel, //parent rows, cols, 3, 3, //initX, initY 3, 3); //xPad, yPad
One of the handiest uses for the makeCompactGrid method is associating labels with components, where the labels are in one column and the components in another. The file SpringForm.java uses makeCompactGrid in this way, as the following figure demonstrates.
Click the Launch button to run SpringForm using Java™ Web Start (download JDK 7 or later). Alternatively, to compile and run the example yourself, consult the example index.
Here is the code that creates and lays out the label-text field pairs in SpringForm:
String[] labels = {"Name: ", "Fax: ", "Email: ", "Address: "}; int numPairs = labels.length; //Create and populate the panel. JPanel p = new JPanel(new SpringLayout()); for (int i = 0; i < numPairs; i++) { JLabel l = new JLabel(labels[i], JLabel.TRAILING); p.add(l); JTextField textField = new JTextField(10); l.setLabelFor(textField); p.add(textField); } //Lay out the panel. SpringUtilities.makeCompactGrid(p, numPairs, 2, //rows, cols 6, 6, //initX, initY 6, 6); //xPad, yPad
Because we are using a real layout manager instead of absolute positioning, the layout manager responds dynamically to changes in the components involved. For example, if the names of the labels are localized, the spring layout produces a configuration that gives the first column more or less room, as needed. And as the following figure shows, when the window is resized, the flexibly sized components (the text fields) take all the excess space, while the labels stick to what they need.
Our last example of the makeCompactGrid method, in SpringBox.java, shows some buttons configured to be laid out in a single row. Click the Launch button to run SpringBox using Java™ Web Start (download JDK 7 or later). Alternatively, to compile and run the example yourself, consult the example index.
Note that the behavior is almost identical to that of BoxLayout in the case of a single row. Not only are the components laid out as BoxLayout would arrange them, but the minimum, preferred, and maximum sizes of the container that uses the SpringLayout return the same results that BoxLayout would. Here is the call to makeCompactGrid that produces this layout:
//Lay out the buttons in one row and as many columns //as necessary, with 6 pixels of padding all around. SpringUtilities.makeCompactGrid(contentPane, 1, contentPane.getComponentCount(), 6, 6, 6, 6);
Let us look at what happens when we resize this window. This is an odd special case that is worth taking note of as you may run into it by accident in your first layouts.
Nothing moved! That is because none of the components (buttons) or the spacing between them was defined to be stretchable. In this case the spring layout calculates a maximum size for the parent container that is equal to its preferred size, meaning the parent container itself is not stretchable. It would perhaps be less confusing if the AWT refused to resize a window that was not stretchable, but it does not. The layout manager cannot do anything sensible here as none of the components will take up the required space. Instead of crashing, it just does nothing, leaving all the components as they were.
The API for using SpringLayout is spread across three classes: SpringLayout, SpringLayout.Constraints, and Spring.
The following table lists some examples that use spring layout.
|
https://docs.oracle.com/javase/tutorial/uiswing/layout/spring.html
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
In How To Use A MCP23017 I2C Port Expander With The Raspberry Pi – Part 2 I explained how to use an MCP23017 16-bit port expander to provide additional outputs. In this article I’ll show a basic input example where we read the status of a push switch.
In our example circuit the switch input uses the last bit of the GPA set of pins.
Python Script For Inputs
Here is an example script which will read the status of the switch in a loop and print message when it is pressed :
import smbus import time #bus = smbus.SMBus(0) # Rev 1 Pi uses 0 bus = smbus.SMBus(1) # Rev 2 Pi uses 1 DEVICE = 0x20 # Device address (A0-A2) IODIRA = 0x00 # Pin direction register GPIOA = 0x12 # Register for inputs # Set first 7 GPA pins as outputs and # last one as input. bus.write_byte_data(DEVICE,IODIRA,0x80) # Loop until user presses CTRL-C while True: # Read state of GPIOA register MySwitch = bus.read_byte_data(DEVICE,GPIOA) if MySwitch & 0b10000000 == 0b10000000: print "Switch was pressed!" time.sleep(1)
You can download direct to your Pi using :
wget
You can run the script using the following command :
sudo python mcp23017_inputs.py
The script above performs the following actions :
- Imports the smbus and time libraries
- Creates an smbus object named “bus”
- Configures some register address constants
- Sets first 7 GPA pins as outputs
- Sets last GPA pin as an input
- Reads the state of the 8th bit and prints a message if it goes high
When you press the button you should see “Switch was pressed!” printed to the screen.
In this example we only used one switch. Using both 8 bit registers (GPA and GPB) you’ve got a total of 16 pins to configure as inputs or outputs. That’s a possible 16 LEDs or switches in whatever combination you like. You just need to configure the IODIRA and IODIRB registers with the correct bit pattern.
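As a rough sketch of using both registers (the IODIRB address 0x01 comes from the MCP23017 datasheet rather than the article above, so double-check it against your own configuration), you might set up bank A as outputs and bank B as inputs like this:

import smbus

bus = smbus.SMBus(1)   # Rev 2 Pi uses bus 1
DEVICE = 0x20
IODIRA = 0x00          # direction register for GPA pins
IODIRB = 0x01          # direction register for GPB pins
GPIOA = 0x12           # GPA register (outputs in this sketch)
GPIOB = 0x13           # GPB register (inputs in this sketch)

bus.write_byte_data(DEVICE, IODIRA, 0x00)  # GPA0-7 all outputs
bus.write_byte_data(DEVICE, IODIRB, 0xFF)  # GPB0-7 all inputs

bus.write_byte_data(DEVICE, GPIOA, 0b00000001)  # switch GPA0 output on
switches = bus.read_byte_data(DEVICE, GPIOB)    # read all 8 GPB inputs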
Don’t forget to check out the online binary, hexadecimal, decimal number convertor to quickly generate binary, decimal and hexadecimal numbers for your own I2C scripts.
hello, I’ve seen several posts that says one of the restrictions of i2c is the cable length has to be less than 6″. Is that the cable length just between the Pi and the MCP23017? Or does that mean any peripherals attached to the MCP23017 can only be up to 6″ away?
Thanks.
I’m not sure but I did some quick Googling and plenty of people seem to have some sucess with cable lengths measured in metres. If there is a restriction it will be the two I2C lines between the MCP23017 and the Pi. The inputs and outputs on the chip are just normal logic pins.
Old thread, but still…
The length of the cables you’re talking about, are at the output side of the MCP23017 and have nothing to do with I2c. So, there’s not really a length restriction at that side.
I2c, however, is limited because of the line capacitance that should not be larger than 400pF.
There are solutions for that. One of them is using the P82B96 (). When you’re using this device, you can have lengths up to at least 20m (because at the transmission side, you can go up to 4000pF bus capacitance…)
An old question but it maybe helps:
We have extended the bus line and it depends on for example the cable type used. On the next page we have an example of how to extend the bus line. Allthough the shown setup is correct you should use octocouplers to protect the expanded bus part against shorts and interferance.
The chip mentioned below (P82B96 ) is also the one we used and if done correctly you can go up to 50 meters and counting. We also have the spec sheet on this page so you have a table explaining how far you can get at 100khz (which the pi uses by default).
The page is:
Hello. Any tutorials for how to read a temperature sensor like the DS18B20 using raspberry pi and the MCP23017? Thanks!
I’ve not used one with the MCP23017 but you can easily connect a DS18B20 using GPIO4. See this post.
all my gpio ports are full 🙂
that’s why i use the mcp for port expansion. I’ll try to figure it out myself. Thanks!
Hi ,
Did you get MCP23017 working with DS18B20 ? Or any other sensor with this MCP ? If so can you explain ?
Im looking to really increase the number of GPIO output.. I’m looking for 32 outputs, can I use 2 of these port expanders in parallel?
You can connect two using the same I2C pins on the Pi. You just need to set a different address for the second device using the A0,A1,A2 pins. ie rather than set them all low set A0 high.
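For illustration (the second address 0x21 assumes only A0 is pulled high on the second chip; adjust to match your wiring), addressing two expanders on the same bus might look like:

import smbus

bus = smbus.SMBus(1)
DEVICE_1 = 0x20   # first chip: A0, A1, A2 all low
DEVICE_2 = 0x21   # second chip: A0 high
IODIRA = 0x00
GPIOA = 0x12

for dev in (DEVICE_1, DEVICE_2):
    bus.write_byte_data(dev, IODIRA, 0x00)   # all GPA pins as outputs

bus.write_byte_data(DEVICE_1, GPIOA, 0xFF)   # first chip: GPA all high
bus.write_byte_data(DEVICE_2, GPIOA, 0x0F)   # second chip: GPA0-3 high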
Hi there. Great tutorial! Im fairly new at using i2c on the raspberry and have very little experience.. I wanted to know if you can maybe post a diagram of how 2 mcp chips can be connected to the RPi
It’s on my list of things to do! The only real difference is the A0, A1 and A2 pins. One of these must be set high to give the second device a unique address.
Hi there, I’m pretty new to I2C so I was wondering if it is possible to use interrupts with inputs on the MCP? If so could you explain how? If not I will have to stick with the 7 accessible pins on the GPIO header of my Rev 2 RPi.
Hi Samadi (and anyone else that can help),
I’m trying to use the interrupts on the slice of Pi/o mcp23017 with Python and smbus. How did you get on with enabling the interrupt?
I know I need to tie the INT pad back to one of the GPIO pins on the rpi, but I need to know how to tell the mcp23017 to change the INT state during an input change using bus.write_byte_data()
Any examples would be very helpful…
Thanks all,
R
I was searching about this, and came here. Anyone has some idea to do that, and got it working?
Matt, something for part 4?
Thank you in advance.
I have been through part 1 to part 3 and all work really well, thank you. However now I would also like to see if I can use the interrupts with your smbus library and any additional wiring. Did you ever create a part 4 or do you have some sample code or examples. I think I might also need some notes on connecting the INT pin as it looks as though you need to connect it to one of the GPIO pins. I would be grateful for any information. Thank you.
The smbus library isn’t mine it’s just one of the libraries out there that everyone uses. I’m not sure who the author is. Unfortunately I haven’t had a chance to ever try out interrupts on the MCP23017.
In my consulting work I find that I have need for a programmable module and with some research find the raspberry pi to be of great interest. I have no use for video, audio or external connection such as TV or monitor. The module would be used to control a plurality of stepper motors and DC motors. Input information would be in multiple tables, or more desirable, with input values of A, B and C and a 6-step formula which would preclude the units to be confined to only the preloaded tables and their attendant high usage of memory. The latter would make the units more universal.
What linux language would be most appropriate? Would like any and all comments.
Thank for your work. It help me to start with i2c study.
Good work.
Thank from Italy
steve
I’d like to use the I2C ports to control 24v solenoid switches (irrigation valves) via an 8-channel opto relay board (such as listed at). Would those be treated like LEDs from an MPC23008 chip?
Thanks for the tutorial, but could someone clarify something? I have set GPA7 as an input and GPA1-6 as outputs using: i2cset -y 1 0x20 0x00 0x80. When I hold GPA7 at either 0v or 3.3v I was expecting to read the GPIOA register using: i2cget -y 1 0x20 0x12 w and was expecting to see either 0x80 or 0x00, as high or low. yet the returned values seem to randomly fluctuate. My only experience of the MCP23017 is this topic!
Although it may look random is the result you get correctly indicating the state of the input (bit 7)? If you haven’t set the outputs they may be floating. Bit 7 is probably correct. The other bits don’t matter as they are outputs. 0x80,0x60,0xA0 are all valid results for Bit 7 being High.
Thanks, I will set all GPA pins high and try again.
Thanks for your advice, I pulled the pin permanently high and could then see 0x00 when applying 0v to the pin.
Very nice tutorial. Thanks a lot.
at first it wasn’t clear where the numbers for DEVICE , IODIRA and GPIOA came from, then i googled it.
Hello
thanks for this great howto…
but can anyone tell me how to use interrupts on the mcp23017 and read them out and interpret them? the reason is, i want to wire more than one DHT11 on my pi but there are only a few pins free so i thought i use the mcp23017 or the mcp23S17 (heard that SPI is faster than I2C)…
i found a python script to readout the dht11 directly wired with gpios and there is “GPIO.setup(4, GPIO.IN, pull_up_down=GPIO.PUD_UP)” in there…
so how can i translate it for the mcp??
thnx 4 help.
mike
Hello
Thanx for the help. But i am curious either if we enable the pin as input by using the gpioa/gpiob, is the input automatically become as internal pull up resistor?
One more thing is are u using the bank=0 and how do we know we are using that bank 1 or bank 0 and besides that to register the iodira to be all output why do u register it as 0x00 instead of 0xFF. Is it because u are using the bank =0?
Bank0 (A) is being used because “GPIOA = 0x12″. If it was being set to 0x13 it would be bank1 (B). “IODIRA = 0x00” is the register address. It will always be 0x00. This is used later on to set inputs and outputs (eg 0x80 for 1 input and 7 outputs).
Does input can be read in decimal?
Hello.. Thank you for this great tutorial.. It was really helpful.. I have configured my MCP23017 pins to make them as input or outputs.. Now I wanted to try and create a pulse like signal for a specific pin of the MCP23017 so that on providing this pulse I could read the state of the pin.. Could you please suggest something on how I could do this?
|
https://www.raspberrypi-spy.co.uk/2013/07/how-to-use-a-mcp23017-i2c-port-expander-with-the-raspberry-pi-part-3/
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Admin rights shouldn't work cross-tenant
Bug Description
Consider user A is an `admin` in Y tenant and has no roles in Z tenant. Let G image be an image from Z tenant.
User A gets a token for Y tenant and sends a request: "delete G image". Currently, glance allows such malicious action.
That's because glance/
The problem exists in this branch:
commit fc758a46e77de17
Author: Dan Prince <email address hidden>
Date: Thu Jun 7 22:23:48 2012 -0400
The same security hole is found in Keystone: an `admin` user is allowed to do anything, e.g. remove users and their roles for any tenant.
Yeah, this should be fixed. We're treating roles as being globally scoped, when they're actually scoped to a specific tenant like you describe.
@Joe: Your take on the keystone side of the vulnerability ?
While I agree this should be fixed, it's not a security bug but how the initial version of authorization was implemented.
In the Diablo and Essex releases of OpenStack, Admin was effectively global and not per-tenant or per-service. That's the entire reason of adding in domains to Keystone, and behind the idea of unifying the role names (which are installation-
If you want a role that's a global admin, you can still use "admin" and create associated policy.json files that respect that identifier.
(tl;dr - to be resolved with Keystone V3 API implementation)
Brian, Joe: so you both agree this is not a vulnerability, but by (admittedly weak) design ? And that it should definitely be strengthened in future revisions of the API ?
If yes, I'd suggest that we open this bug as a known and wanted security improvement, rather than keep it embargoed as an exploitable vulnerability.
Alessio: would that work for you ?
Sorry for the delay!
Would it be acceptable if admin can access any image in owner_is_tenant mode, otherwise image is modifiable only by users of its tenant.
Here is my patch (https:/
def is_image_
"""Return True if the image is mutable in this context."""
# Is admin and owner is user == image mutable
if context.is_admin and not context.
return True
# No owner == image not mutable
if image['owner'] is None or context.owner is None:
return False
# Image only mutable by its owner
return image['owner'] == context.owner
(and so on for another functions)
@Brian, Joe: please answer to comment 6 and 7
I'm good for opening it up - it's how the roles are implemented and supported by the individual services. The keystone project has a blueprint that Liem (from HP) is working on now to gather up a recommended set of policy.json files and related roles to provide a layout with explicit per-service administration functions. (https:/
For Alessio, I'd recommend making that a rule in policy.json rather than in the code itself, as that makes it configurable by policy rather than hard coded, but I'll defer to whatever Brian suggests here.
I agree with Joe here, this is a future improvement, not a bug. However, we do need to be clear with that the implications of assigning a user an admin-like role entails.
As for future work, we need to separate service-level admin vs tenant-level admin. Right now, we create something that looks like a tenant-level admin, when it has the ability to act as if it were service-level.
Adding PTL for input. I /think/ this is by design though... admins are not scoped to tenants ?
|
https://bugs.launchpad.net/keystone/+bug/1010547
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
This is the mail archive of the mauve-discuss@sourceware.org mailing list for the Mauve project.
Hi All,
Last week Intel posted their AWT/Swing test suite at the Harmony project site. I've been taking a look at it, to see what value there is in it for GNU Classpath...and, in spite of a lot overlap with Mauve, there are many new tests in there that are useful to us.
Unfortunately they decided to write the tests in the java.awt.* and javax.swing.* namespace, which makes it awkward to run the tests, both against Sun's reference implementation (to verify the tests) and against GNU Classpath (to find bugs in our implementation). However, it has been relatively straightforward (though tedious) to refactor the tests to move the tests into their own namespace (I used test.java.awt.* and test.javax.swing.*).
You can find a jar file containing both the (JUnit-based) tests and corresponding source code (for around 2,600 tests so far) here:
To run the tests, use the following command line:
jamvm -classpath .:lib/junit.jar junit.textui.TestRunner test.java.awt.PackageTestSuite
...replacing the class name with the name of any test class, or the package level test suites:
test.java.awt.PackageTestSuite test.java.awt.datatransfer.PackageTestSuite test.java.awt.event.PackageTestSuite test.java.awt.font.PackageTestSuite test.java.awt.image.PackageTestSuite test.javax.swing.PackageTestSuite test.javax.swing.border.PackageTestSuite test.javax.swing.colorchooser.PackageTestSuite test.javax.swing.event.PackageTestSuite test.javax.swing.plaf.PackageTestSuite test.javax.swing.table.PackageTestSuite test.javax.swing.tree.PackageTestSuite test.javax.swing.undo.PackageTestSuite
There is also a test suite that will run all tests:
test.AWTSwingTestSuite (runs all tests)
There are still many tests that I haven't extracted from javax.swing.*, javax.swing.plaf.basic.*, javax.swing.plaf.metal.* and javax.swing.text.*. I'll post a revised test jar file when I've completed those.
Regards,
Dave
|
https://www.sourceware.org/ml/mauve-discuss/2006-q3/msg00005.html
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
NT Objects
This utility, named NT Objects or short: ntobjx, can be used to explore the Windows object manager namespace.
The GUI should be self-explanatory for the most part, especially if you have ever used Windows Explorer. If you have already used WinObj by Microsoft, you will even recognize many of the names.
You can find the latest information about the tool on the overview page and can download the respective latest version from the download page.
How to build
Help/documentation
Similar software
- WinObj the original program from SysInternals (meanwhile a division of Microsoft).
- Object Manager Namespace Viewer a tool I cowrote with Marcel van Brakel in Delphi some years back in order to showcase the JEDI API header translations for the NT native API.
- ObjectManagerBrowser only found out about this by accident in June 2017. Referenced here so evidently around since at least 2014.
- WinObjEx64
If you are aware of any other similar software, please open a ticket or drop me an email.
|
https://bitbucket.org/assarbad/ntobjx/wiki/Home
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
1.4. Creating an IPython extension with custom magic commands
Although IPython comes with a wide variety of magic commands, there are cases where we need to implement custom functionality in new magic commands. In this recipe, we will show how to create line and cell magics, and how to integrate them into an IPython extension.
How to do it...
1. Let's import a few functions from the IPython magic system:
from IPython.core.magic import (register_line_magic, register_cell_magic)
2. Defining a new line magic is particularly simple. First, we create a function that accepts the contents of the line (except the initial
%-prefixed name). The name of this function is the name of the magic. Then, we decorate this function with
@register_line_magic:
@register_line_magic
def hello(line):
    if line == 'french':
        print("Salut tout le monde!")
    else:
        print("Hello world!")
%hello
Hello world!
%hello french
Salut tout le monde!
3. Let's create a slightly more useful
%%csv cell magic that parses a CSV string and returns a pandas
DataFrame object. This time, the arguments of the function are the command's options and the contents of the cell.
import pandas as pd
from io import StringIO

@register_cell_magic
def csv(line, cell):
    # We create a string buffer containing the
    # contents of the cell.
    sio = StringIO(cell)
    # We use pandas' read_csv function to parse
    # the CSV string.
    return pd.read_csv(sio)
4. The method we described is useful in an interactive session. If we want to use the same magic in multiple notebooks or if we want to distribute it, then we need to create an IPython extension that implements our custom magic command. The first step is to create a Python script (
csvmagic.py here) that implements the magic. We also need to define a special function
load_ipython_extension(ipython):
%%writefile csvmagic.py
import pandas as pd
from io import StringIO


def csv(line, cell):
    # We create a string buffer containing the
    # contents of the cell.
    sio = StringIO(cell)
    # We use pandas' read_csv function to parse
    # the CSV string.
    return pd.read_csv(sio)


def load_ipython_extension(ipython):
    """This function is called when the extension is
    loaded. It accepts an IPython InteractiveShell
    instance. We can register the magic with the
    `register_magic_function` method of the shell
    instance."""
    ipython.register_magic_function(csv, 'cell')
Writing csvmagic.py
5. Once the extension module is created, we need to import it into the IPython session. We can do this with the
%load_ext magic command. Here, loading our extension immediately registers our
%%csv magic function in the interactive shell:
%load_ext csvmagic
%%csv
col1,col2,col3
0,1,2
3,4,5
7,8,9
How it works...
An IPython extension is a Python module that implements the top-level
load_ipython_extension(ipython) function. When the
%load_ext magic command is called, the module is loaded and the
load_ipython_extension(ipython) function is called. This function is passed the current
InteractiveShell instance as an argument. This object implements several methods we can use to interact with the current IPython session.
The InteractiveShell class
An interactive IPython session is represented by a (singleton) instance of the
InteractiveShell class. This object handles the history, interactive namespace, and most features available in the session.
Within an interactive shell, we can get the current
InteractiveShell instance with the
get_ipython() function.
The list of all methods of
InteractiveShell can be found in the reference API (see the reference at the end of this recipe). The following are the most important attributes and methods:
user_ns: The user namespace (a dictionary).
push(): Push (or inject) Python variables in the interactive namespace.
ev(): Evaluate a Python expression in the user namespace.
ex(): Execute a Python statement in the user namespace.
run_cell(): Run a cell (given as a string), possibly containing IPython magic commands.
safe_execfile(): Safely execute a Python script.
system(): Execute a system command.
write(): Write a string to the default output.
write_err(): Write a string to the default error output.
register_magic_function(): Register a standalone function as an IPython magic function. We used this method in this recipe.
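As a quick illustration (not part of the original recipe; the variable names are arbitrary), a few of these methods could be exercised like this from an interactive session or an extension:

ip = get_ipython()          # current InteractiveShell instance
ip.push({'a': 1})           # inject a variable into the user namespace
ip.ev('a + 1')              # evaluate an expression -> 2
ip.ex('b = a * 10')         # execute a statement in the user namespace
ip.run_cell('print(a, b)')  # run a cell; magic commands are allowed here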
The Python extension module needs to be importable when using
%load_ext. Here, our module is in the current directory. In other situations, it has to be in the Python path. It can also be stored in
~/.ipython/extensions, which is automatically put in the Python path.
To ensure that our magic is automatically defined in our IPython profile, we can instruct IPython to load our extension automatically when a new interactive shell is launched. To do this, we have to open the
~/.ipython/profile_default/ipython_config.py file and put
'csvmagic' in the
c.InteractiveShellApp.extensions list. The
csvmagic module needs to be importable. It is common to create a Python package that implements an IPython extension, which itself defines custom magic commands.
There's more...
Many third-party extensions and magic commands exist, for example the
%%cython magic that allows one to write Cython code directly in a notebook.
Here are a few references:
- Documentation of IPython's extension system available at
- Defining new magic commands explained at
- Index of IPython extensions at
- API reference of InteractiveShell available at
See also
- Mastering IPython's configuration system
|
https://ipython-books.github.io/14-creating-an-ipython-extension-with-custom-magic-commands/
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
If I have
@Entity
public class EntityBean implements Serializable, TextInterface{
Then annotation hints on the line with @Entity offer to "Implement methods from TextInterface", while hints on the class definition line say "Create EJB Persistence Unit".
Could you please elaborate on what is actually wrong? I am sorry for my ignorance, but I don't know much about @Entity
and entity beans. Thanks.
Sure, np.
Simply put, the hints are switched. The hint on line with
"public class SomeClass implements SomeInterface"
should offer to implement abstract methods (as is the case in NB551). Not the line with "@Entity"
Put aside @Entity, while coding I saw that in standard JSE classes/annotations too. It seems like the hint for
implementing abstract classes is somehow glued to previous annotation instead of offending line.
Is this better?
If I see that again with JSE, I will try to post example, screenshot.
Dale
a) I have renamed the issue as it is generic, not bound to @Entity annotation only but any Java @Annotation.
b) see attached screenshots I made with @Deprecated annotation.
HTH, dale
Created attachment 46919 [details]
wrong hint line
Created attachment 46920 [details]
wrong hint line
Thanks Dale for an excellent report. I'm passing it on to the hints guys for closer look.
Well, the "non-abstract" error is simply placed at the first line of the class header - which in the given case contains
only "@Deprecated" annotation.
Resolving all issues with milestone "future" as LATER. If you feel strongly that
it should be implemented please reopen and set the target milestone to "next".
I'd like to NetFIX [1] this bug. Is it possible?
[1]
I am willing to review (and apply) the patch in java.hints, so you patch(es) is(are) welcome (unless Max, the current
owner of java.hints, disagrees).
I would recommend to first define the scope - what kind of errors do you want to move to a more appropriate place (and
what are the more appropriate places). The actual implementation of this should not be that difficult - see
ErrorHintsProvider.getLine in java.hints module. Creating a few tests would also be good, see ErrorHintsProviderTest in
java.hints. Note that there is a debugging panel "Errors" in the navigator that shows the error as returned by the
parser (javac): the start, end and preferred positions, error kind, etc. This panel is enabled only if assertions are
enabled in the IDE.
For the @Entity annotation, you would need to look at a different module (probably j2ee.jpa.verification or
j2ee.ejbverification).
Created attachment 80251 [details]
Patch for java.hints
To start, I have changed the way lines are highlighted for all class should be abstract errors (as listed in ImplementAllAbstractMethods). Now it highlights the class name instead of the annotation.
I also intend to change the EJB hint in order to highlight the annotation.
I find it hard to propose changes for the several other possible errors, so I think we should address just the cases
that have been reported in this issue. Other cases will result in new issues that could be addressed on their own.
Still haven't had the time to look at the tests, sorry.
j2ee.jpa.verification implements the Persistence Unit fix. However, given its common infrastructure for problem
detection and highlighting, a clean fix to this module would require analyzing all warnings it can generate and decide
which ones belong to the @Entity annotation, to the class and to other annotations it can handle.
Therefore, I suggest the current patch is applied to java.hints and another issue is filed against
j2ee.jpa.verification.
The patch seems fine to me, except that "compiler.err.abstract.cant.be.instantiated" is listed twice. (Also, it might be
reasonable to replace the cascade if with a switch here, but that is nitpick). Would be good to have some tests,
however. If you could provide test cases (small Java classes on which the fixed version provides better results, similar to:
), I can create the automated test cases from them myself.
Thanks.
> The patch seems fine to me, except that "compiler.err.abstract.cant.be.instantiated" is listed twice.
That's because ImplementAllAbstractMethods also contains the error twice in getCodes() and I used it to generate the
strings. I will submit a new patch without the duplicated line.
> (Also, it might be reasonable to replace the cascade if with a switch here, but that is nitpick).
Unless it falls through, that would actually change semantics, if I got the code right. For instance, the current code
would move the token from dot in:
p.new Something();
to Something since it would match dot and new.
> Would be good to have some tests, however. If you could provide test cases (small Java classes on which the fixed
> version provides better results, similar to:
>), I
> can create the automated test cases from them myself.
I will attach a few examples soon.
Created attachment 82078 [details]
Patch for java.hints
Created attachment 82079 [details]
Broken sample Java file for better fix
Created attachment 82080 [details]
Broken sample Java file for better fix
Honzo, can you review the latest version of the patch? Thanks!
Thanks for the patch, I have integrated it:
JPA issue submitted as # 165112. Closing this issue.
Integrated into 'main-golden', will be available in build *200905220201* on (upload may still be in progress)
Changeset:
User: Jan Lahoda <jlahoda@netbeans.org>
Log: #113119: correct placing of error underline for compiler.err.does.not.override.abstract and compiler.err.abstract.cant.be.instantiated errors.
Contributed by misterm@netbeans.org.
|
https://netbeans.org/bugzilla/show_bug.cgi?id=113119
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Posted On: Mar 22, 2018
Amazon Elastic Container Service (Amazon ECS) now includes integrated service discovery. This makes it easy for your containerized services to discover and connect with each other.
When you use Amazon ECS service discovery, you pay for the Route 53 resources that you consume, including each namespace that you create, and for the lookup queries your services make. Service health checks are provided at no cost. For more information on pricing, please see the documentation. Today, service discovery is available for Amazon ECS tasks using AWS Fargate or the EC2 launch type with awsvpc networking mode.
To learn more, visit the Amazon ECS Service Discovery documentation.
You can use Amazon ECS Service Discovery in all AWS regions where Amazon ECS and Amazon Route 53 Auto Naming are available. These include US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) regions. For more information on AWS regions and services, please visit the AWS global region table.
|
https://aws.amazon.com/about-aws/whats-new/2018/03/introducing-service-discovery-for-amazon-ecs/
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
My name is Piyush and I have just started learning to code in python. Right now I am working on a project. And would really appreciate if anyone can help in adding customized parameters in functions. I am mentioning my code and the problem I am facing down below:
class Chips():
    def __init__(self, total):
        while True:
            try:
                total = int(input("How many chips as total?: ?"))
            except:
                print("Please enter an integer")
            else:
                break
        self.total = total
        self.bet = 0

    def win_bet(self):
        self.total = self.total + self.bet

    def loose_bet(self):
        self.total = self.total - self.bet
However, I can set total = 100 and can run the game but I want the user to be able to enter the total chips he/she wants to add.
I want the input(total) in the while loop to be as the argument while running the game. But I keep on getting this error:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-9-b1b7b1c195f7> in <module>()
    367
    368 # Set up the Player's chips
--> 369 player_chips = Chips()  # remember the default value is 100
    370
    371 # Prompt the Player for their bet:

TypeError: __init__() missing 1 required positional argument: 'total'
Please Help!
Thanks for your attention to my request.
Your class takes a parameter in its constructor, but you also read it from the input in your constructor.
I think you are confused in what you are trying to achieve here.
Option 1:
If the caller of your code (the code that constructs your class), can be modified and know the
total at the instance creation time, just add the parameter in the constructor call.
total = 100
player_chips = Chips(total)
Option 2:
in case you can't modify the caller, (most likely from what I read), then that means you want to actually read the total from the input. Remove the argument from your constructor.
def __init__ (self):
instead of
def __init__(self, total):
c = Chips(100)
at the bottom of your code - no error. You override the value of total in the interactive constructor, but hey, you promised "init" you were sending it.
c = Chips()
works if you change your signature to:
def __init__ (self, total):
The interactive constructor seems like a very bad idea overall.
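One possible way to keep the prompt but move it out of the constructor is a small factory classmethod. This is just a sketch; the names and the default of 100 are illustrative:

class Chips:
    def __init__(self, total=100):
        self.total = total
        self.bet = 0

    @classmethod
    def from_user_input(cls):
        # Ask until the user types a valid integer.
        while True:
            try:
                return cls(int(input("How many chips as total?: ")))
            except ValueError:
                print("Please enter an integer")

# player_chips = Chips(100)               # non-interactive
# player_chips = Chips.from_user_input()  # interactive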
You should add method get_total_from_user and set default param in constructor
class Chips():
    def __init__(self, total=None):
        self.total = self.get_total_from_user() if not total else total
        self.bet = 0

    def win_bet(self):
        self.total = self.total + self.bet

    def loose_bet(self):
        self.total = self.total - self.bet

    def get_total_from_user(self):
        while True:
            try:
                return int(input("How many chips as total?: ?"))
            except:
                print("Please enter an integer")
It allows you to get total from user
Chips()
Or you can set it via passing value
Chips(100)
|
https://cmsdk.com/python/how-to-add-custom-parameter-parameter-in-python-functions.html
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
/* crypto/aes/aes.h */
/* ====================================================================
 * Copyright (c) 1998-2002 The OpenSSL Project.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in
 *    the documentation and/or other materials provided with the
 *    distribution.
 *
 * 3. All advertising materials mentioning features or use of this
 *    software must display the following acknowledgment:
 *    "This product includes software developed by the OpenSSL Project
 *    for use in the OpenSSL Toolkit. ()"
 *
 * 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to
 *    endorse or promote products derived from this software without
 *    prior written permission. For written permission, please contact
 *    openssl-core@openssl.org.
 *
 * 5. Products derived from this software may not be called "OpenSSL"
 *    nor may "OpenSSL" appear in their names without prior written
 *    permission of the OpenSSL Project.
 *
 * 6. Redistributions of any form whatsoever must retain the following
 *    acknowledgment:
 *    "This product includes software developed by the OpenSSL Project
 *    for use in the OpenSSL Toolkit ()"
 *
 * THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
 * EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR
 * ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
 * OF THE POSSIBILITY OF SUCH DAMAGE.
 * ====================================================================
 *
 */

#ifndef HEADER_AES_LOCL_H
# define HEADER_AES_LOCL_H

# include <openssl/e_os2.h>

# ifdef OPENSSL_NO_AES
#  error AES is disabled.
# endif

# include <stdio.h>
# include <stdlib.h>
# include <string.h>

# if defined(_MSC_VER) && (defined(_M_IX86) || defined(_M_AMD64) || defined(_M_X64))
#  define SWAP(x) (_lrotl(x, 8) & 0x00ff00ff | _lrotr(x, 8) & 0xff00ff00)
#  define GETU32(p) SWAP(*((u32 *)(p)))
#  define PUTU32(ct, st) { *((u32 *)(ct)) = SWAP((st)); }
# else
#  define GETU32(pt) (((u32)(pt)[0] << 24) ^ ((u32)(pt)[1] << 16) ^ ((u32)(pt)[2] << 8) ^ ((u32)(pt)[3]))
#  define PUTU32(ct, st) { (ct)[0] = (u8)((st) >> 24); (ct)[1] = (u8)((st) >> 16); (ct)[2] = (u8)((st) >> 8); (ct)[3] = (u8)(st); }
# endif

# ifdef AES_LONG
typedef unsigned long u32;
# else
typedef unsigned int u32;
# endif
typedef unsigned short u16;
typedef unsigned char u8;

# define MAXKC (256/32)
# define MAXKB (256/8)
# define MAXNR 14

/* This controls loop-unrolling in aes_core.c */
# undef FULL_UNROLL

#endif /* !HEADER_AES_LOCL_H */
|
https://fossies.org/linux/openssl/crypto/aes/aes_locl.h
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
[SOLVED]Show and use a custom dialog from within a widget or mainwindow
Hi all,
I have a 'stupid' question about using a costum dialog from within a widget/mainwindow.
I have a widget with a pushbutton, when i push the button i want to show a custom dialog and use its input to work with inside the first widget.
When i try this from within 'main.cpp' it works without a problem, but when i use the same from within the widget nothing seems to happen.
Does anybody know what im doing wrong or better, where i have to look for it?
This seems to not work:
@
//filename is widget.cpp
#include "dialog.h"
void Widget::on_pushButton_clicked()
{
this->hide(); //works
Dialog d; // doesnt work?
d.show(); // doesnt work
}
@
This works:
@
//filename is main.cpp
#include "dialog.h"
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
Dialog d; //works
d.show(); //works
return a.exec();
}
@
I know that i can make a function within the main.cpp to get it work, but i want to use it from within the widget.cpp and not use the main.cpp for functions ed. Is this possible or is it a really stupid question?
Thanks in advance,
vinb.
You should not hide your main widget.
You should call d.exec() on the dialog. This opens the new window as a modal dialog, i.e. it blocks input to all other windows while it is open. exec() returns when the dialog is closed (either by accept() or by reject()). See the docs of QDialog for some more information.
This looks a lot like another, very recent discussion here. You did search before asking, didn't you?
Thanks both!
And yes i've searched but with the wrong keywords i quess. :)
Sorry, for wasting your time.
|
https://forum.qt.io/topic/4686/solved-show-and-use-a-custom-dialog-from-within-a-widget-or-mainwindow
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Setting Up Mocha
React Native CLI installs the Jest testing framework by default, but in the last few versions of React Native it's had some stability issues.
Instead, we'll use the Mocha test framework. Its syntax is very similar to Jest. It takes a little work to set up, but it's worth it!
Removing Jest
First, let's remove Jest since we won't be using it. If you're using Expo, Jest is not installed by default. If you're using React Native CLI, run the following:

$ yarn remove jest babel-jest

Then, remove the following config and script from package.json:

 "scripts": {
-   "start": "node node_modules/react-native/local-cli/cli.js start",
-   "test": "jest"
+   "start": "node node_modules/react-native/local-cli/cli.js start"
 },
 ...
   "metro-react-native-babel-preset": "0.51.1",
   "react-test-renderer": "16.6.3"
- },
- "jest": {
-   "preset": "react-native"
 }
Installing Mocha
Mocha's ecosystem is split into several separate packages. We'll install the following, which are typically used together:
- Mocha, the test runner
- Chai, the assertion library
- Sinon, the test double library
- sinon-chai, which allows for running more readable expectations against Sinon test doubles

Install all of them:

$ yarn add --dev mocha \
    chai \
    sinon \
    sinon-chai

Next, add an NPM script to run mocha:

 "scripts": {
-   "start": "node node_modules/react-native/local-cli/cli.js start"
+   "start": "node node_modules/react-native/local-cli/cli.js start",
+   "test": "mocha \"test/**/*.spec.js\""
 },
Create a test folder at the root of your project, then add a mocha.opts file to configure mocha. Add the following to it:

--require @babel/register
--require chai/register-expect
--require test/setup

These flags do the following:

- Enables Babel transpilation so you can use modern JS features
- Sets up the expect() function so you can use it in any file without importing it
- Loads a custom setup.js file you'll create for additional setup

Let's create that test/setup.js file now and add the following:

import chai from 'chai';
import sinon from 'sinon';
import sinonChai from 'sinon-chai';

global.sinon = sinon;

chai.use(sinonChai);

This does the following:

- Makes sinon available globally so you don't need to import it
- Loads sinon-chai so you can do more readable assertions against Sinon test doubles
With this, our setup should be done.
Smoke Test
To confirm Mocha is working, create a test/unit folder, then create a test/unit/smoke.spec.js file. Add the following contents:

describe("truth", () => {
  it("is true", () => {
    expect(true).to.equal(true);
  });
});

Run the tests with yarn test. You should see output like the following:

$ yarn test
yarn run v1.13.0
$ mocha "test/**/*.spec.js"

  truth
    ✓ is true

  1 passing (29ms)
Configuring ESLint
Mocha has a number of globally-available functions, and we've set up Chai and Sinon to use globals as well, so ESLint will complain about these. We need to configure ESLint to accept them.
If you aren't already using ESLint in your project, it's easy to install in a React Native project. Add the following packages:
yarn add -D eslint \
  babel-eslint \
  eslint-config-codingitwrong \
  eslint-plugin-import \
  eslint-plugin-jsx-a11y \
  eslint-plugin-react

Then create an .eslintrc.js file at the root of your project and add the following:

module.exports = {
  extends: [
    'plugin:react/recommended',
    'codingitwrong',
  ],
  settings: {
    react: {
      version: '16.5',
    },
  },
  parser: 'babel-eslint',
  env: {
    'browser': true,
    'es6': true,
    'node': true,
  },
  rules: {
    'react/prop-types': 'off',
  }
};
Most code editors can be configured to run ESLint rules as you edit. You can also add an NPM script to do so:
"scripts": { "start": "node node_modules/react-native/local-cli/cli.js start", + "lint": "eslint \"**/*.js\"", "test": "jest" },
To configure ESLint to allow Mocha's globals, add them to the list of globals ESLint will accept:
   'es6': true,
   'node': true,
 },
+ globals: {
+   'after': true,
+   'afterEach': true,
+   'before': true,
+   'beforeEach': true,
+   'describe': true,
+   'expect': true,
+   'it': true,
+   'sinon': true
+ },
 rules: {
   'react/prop-types': 'off',
 }
|
https://reactnativetesting.io/unit/setup.html
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
If you’ve been following my App Building Made Easier series (Parts 1 and 2 here), hopefully you’ve just started building your apps. Before you get too far, one thing that you’ll want to look at sooner rather than later is how you can secure the app. In this guest post, Andrei Marukovich explores several new features that have been added to Windows 8 and Visual Studio 2012 to help you do just that.
Guest post by Andrei Marukovich, Microsoft MVP
Building Secure Windows Store Apps
Application security is a topic of continuous interest and attention. Modern applications are expected to be robust, predictable and respectful of users’ privacy. To help developers build safer Windows Store apps, several new security features have been added to Windows 8 and Visual Studio 2012.
- App sandbox - Windows 8 runtime engine executes Store apps in a special protected container. This sandboxed environment prevents direct communications between Store apps and the system and ensures that other apps will not affect your application’s data or behaviour.
- App capabilities - Every app must declare which data or device capabilities it will use, for example Documents Library or webcam. Explicitly declared capabilities inform the user about your app’s behaviour and let Windows block all requests to undeclared resources.
- Digital signature - All Windows Store apps are signed. They can be signed automatically, using a certificate associated with your developer account, or manually, using your own certificate. Only applications with validated signatures can be installed from the Windows Store.
- Protection against common code exploits - Security techniques such as buffer overflow check, address space layout randomization, and prevention of code execution from data pages are enabled by default for applications compiled using Visual Studio 2012.
When thinking about the security, you should be thinking about securing your app (i.e. your code from hacks or other abuse) and securing your users (i.e. protecting their information). These are a few things that you can do to lock down as much as possible:
Minimize number of used capabilities - By disabling unused capabilities in your app’s manifest, you make the app safer for the customers. It mitigates risks associated with incorrect user’s actions and application errors, makes your app less vulnerable to attacks. For example, Internet capability is switched on by default, but if your app does not require Internet access and does not use AdControl this capability can be safely disabled. Similarly, use of file and folder pickers does not require declaration of library capabilities and when it is possible, you should prefer using pickers instead of direct access to the libraries.
Select an appropriate data storage strategy – The Windows execution environment isolates data and settings storages for Store apps; however, it does not prevent desktop applications from accessing and modifying application data – every stored file is vulnerable. In order to protect user's data you may consider data encryption, storing data on a remote server (in the Cloud?), or perhaps decide not to store data at all. To help with data encryption, WinRT provides several cryptography-related classes grouped under Windows.Security.Cryptography.Core namespace.
Use credential locker for storing user credentials - As user’s credentials is one of the most sensitive pieces of information, WinRT includes a dedicated mechanism for storing user name and passwords – Credential Locker. It is accessible through the PasswordVault class and allows storing and retrieving user’s credentials. Moreover, the system can roam these credentials between user’s devices, ultimately improving your app’s usability.
Protect your code - As with any other application, Windows Store apps can be disassembled, exposing your algorithms and business logic to reverse engineering. Additionally, disassembled code may reveal your encryption algorithms and open a route for accessing user's data or your assets. Code obfuscation is the simplest approach that can be easily applied to protect your app against reverse engineering. You can also develop critical components using C++ or even move them to a remote server/service where you can fully control the execution environment.
Do not trust anybody – Windows Store apps can be connected to multiple remote data sources, implement various contracts and support many protocols. All these connection points enrich application functionality but at the same time, they are potential targets for attacks. As the result, any data retrieved from external sources needs to be carefully validated before use - downloaded files, Web service responses, protocol parameters, etc. It is especially important for the apps created using JavaScript, where scripts can be executed dynamically.
For more on developing secure apps
If you are interested to get more information about security practices for your Windows Store apps, you can have a look at the Developing Secure Apps whitepaper on MSDN.
As always, if you have any specific questions or concerns about Windows Store app security or how to implement security protocols, or if you read something that you want to find more about, feel free to start a new discussion in the Canadian Developer Connection LinkedIn group. Andrei, community experts, and your fellow Canadian developers are there to answer and share.
|
https://blogs.msdn.microsoft.com/cdndevs/2013/03/19/building-secure-windows-store-apps/
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Hi,
No sure if it is right place to ask this question.
I just installed jira-python tool in my Win7 with pip tool.
However, when I try to use jirashell. Python gives me such error message. I am not familiar with python.
Can someone give me some hint for how to fix ? thanks
F:\MyProgram\dist\pip-1.2.1>jirashell
Traceback (most recent call last):
File "F:\Python2.7\Scripts\jirashell-script.py", line 5, in <module>
from pkg_resources import load_entry_point
File "F:\Python2.7\lib\site-packages\setuptools-0.6c11-py2.7.egg\pkg_resources
.py", line 2607, in <module>
File "F:\Python2.7\lib\site-packages\setuptools-0.6c11-py2.7.egg\pkg_resources
.py", line 565, in resolve
pkg_resources.DistributionNotFound: pyreadline>=1.7.1
Did you perform the installation as per the docs at ? It says to install using "pip install jira-python"
(I am assuming that you are referring to the same)
It mentions a lot of dependencies and you exception looks to be a dependency issue and looks like you have just downloaded the distribution.
Yes. I use pip install and it helps to install a lot of modules...
I don't understand why I still can't use jirashell.
@Renjith is right, you need to run "pip install jira-python" rather than what the instructions say ("pip install jira").
i found out the solution. i was trying in windows. this is actually for linux, thanks
You can use python in Windows. Just make sure Python is in your PATH environmental variable..
|
https://community.atlassian.com/t5/Answers-Developer-Questions/JIRA-Python-installation-question/qaq-p/521705
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
", unless a "name" argument is specified below.
A well-written state function will follow these steps:
Note
This is an extremely simplified example. Feel free to browse the source code for Salt's state modules to see other examples.
Set up the return dictionary and perform any necessary input validation (type checking, looking for use of mutually-exclusive arguments, etc.).
ret = {'name': name,
       'result': False,
       'changes': {},
       'comment': ''}

if foo and bar:
    ret['comment'] = 'Only one of foo and bar is permitted'
    return ret
Check if changes need to be made. This is best done with an information-gathering function in an accompanying execution module. The state should be able to use the return from this function to tell whether or not the minion is already in the desired state.
result = __salt__['modname.check'](name)
If step 2 found that the minion is already in the desired state, then exit
immediately with a
True result and without making any changes.
if result:
    ret['result'] = True
    ret['comment'] = '{0} is already installed'.format(name)
    return ret
If step 2 found that changes do need to be made, then check to see if the
state was being run in test mode (i.e. with
test=True). If so, then exit
with a
None result, a relevant comment, and (if possible) a
changes
entry describing what changes would be made.
if __opts__['test']:
    ret['result'] = None
    ret['comment'] = '{0} would be installed'.format(name)
    ret['changes'] = result
    return ret
Make the desired changes. This should again be done using a function from an
accompanying execution module. If the result of that function is enough to
tell you whether or not an error occurred, then you can exit with a
False result and a relevant comment to explain what happened.
result = __salt__['modname.install'](name)
Perform the same check from step 2 again to confirm whether or not the minion is in the desired state. Just as in step 2, this function should be able to tell you by its return data whether or not changes need to be made.
ret['changes'] = __salt__['modname.check'](name)
As you can see here, we are setting the
changes key in the return
dictionary to the result of the
modname.check function (just as we did
in step 4). The assumption here is that the information-gathering function
will return a dictionary explaining what changes need to be made. This may
or may not fit your use case.
Set the return data and return!
if ret['changes']:
    ret['comment'] = '{0} failed to install'.format(name)
else:
    ret['result'] = True
    ret['comment'] = '{0} was installed'.format(name)
return ret
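For reference, stitching the snippets from the seven steps above together gives a complete (if simplified) state function. This is only a sketch: the function name installed is made up, and modname.check / modname.install are the same placeholder execution-module functions used in the steps.

def installed(name, foo=None, bar=None):
    ret = {'name': name,
           'result': False,
           'changes': {},
           'comment': ''}
    if foo and bar:
        ret['comment'] = 'Only one of foo and bar is permitted'
        return ret

    # Step 2: gather information with an execution-module helper.
    result = __salt__['modname.check'](name)

    # Step 3: nothing to do, exit early.
    if result:
        ret['result'] = True
        ret['comment'] = '{0} is already installed'.format(name)
        return ret

    # Step 4: honor test mode.
    if __opts__['test']:
        ret['result'] = None
        ret['comment'] = '{0} would be installed'.format(name)
        ret['changes'] = result
        return ret

    # Step 5: make the change.
    __salt__['modname.install'](name)

    # Steps 6-7: re-check and report.
    ret['changes'] = __salt__['modname.check'](name)
    if ret['changes']:
        ret['comment'] = '{0} failed to install'.format(name)
    else:
        ret['result'] = True
        ret['comment'] = '{0} was installed'.format(name)
    return ret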
Before the state module can be used, it must be distributed to minions. This
can be done by placing them into
salt://_states/. They can then be
distributed manually to minions by running
saltutil.sync_states or
saltutil.sync_all. Alternatively, when running a
highstate custom types will automatically be synced.
NOTE: Writing state modules with hyphens in the filename will cause issues with !pyobjects routines. Best practice to stick to underscores.
Any custom states which have been synced to a minion, that are named the same
as one of Salt's default set of states, will take the place of the default
state with the same name. Note that a state module's name defaults to one based
on its filename (i.e.
foo.py becomes state module
foo), but that its
name can be overridden by using a __virtual__ function.
As with Execution Modules, State Modules can also make use of the
__salt__
and
__grains__ data. See cross calling execution modules..
All of the Salt state modules are available to each other and state modules can call functions available in other state modules.
The variable
__states__ is packed into the modules after they are loaded into
the Salt minion.
The
__states__ variable is a Python dictionary
containing all of the state modules. Dictionary keys are strings representing
the names of the modules and the values are the functions themselves.
Salt state modules can be cross-called by accessing the value in the
__states__ dict:
ret = __states__['file.managed'](name='/tmp/myfile', source='salt://myfile')
This code will call the managed function in the
file state module and pass the arguments
name and
source
to it.
A State Module must return a dict containing the following keys/values:
name: The same value passed to the state as "name".
changes: A dict describing the changes made. Each thing changed should be a key, with its value being another dict with keys called "old" and "new" containing the old/new values. For example, the pkg state's changes dict has one key for each package changed, with the "old" and "new" keys in its sub-dict containing the old and new versions of the package. For example, the final changes dictionary for this scenario would look something like this:
ret['changes'].update({'my_pkg_name': {'old': '', 'new': 'my_pkg_name-1.0'}})
result: A tristate value.
True if the action was successful,
False if it was not, or
None if the state was run in test mode,
test=True, and changes would have been made if the state was not run in
test mode.
Note
Test mode does not predict if the changes will be successful or not,
and hence the result for pending changes is usually
None.
However, if a state is going to fail and this can be determined
in test mode without applying the change,
False can be returned.
comment: A list of strings or a single string summarizing the result. Note that support for lists of strings is available as of Salt 2018.3.0. Lists of strings will be joined with newlines to form the final comment; this is useful to allow multiple comments from subparts of a state. Prefer to keep line lengths short (use multiple lines as needed), and end with punctuation (e.g. a period) to delimit multiple comments.
Note
States should not return data which cannot be serialized such as frozensets..
Note
Be sure to refer to the result table listed above and display any possible changes when writing support for test. Looking for changes in a state is essential to test=true functionality. If a state is predicted to have no changes when test=true (or test: true in a config file) is used, then the result of the final state should not be None.
You can call the logger from custom modules to write messages to the minion logs. The following code snippet demonstrates writing log messages:
import logging

log = logging.getLogger(__name__)

log.info('Here is Some Information')
log.warning('You Should Not Do That')
log.error('It Is Busted')
A state module author should always assume that strings fed to the module have already been decoded from strings into Unicode. In Python 2, these will be of type 'Unicode' and in Python 3 they will be of type str. The same assumption should be made when calling from a state to other Salt sub-systems, such as execution modules.
|
https://docs.saltstack.com/en/develop/ref/states/writing.html
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
- If a stack trace or exception is involved in a report, then grouping will only consider this information.
- If a template is involved, then grouping will consider the template.
- As a fallback, the message of the event will be used for grouping.
Grouping by Stacktrace
When Sentry detects a stack trace in the event data (either directly or as part of an exception), the grouping is effectively based entirely on the stack trace. If you modify your stack trace, for example by introducing a new level through the use of decorators, your stack trace will change and so will the grouping. For this reason, many SDKs support hiding irrelevant stack trace frames (for instance, the Python SDK will skip all stack frames with a local variable called __traceback_hide__ set to True).
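As a small illustrative sketch (not taken from the Sentry docs), a wrapper could hide its own frame from grouping like this:

def call_hidden(func, *args, **kwargs):
    # The Python SDK skips frames that define this local variable.
    __traceback_hide__ = True  # noqa: F841
    return func(*args, **kwargs)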
Grouping By Exception
If no stack trace is available but an exception is, the grouping will consider the exception's type and value instead.
Minimal example
This minimal example will put all exceptions of the current scope into the same issue/group:
from sentry_sdk import configure_scope with configure_scope() as scope: scope.fingerprint = ['my-view-function']
sentry::configure_scope(|scope| { scope.set_fingerprint(Some(&["my-view-function"])); });
Sentry.configureScope((scope) => { scope.setFingerprint(['my-view-function']); });
using Sentry; SentrySdk.ConfigureScope(scope => { scope.SetFingerprint(new[] { "my-view-function" }); });
There are two common real-world use cases for the
fingerprint attribute:
Example: Split up a group into more groups (groups are too big)
using (SentrySdk.Init(o => { o.BeforeSend = @event => { if (@event.Exception is SqlConnection ex) { @event.SetFingerprint(new [] { "database-connection-error" }); } return @event; }; }
Example: Merge a lot of groups into one group (groups are too small)
A generic error, such as a database connection error, has many different stack traces and never groups together.
The following example will group all of these errors together by assigning a common fingerprint:
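The code sample for this case did not survive extraction; as a stand-in, here is a hedged sketch using the Python SDK's before_send hook. The exception type and the fingerprint value are placeholders.

import sentry_sdk

class DatabaseUnavailable(Exception):   # placeholder exception type
    pass

def before_send(event, hint):
    if 'exc_info' in hint:
        _, exc_value, _ = hint['exc_info']
        if isinstance(exc_value, DatabaseUnavailable):
            # Collapse every occurrence into a single issue.
            event['fingerprint'] = ['database-unavailable']
    return event

sentry_sdk.init(before_send=before_send)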
|
https://docs.sentry.io/data-management/rollups/?platform=browser
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
11.6. Applying digital filters to speech sounds
In this recipe, we will show how to play sounds in the Notebook.
Getting ready
You need the pydub package. You can install it with
pip install pydub or download it from.
This package requires the open source multimedia library FFmpeg for the decompression of MP3 files, available at.
How to do it
1. Let's import the packages:
from io import BytesIO
import tempfile
import requests
import numpy as np
import scipy.signal as sg
import pydub
import matplotlib.pyplot as plt
from IPython.display import Audio, display
%matplotlib inline
2. We create a Python function that loads a MP3 sound and returns a NumPy array with the raw sound data:
def speak(data):
    # We convert the mp3 bytes to wav.
    audio = pydub.AudioSegment.from_mp3(BytesIO(data))
    with tempfile.TemporaryFile() as fn:
        wavef = audio.export(fn, format='wav')
        wavef.seek(0)
        wave = wavef.read()
    # We get the raw data by removing the 24 first
    # bytes of the header.
    x = np.frombuffer(wave, np.int16)[24:] / 2.**15
    return x, audio.frame_rate
3. We create a function that plays a sound (represented by a NumPy vector) in the Notebook, using IPython's
Audio class:
def play(x, fr, autoplay=False):
    display(Audio(x, rate=fr, autoplay=autoplay))
4. Let's play a sound that had been obtained from:
url = ('' 'cookbook-2nd-data/blob/master/' 'voice.mp3?raw=true') voice = requests.get(url).content
x, fr = speak(voice) play(x, fr) fig, ax = plt.subplots(1, 1, figsize=(8, 4)) t = np.linspace(0., len(x) / fr, len(x)) ax.plot(t, x, lw=1)
5. Now, we will hear the effect of a Butterworth low-pass filter applied to this sound (500 Hz cutoff frequency):
b, a = sg.butter(4, 500. / (fr / 2.), 'low')
x_fil = sg.filtfilt(b, a, x)

play(x_fil, fr)

fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.plot(t, x, lw=1)
ax.plot(t, x_fil, lw=1)
We hear a muffled voice.
6. Now, with a high-pass filter (1000 Hz cutoff frequency):
b, a = sg.butter(4, 1000. / (fr / 2.), 'high')
x_fil = sg.filtfilt(b, a, x)

play(x_fil, fr)

fig, ax = plt.subplots(1, 1, figsize=(6, 3))
ax.plot(t, x, lw=1)
ax.plot(t, x_fil, lw=1)
It sounds like a phone call.
7. Finally, we can create a simple widget to quickly test the effect of a high-pass filter with an arbitrary cutoff frequency: We get a slider that lets us change the cutoff frequency and hear the effect in real-time.
from ipywidgets import widgets

@widgets.interact(t=(100., 5000., 100.))
def highpass(t):
    b, a = sg.butter(4, t / (fr / 2.), 'high')
    x_fil = sg.filtfilt(b, a, x)
    play(x_fil, fr, autoplay=True)
How it works...
The human ear can hear frequencies up to 20 kHz. The human voice frequency band ranges from approximately 300 Hz to 3000 Hz.
Digital filters were described in Chapter 10, Signal Processing. The example given here allows us to hear the effect of low- and high-pass filters on sounds.
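For example, a band-pass filter restricted to the voice band mentioned above can be tried with a small sketch (reusing x, fr, and play() from the steps above; the 300/3000 Hz cutoffs are illustrative):

b, a = sg.butter(4, [300. / (fr / 2.), 3000. / (fr / 2.)], 'bandpass')
x_band = sg.filtfilt(b, a, x)
play(x_band, fr)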
There's more...
Here are a few references:
- Audio signal processing on Wikipedia, available at
- Audio filters on Wikipedia, available at
- Voice frequency on Wikipedia, available at
- PyAudio, an audio Python package that uses the PortAudio library, available at
See also
- Creating a sound synthesizer in the Notebook
|
https://ipython-books.github.io/116-applying-digital-filters-to-speech-sounds/
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Then you have to deploy this change to your staging environment... rinse, repeat. You will have to find a way to create that same table in an existing database. This gets harder when you don't have (easy) access to the database on the remote server.
This is where Umbraco migrations come in. Migrations were introduced in version 6 of Umbraco to help us change database schemas during an upgrade of Umbraco. Migrations use code instead of SQL upgrade scripts; while scripts are nice, they are limited to executing SQL. We realized that sometimes there's a need to execute some code during upgrades as well. For example: we sometimes want to migrate data that is difficult to handle in a SQL script but would be really easy to deal with using a bit of code.
As of Umbraco version 7.3 these migrations are also very useful for making schema changes in different environments by just deploying your code, no more need to write manual SQL scripts that need to run on each environment.
How it works
In the Umbraco database we have a table called umbracoMigration that tracks which migrations have ran for that specific database, so on a fresh 7.4.2 install you'd see the following:
When Umbraco starts, the first thing it does is that it gets the current version of Umbraco using `Umbraco.Core.Configuration.UmbracoVersion.Current` which in this case is 7.4.2 but when you update the Umbraco dlls to version 7.4.3 it will return 7.4.3. If Umbraco then cannot find the migration with name "Umbraco" and version "7.4.3" in the database, you will get the upgrade installer screen.
Once you click the Continue button Umbraco goes and finds all of the classes that have a `Migration` attribute with a version between the newest version in the `umbracoMigration` table and the `Umbraco.Core.Configuration.UmbracoVersion.Current` version. For example, if I've upgraded my Umbraco dlls from 7.3.5 to 7.4.2 it would find migrations targetting versions higher than 7.3.5 and lower than or equal to 7.4.2.
We don't do any migrations for patch releases, only for minor and major versions (remember a version is: major.minor.patch). So in reality the upgrade from 7.3.5 to 7.4.2 would only find migrations targeting version 7.4.0 like the ones above. After these have been executed, a new entry will appear in the `umbracoMigration` table, indicating the latest migration that ran on this database. For the Our Umbraco database, for example, you can see exactly which upgrades were done when:
The important part to understand about this is that when you deploy your website to the next environment, it will do this same comparison: find `Umbraco.Core.Configuration.UmbracoVersion.Current` and compare that to the highest migration in the `umbracoMigration` table. They will be different because the migration only ran on our local environment. You will again see the upgrade screen on your website, click continue and the migrations run on that environment after which the upgrade is complete. This means that any environment you deploy to will be consistent.
Sidenote: we didn't always use to do this and people would often forget to run the upgrade installer after deploying their upgraded website from local to the next environment. Most times this wasn't a problem, but when there was actually changes to be made to the database they might have been forgotten, leading to an inconsistent database, leading to problems later on. This is also why you sometimes see database upgrade errors when these migrations run, at some point the proper changes were not made to your database years ago, leading to wrong assumptions on our end and an inability to upgrade your database.
You too can do this
Long story short: migrations are great! Now let's see how you can utilize them.
The table that I mentioned in my previous blog post could, for example, consist of a nodeId (a reference to a page in Umbraco) and a count (the number of times this page was visited). In this example we're going to be using Umbraco's ORM, called PetaPoco, and when using that, we can describe the table we want to use in a C# class like so:
using Umbraco.Core.Persistence;
namespace Example.Models
{
[TableName("Statistics")]
[PrimaryKey("nodeId", autoIncrement = false)]
public class Statistics
{
[Column("nodeId")]
public int NodeId { get; set; }
[Column("count")]
public int Count { get; set; }
}
}
In order to build a migration, we can make a class that has the `Migration` attribute and inherits from `MigrationBase`. Inheriting from that requires you to implement the `Up()` and the `Down()` methods, for doing and upgrade and, if necessary, a downgrade.
using Example.Models;
using Umbraco.Core;
using Umbraco.Core.Logging;
using Umbraco.Core.Persistence;
using Umbraco.Core.Persistence.Migrations;
using Umbraco.Core.Persistence.SqlSyntax;
namespace Example.Migrations
{
[Migration("1.0.0", 1, "Statistics")]
public class CreateStatisticsTable : MigrationBase
{
private readonly UmbracoDatabase _database = ApplicationContext.Current.DatabaseContext.Database;
private readonly DatabaseSchemaHelper _schemaHelper;
public CreateStatisticsTable(ISqlSyntaxProvider sqlSyntax, ILogger logger)
: base(sqlSyntax, logger)
{
_schemaHelper = new DatabaseSchemaHelper(_database, logger, sqlSyntax);
}
public override void Up()
{
_schemaHelper.CreateTable<Statistics>(false);
// Remember you can execute ANY code here and in Down()..
// Anything you can think of, go nuts (not really!)
}
public override void Down()
{
_schemaHelper.DropTable<Statistics>();
}
}
}
The migration attribute needs to be provided with a version number, since we're just starting out this is set to "1.0.0". The next argument is the sort order, if there's multiple migrations necessary to upgrade to "Statistics" version 1.0.0 you can run them in the correct order. We use the `Statistics` class we created to describe the table earlier to create or drop the table.
Finally, we need to make this migration run. Since the attribute has the third argument "Statistics", it will not be triggered automatically when you upgrade Umbraco; only migrations with the name "Umbraco" run automatically. So we need to run it manually somehow. In the future we want to change Umbraco so that it also runs your custom migrations through the Umbraco upgrade installer; for now you'll need to handle it yourself.
In order to run this, we can create an EventHandler that runs when Umbraco starts. In this event handler we will look for the newest migration that ran for the "Statistics" product to check if we need to actually run any migrations. If the database tells us: version 1.0.0 of "Statistics" has been ran, we do nothing. If the version doesn't exist or is lower than the current version, we of course need to run the migration to get the database in a consistent state.
using System;
using System.Linq;
using Semver;
using Umbraco.Core;
using Umbraco.Core.Logging;
using Umbraco.Core.Persistence.Migrations;
using Umbraco.Web;
namespace Example.Eventhandlers
{
public class MigrationEvents : ApplicationEventHandler
{
protected override void ApplicationStarted(UmbracoApplicationBase umbracoApplication, ApplicationContext applicationContext)
{
HandleStatisticsMigration();
}
private static void HandleStatisticsMigration()
{
const string productName = "Statistics";
var currentVersion = new SemVersion(0, 0, 0);
// get all migrations for "Statistics" already executed
var migrations = ApplicationContext.Current.Services.MigrationEntryService.GetAll(productName);
// get the latest migration for "Statistics" executed
var latestMigration = migrations.OrderByDescending(x => x.Version).FirstOrDefault();
if (latestMigration != null)
currentVersion = latestMigration.Version;
var targetVersion = new SemVersion(1, 0, 0);
if (targetVersion == currentVersion)
return;
var migrationsRunner = new MigrationRunner(
ApplicationContext.Current.Services.MigrationEntryService,
ApplicationContext.Current.ProfilingLogger.Logger,
currentVersion,
targetVersion,
productName);
try
{
migrationsRunner.Execute(UmbracoContext.Current.Application.DatabaseContext.Database);
}
catch (Exception e)
{
LogHelper.Error<MigrationEvents>("Error running Statistics migration", e);
}
}
}
}
Note: for versions before 7.4.2 you'll need to build in an extra `catch` as Umbraco was doing something silly, which is now fixed. So before the `catch (Exception e)` you can add a specific `catch`
catch (System.Web.HttpException e)
{
// because umbraco runs some other migrations after the migration runner
// is executed we get httpexception
// catch this error, but don't do anything
// fixed in 7.4.2+ see :
}
Cool, we now have a new table we can use and the migration has been noted for our local database. When we deploy this site, the migration will run again as it's not been recorded in that database yet.
Just as some code to test this I've added the counter to my Master template so it will execute on each page, it's not great architecture but it at least allows me to do some quick testing.
@{
Layout = null;
var database = ApplicationContext.Current.DatabaseContext.Database;
var query = new Sql()
.From<Statistics>()
.Where<Statistics>(x => x.NodeId == Model.Content.Id);
var result = database.Fetch<Statistics>(query).FirstOrDefault();
if (result == null)
{
database.Insert(new Statistics { NodeId = Model.Content.Id, Count = 1 });
}
else
{
result.Count = result.Count + 1;
database.Update(result);
}
}
<span>Views: @(result == null ? 1 : result.Count) - NodeId: @Model.Content.Id</span>
And after a few refreshes of the page I can see that this works like a charm.
And in our browser:
Now imagine you want to count things in different categories, like PageViews, Downloads, Clicks, etc. You can still use this table but you might want to add a category name to it so you can track different types of counters.
First, we can update our `Statistics` class and add the Category there.
[Column("category")]
public string Category { get; set; }
Then we can add a new migration that adds a column to the existing table.
using Umbraco.Core.Logging;
using Umbraco.Core.Persistence.Migrations;
using Umbraco.Core.Persistence.SqlSyntax;
namespace Example.Migrations
{
[Migration("1.0.1", 1, "Statistics")]
public class AddCategoryToStatisticsTable : MigrationBase
{
public AddCategoryToStatisticsTable(ISqlSyntaxProvider sqlSyntax, ILogger logger)
: base(sqlSyntax, logger)
{ }
public override void Up()
{
Alter.Table("Statistics").AddColumn("Category").AsString().Nullable();
}
public override void Down()
{
Delete.Column("Category").FromTable("Statistics");
}
}
}
The last thing we need to do is tell the EventHandler that we're expecting our "Statistics" product to be of a new version now, 1.0.1. Note that the migration above is also created to update the product to version 1.0.1.
var targetVersion = new SemVersion(1, 0, 1);
When this runs we can see in the `umbracoMigration` table that, once again, the migration ran. We also see the new column on the `Statistics` table that we have there.
A quick update of our code now allows us to log the category of our counter as well.
@using Example.Models
@using Umbraco.Core.Persistence
@inherits UmbracoTemplatePage
@{
Layout = null;
var database = ApplicationContext.Current.DatabaseContext.Database;
var category = "PageView";
var query = new Sql()
.From<Statistics>()
.Where<Statistics>(x => x.NodeId == Model.Content.Id && x.Category == category);
var result = database.Fetch<Statistics>(query).FirstOrDefault();
if (result == null)
{
database.Insert(new Statistics { NodeId = Model.Content.Id, Count = 1, Category = category });
}
else
{
result.Count = result.Count + 1;
database.Update(result);
}
}
<span>Views: @(result == null ? 1 : result.Count) - NodeId: @Model.Content.Id</span>
Conclusion
This post was very much inspired by a recent question on the forum and the answers there, where I learned not to do this in a "hacky" way.
In this article we've seen that we can create migrations, "things" that need to be executed once on each environment that you deploy your website to. These "things" could be database tables, but you could also imagine that you might want to add a property to a document type, anything is possible. Migrations can help you make sure that all of your environments are in a consistent state when you deploy it to the next environment.
18 comments on this article
Glad to see more info about this. Definitely appreciate the deep dive that even shows how it works at the database level. All I had to go on previously was this forum thread:
This is great. Can this be incorporated into the Umbraco documentation (if it isn't already)?
Really great and useful post! Thank you Seb !
Just a quick question, when Umbraco runs your custom migration, does it load Umbraco Upgrade page? or it does it behind the scene ?
@Ali No, right now your custom migrations always run during startup in your event handler, not in the upgrade installer screen.
As said in the post above, we want to run this in the upgrade installer as well but that's not there yet.
Seb It's a big advantage that it doesn't run the Umbraco upgrade page. (I am a big fan of that now)
Adding that to the upgrade installer is also is useful but not essential.
I prefer the Umbraco Installer just updates Umbraco related stuff and shouldn't really care about what is happening outside of that.
Thanks for reply and really useful post. Keep up the good work !
The problem then, of course, is that there's no nice UI to show people if something went wrong, so you're going to be stuck on either throwing a YSOD or sending out email notifications or something.
There's also the timing issue, if your migrations take a long time to execute, then that might cause your site to seem unresponsive, hard to tell if it's completely broken or still working on the updates.
Thank you for this post. Works wonders!
Really nice and detailled explained. Love the way Umbraco is designed to do almost anything!
Thanks for helping me out with upgrading custom tables!
/Michaël
No migration should be using any reference to ApplicationContext.Current or the Database or DatabaseContext directly. The usage of this `_schemaHelper.CreateTable<Statistics>(false);` to create a table is incorrect and will result in you not being able to rollback if something goes wrong since this executes outside of the migrations transaction. You can see in the Umbraco Core the code we use to create/modify tables and all of that is done on the underlying migration context.
Also ApplicationContext.Current singletons shouldn't be used on views since it's already exposed as a property
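To illustrate Shannon's point about running table creation on the migration context, here is a minimal sketch of what such a migration could look like. It assumes the FluentMigrator-style Create/Delete syntax exposed by MigrationBase (the same family as the Alter.Table call used earlier in this post); the class name, version number and column definitions are illustrative only and not taken from the post above.
using Umbraco.Core.Logging;
using Umbraco.Core.Persistence.Migrations;
using Umbraco.Core.Persistence.SqlSyntax;
namespace Example.Migrations
{
[Migration("1.0.0", 1, "Statistics")]
public class CreateStatisticsTable : MigrationBase
{
public CreateStatisticsTable(ISqlSyntaxProvider sqlSyntax, ILogger logger)
: base(sqlSyntax, logger)
{ }
public override void Up()
{
// Runs through the migration context, so it takes part in the migration transaction
Create.Table("Statistics")
.WithColumn("id").AsInt32().Identity().PrimaryKey()
.WithColumn("nodeId").AsInt32().NotNullable()
.WithColumn("count").AsInt32().NotNullable();
}
public override void Down()
{
Delete.Table("Statistics");
}
}
}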
Hi.
I have a question: what will happen if the upgrade fails?
Do all migrations run in a transaction?
Can I just restore the files in the web site folder to roll back to the previous version?
This article really helped!
The only thing I didn't like with the implementation is that you have to tell what the field type is when a new field is created on a table:
```
Alter.Table("Statistics").AddColumn("Category").AsString().Nullable();
```
It has to have `.AsString()` or an error will occur!
I have already implemented the property on my model - if I create a new table using that model using the schema helper, it automagically decides what type the field should be:
```
schemaHelper.CreateTable<Statistics>(false);
```
I feel like I should be able to alter the table using an existing property on my model rather than having to use dodgy strings. Something more like:
```
Alter.Table<Statistics>().AddColumn<Statistics>(x => x.Category);
```
Where `Category` has already been decorated with `[NullSetting(NullSetting = NullSettings.Null)]`.
Great post about Migrations.
Can I use Migrations on Umbraco Cloud projects, e.g. to create a custom table or alter an existing one on another environment? I have seen it being used, but mostly in packages.
Is it better to use Migrations to create the custom table (if it doesn't exist), than just running the logic in e.g. the ApplicationStarted event, when it isn't a package?
Yes you can use migrations on Cloud. However, be aware that data in your custom tables will not deploy between Cloud environments.
I'm not sure what you mean with your package question, it's always great to run migrations to create a table if it doesn't exist though. ;-)
As Shannon says, have a look at the examples in the core for better code than in this post, for 7.7.0 we have these migrations for example (creating tables is in there):
Hi Sebastiaan
What I meant was that it is not just limited to use in packages, but can also be used for creating a table in any project, if it doesn't exist?
In my case I just needed to extend uCommerce with a custom Act Target, so I needed to create a custom table for this and insert a few entries in two existing core uCommerce tables.
I don't need to deploy changes between environments, but just ensure these data are created on other environments and if other coworkers are cloning the project to local.
All this is working and running in ApplicationStarted event, but I guess I can remove this logic to a Migrations class in the Up method and start with version 1.0.0 for the Migration attribute :)
Ah I see what you mean. Migrations are definitely not just limited to packages!
Yes, Migrations are nicer for this as you can easily make them version dependent and do automatic upgrades if that table ever needs to change. In that case, the "1.0.1" (or whatever version) upgrade will only run on environments that have not yet executed the 1.0.1 upgrade.
Now to just get this completed and you don't need to do any of the manual work. Might need to spend an evening and just get this merged in - as a community contribution -
For those looking for some examples of how to "correctly" create tables etc. without referring to the DatabaseContext, as per Shannon's first reply above, check out this bit of code:
|
https://cultiv.nl/blog/using-umbraco-migrations-to-deploy-changes/
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
/* * version.h * * Version number header file for Audio * * A test application for Audio * * H323 Library. * * The Initial Developer of the Original Code is Equivalence Pty. Ltd. * * Portions of this code were written with the assisance of funding from * Vovida Networks, Inc.. * * Contributor(s): ______________________________________. * * */ #ifndef _Audio_VERSION_H #define _Audio_VERSION_H #define MAJOR_VERSION 1 #define MINOR_VERSION 0 #define BUILD_TYPE AlphaCode #define BUILD_NUMBER 0 #endif // _Audio_VERSION_H // End of File ///////////////////////////////////////////////////////////////
|
http://pwlib.sourcearchive.com/documentation/1.10.3-0ubuntu1/samples_2audio_2version_8h-source.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
This Tip/Trick is an alternative to the original tip Sorting using C# Lists[^].
You often face the issue that a given class has not one Collating Sequence[^] but many various ways to order the class objects. This alternative tip shows how to choose the ordering without depending on the class implementing IComparable[^].
IComparable<T>
IComparer
IComparable
var querySortedByProperty = from element in collection
orderby element.property
select element;
foreach(var item in querySortedByProperty) { ... }
var querySortedByProperty = collection.OrderBy(e=>e.property);
foreach(var item in querySortedByProperty) { ... }
Sort
collection.Sort((a,b)=>a.property.CompareTo(b.property));
foreach(var item in collection) { ... }
Assume the following class, analogous to the one from the original tip, but without implementing IComparable<T>.
public class Student
{
public string Name { get; private set; }
public int Age { get; private set; }
public Student(string name, int age)
{
Name = name;
Age = age;
}
}
var querySortByName = from s in students orderby s.Name select s;
var querySortByAge = from s in students orderby s.Age select s;
var querySortByName = students.OrderBy(s=>s.Name);
var querySortByAge = students.OrderBy(s=>s.Age);
students.Sort((a,b)=>a.Name.CompareTo(b.Name));
students.Sort((a,b)=>a.Age.CompareTo(b.Age));
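The same approach composes for multi-key ordering: ThenBy picks a secondary collating sequence on the fly. A quick sketch, assuming the same students list as above:
// Order by age first, then alphabetically by name within the same age
var byAgeThenName = students.OrderBy(s => s.Age).ThenBy(s => s.Name);
foreach (var s in byAgeThenName) { /* use s */ }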
Check out 101 LINQ samples[^]. It gives many good first steps with LINQ.
|
http://www.codeproject.com/Tips/535112/Sorting-Csharp-collections-that-have-no-collating
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
I ran into this when I was investigating functional programming in C#. That’s been around for a while, but apart from using it for traversing or modifying collections or lists, I never actually created something that was making use of a function parameter.
Anyway, one of my most used constructs is the ForEach<T> method of List<T>. I always thought it to be quite annoying that it is only available for List<T>. Not for IList<T>, ICollection<T>, or whatever. Using my trademark solution pattern – the extension method ;-)
I tried the following:
using System;
using System.Collections.Generic;

namespace LocalJoost.Utilities
{
  public static class GenericExtensions
  {
    public static void ForEach<T>(this IEnumerable<T> t, Action<T> action)
    {
      foreach (var item in t)
      {
        action(item);
      }
    }
  }
}
And that turned out to be all. IList<T>, ICollection<T>, everything that implements IEnumerable<T> – now sports a ForEach method. It even works for arrays, so if you have something like this
string[] arr = {"Hello", "World", "how", "about", "this"};
arr.ForEach(Console.WriteLine);

It nicely prints out
Hello
World
how
about
this
I guess it's a start. Maybe it is of some use to someone, as I plod on ;-)
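As one more illustration of the same idea (a quick sketch, not tied to any real project), the extension also chains onto any LINQ query result, since those are IEnumerable<T> too:
using System;
using System.Linq;
using LocalJoost.Utilities;

class Demo
{
  static void Main()
  {
    // Print the odd numbers from 1..10, one per line
    Enumerable.Range(1, 10)
              .Where(n => n % 2 == 1)
              .ForEach(Console.WriteLine);
  }
}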
2 comments:
Nice one! Will follow up on those posts, finally learning what Extensions are all about :)
Everyone who writes higher order functions (=functions that take other functions as parameters and/or return functions) must have written this ForEach extension method on IEnumerable for himself already.
Think I saw a blog post about the reason why it isn't in the BCL already (something to do with duplicating the foreach statement or something, giving people more than one way to do the exact same thing, which could get confusing), but I think it should have been in the BCL. Or List.ForEach() should not have existed, right?
|
http://dotnetbyexample.blogspot.com/2010/01/one-foreach-to-rule-them-all.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
gsasl_stringprep_saslprep - API function
#include <gsasl.h>

char * gsasl_stringprep_saslprep(const char * in, int * stringprep_rc);
const char * in: input ASCII or UTF-8 string with data to prepare according to SASLprep.
int * stringprep_rc: pointer to output variable with stringprep error code, or NULL to indicate that you don't care about it.
Process a Unicode string for comparison, according to the "SASLprep" stringprep profile. This function is intended to be used by Simple Authentication and Security Layer (SASL) mechanisms (such as PLAIN, CRAM-MD5, and DIGEST-MD5) as well as other protocols exchanging user names and/or passwords.
Return a newly allocated string that is the "SASLprep" processed form of the input string, or NULL on error, in which case stringprep_rc contain the stringprep library error code.
Use gsasl_saslprep().
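For completeness, a minimal call using only the prototype above might look as follows. This is an illustrative sketch: it assumes the newly allocated result should be freed by the caller, and, as noted above, newer code should prefer gsasl_saslprep().
#include <stdio.h>
#include <stdlib.h>
#include <gsasl.h>

int main(void)
{
  int rc;
  /* prepare a user-supplied password for comparison */
  char *prepared = gsasl_stringprep_saslprep("example\xC2\xA0password", &rc);
  if (prepared == NULL)
    {
      fprintf(stderr, "SASLprep failed (stringprep rc %d)\n", rc);
      return 1;
    }
  printf("prepared form: %s\n", prepared);
  free(prepared);
  return 0;
}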
|
http://huge-man-linux.net/man3/gsasl_stringprep_saslprep.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Timeline)
Nov 1, 2009:
- 11:48 PM dreamkxd edited by
- (diff)
- 11:47 PM Changeset [6993] by
- googlemapmacro/0.11/tracgooglemap/macro.py
Removed deprecated
import md5and changed it to
trac.util.md5.
- 11:36 PM.
- 11:33 PM Changeset [6991] by
- shortcuticonplugin/0.11/tracshortcuticon/plugin.py
Added use of
trac.conf.Optionetc. classes.
- 11:27 PM Changeset [6990] by
- serversideredirectplugin/0.11/tracserversideredirect/plugin.py
Code cleanup. Improved detection of other 'tracredirect' macro.
- 8:16 PM Changeset [6989] by
- freedocplugin/0.11/freedoc/plugin.py
Ehm, /s /S what's the difference ;-)
-
- 6:20 PM TracUserSyncPlugin edited by
- (diff)
- 6:15 PM Changeset [6987] by
- tracusersyncplugin/0.11/trunk/tracusersync/web_ui.py
! wrong defaults for excludes
-
- 4:39 PM Changeset [6985] by
- freedocplugin/0.11/freedoc/plugin.py
Included the 'tip' directive
- 3:54 PM FreeDocPlugin edited by
- (diff)
- 2:40 PM TracUserSyncPlugin edited by
- added version information (diff)
- 2:32 PM FreeDocPlugin edited by
- (diff)
- 2:23 PM FreeDocPlugin edited by
- (diff)
- 2
- 2:14 PM TracUserSyncPlugin edited by
- (diff)
- 2:13 PM TracUserSyncPlugin edited by
- (diff)
- 2:08 PM FreeDocPlugin edited by
- Included picture example (diff)
- 2:03 PM FreeDocPlugin created by
- New hack FreeDocPlugin, created by rickvanderzwet
- 2:03 PM Changeset [6983] by
- freedocplugin
- freedocplugin/0.11
New hack FreeDocPlugin, created by rickvanderzwet
- 1:48 PM rickvanderzwet created by
- New user rickvanderzwet registered
- 1:32 PM Changeset [6982] by
- tracusersyncplugin/0.11/trunk/tracusersync/api.py
- tracusersyncplugin/0.11/trunk/tracusersync/web_ui.py
+ implemented purge of non-existing users
- 12:13 PM TracUserSyncPlugin edited by
- (diff)
- 12
- 12:11 PM Changeset [6980] by
- tracusersyncplugin/0.11/trunk/tracusersync/web_ui.py
! removing tracenv from sync list on permission error
- 11:09 AM Changeset [6979] by
- serversideredirectplugin/0.11/tracserversideredirect/plugin.py
- Replaced SQL code with Trac API calls.
- Added debug messages.
- 6:28 AM dreamkxd edited by
- (diff)
- 6:06 AM dreamkxd edited by
- (diff)
- 4:56 AM dreamkxd created by
- New user dreamkxd registered
Oct 31, 2009:
- 10:51 PM TracUserSyncPlugin edited by
- (diff)
- 10
- 9:27 PM TracUserSyncPlugin edited by
- (diff)
- 9:22 PM Changeset [6977] by
- tracusersyncplugin/0.11/trunk/tracusersync/api.py
- tracusersyncplugin/0.11/trunk/tracusersync/web_ui.py
- skipping environments using a different password file
- 8:32 PM TracUserSyncPlugin edited by
- (diff)
- 8:21 PM TracUserSyncPlugin edited by
- (diff)
- 7:48 PM TracUserSyncPlugin edited by
- Description updated (diff)
- 7
- 7:27 PM TracUserSyncPlugin created by
- New hack TracUserSyncPlugin, created by izzy
- 7:27 PM Changeset [6975] by
- tracusersyncplugin
- tracusersyncplugin/0.11
New hack TracUserSyncPlugin, created by izzy
- 3:04 AM Changeset [6974] by
- watchlistplugin/0.11/tracwatchlist/plugin.py
Removed wrong
returnwhich caused the DB to be updated twice.
Oct 30, 2009:
- 10:37 PM Ticket #6132 (SchedulingToolsPlugin - Incompatibility with TracDateField) created by
- TracDateField stores date in the DB (m/d/Y d/m/Y or m-d-Y or ...), But …
- 10:36 PM Changeset [6973] by
- watchlistplugin/0.11/tracwatchlist/plugin.py
- 10:29 PM WatchlistPlugin edited by
- (diff)
- 10:16 PM TracViewScript edited by
- We can take advantage of some wiki formatting! (diff)
- 10:15 PM Changeset [6972] by
- watchlistplugin/0.11/tracwatchlist/plugin.py
- watchlistplugin/0.11/tracwatchlist/templates/watchlist.html
Messages on watchlistpage can now be disabled.
- 10:08 PM Changeset [6971] by
- watchlistplugin/0.11/tracwatchlist/plugin.py
Fixed double messages for watchlist actions.
- 10:03 PM Changeset [6970] by
- watchlistplugin/0.11/tracwatchlist/plugin.py
Added messages on wiki or ticket pages for watchlist actions if
stay_at_resourceis
true.
- 9:51 PM TracViewScript edited by
- (diff)
- 9:51 PM TracViewScript edited by
- (diff)
- 9:50 PM Ticket #6131 (TracCiaPlugin - Notifications should be sent asynchronously) created by
- If due to some reason plugin can't access CIA, then any action slows …
- 9:32 PM Changeset [6969] by
- tracviewscript/0.11/tracview/template/+package+/+package+.py_tmpl
- import tag, forgot this
- repr(project) is a command to pastescript so ${} it
- 9:29 PM Changeset [6968] by
- tracviewscript/0.11/tracview/template/+package+/+package+.py_tmpl
forgot a )
- 9:26 PM Changeset [6967] by
- tracviewscript/0.11/tracview/template/+package+/+package+.py_tmpl
forgot to close a }
-
- 9:21 PM TracViewScript created by
- New hack TracViewScript, created by k0s
- 9:21 PM Changeset [6965] by
- tracviewscript
- tracviewscript/0.11
New hack TracViewScript, created by k0s
- 8:50 PM SensitiveTicketsPlugin edited by
- added hint to environment upgrade (diff)
- 8:29 PM Changeset [6964] by
- watchlistplugin/0.11/tracwatchlist/plugin.py
Added config file option to disable notify column in watchlist tables.
- 8:23 PM Changeset [6963] by
- watchlistplugin/0.11/tracwatchlist/plugin.py
Added own notify navigation items like the own from the AnnouncerPlugin.
- 8:21 PM Ticket #6130 (MenusPlugin - Cannot apply MenusPlugin properties to the TracTabPlugin) created by
- As I noted briefly in Ticket #6129, it does not seem to work to apply …
- 8:04 PM Ticket #6129 (MenusPlugin - Strange behavior when custom defined menu item is not enabled) created by
- Below is a partial view of my configuration, which defines an entire …
- 7:59 PM MenusPlugin edited by
- Adding separate links for open and open/closed ticket queries (diff)
- 7:37 PM Changeset [6962] by
- watchlistplugin/0.11/tracwatchlist/plugin.py
- plugin.py
- Added code for redirection back to (un)watched resource.
- 6:50 PM Ticket #6128 (AnnouncerPlugin - Allow Announcer to send notifications to the user listed in a custom ...) created by
- This would be extremely useful in the case of having a QA contact. If …
-.
- 5:59 PM Changeset [6960] by
- watchlistplugin/0.11/tracwatchlist/plugin.py
- tracwatchlist/plugin.py
- Main navigation item
Watchlistis now always displayed even when watchlist empty.
- 5:54 PM ListOfWikiPagesMacro edited by
- (diff)
- 5:54 PM Changeset [6959] by
- listofwikipagesmacro/0.11/traclistofwikipages/macro.py
- traclistofwikipages/macro.py
- Updated documentation.
- 5:52 PM ListOfWikiPagesMacro edited by
- (diff)
-.
- 5:39 PM Changeset [6957] by
- listofwikipagesmacro/0.11/setup.py
- setup.py
- Added dependency to AdvParseArgsPlugin.
- 5:39 PM ListOfWikiPagesMacro edited by
- (diff)
- 5:33 PM AdvParseArgsPlugin edited by
- (diff)
-.
- 5:00 PM Ticket #6074 (ListOfWikiPagesMacro - Add way to exclude sets of pages) closed by
- fixed: (In [6955]) Added
exclude=patternoption to
ListOfWikiPages. This …
- 5:00 PM Changeset [6955] by
- listofwikipagesmacro/0.11/traclistofwikipages/macro.py
Added
exclude=patternoption to
ListOfWikiPages. This closes #6074.
- 3:53 PM AdvParseArgsPlugin edited by
- (diff)
-.
- 3:06 PM Ticket #3777 (TracPluginTemplateScript - template should include 'package_data' by default) closed by
- fixed: thanks for the patch; applied in r6953 ; god i hate setuptools
- 3:05 PM Changeset [6953] by
- tracplugintemplatescript/0.11/setup.py
include package data, refs #3777
- 2:49 PM Changeset [6952] by
- advparseargsplugin/0.11/setup.py
- setup.py
- Updated version number.
- 2:38 PM Ticket #4705 (WikiCssPlugin - WikiCssPlugin: PostgresSQL error with latest changeset) closed by
- fixed: Fixed in [6951]. Now the WikiPage model of Trac is used which should …
- 2:37 PM Changeset [6951] by
- wikicssplugin/0.11/tracwikicss/plugin.py
Changed code to use Trac API instead of own SQL code.
- 1:42 PM Ticket #3777 (TracPluginTemplateScript - template should include 'package_data' by default) reopened by
- The
package_dataentry must also exists in the
setup.pyfile of …
- 1:38 PM Ticket #6105 (TracUserPagePlugin - RuntimeError: maximum recursion depth exceeded) created by
- Just installed this plugin and encountered this error when navigating …
- 12:51 PM Ticket #4705 (WikiCssPlugin - WikiCssPlugin: PostgresSQL error with latest changeset) reopened by
- This should be reprogrammed to use the WikiPage model provided by Trac.
- 12:48 PM Changeset [6950] by
- watchlistplugin/0.11/tracwatchlist/plugin.py
Changed SQL code to support MySQL and PostgresSQL backends better. See #6090, #4097.
- 12:45 PM Ticket #6104 (TracKeywordsPlugin - Genshi template is missing) created by
- The Genshi template, which should've been included in r6528 was …
- 11:11 AM Changeset [6949] by
- watchlistplugin/0.11/tracwatchlist/plugin.py
Made some SQL adjustment to better support MySQL. See #6090.
- 10:36 AM Ticket #6092 (WikiRenamePlugin - When try to rename a page with utf-8 character, urllib.quote throw a ...) created by
- it's a know problem with urllib.quote. To avoid this you can add …
- 10:19 AM Changeset [6948] by
- watchlistplugin/0.11/tracwatchlist/plugin.py
Changed SQL cast type from
intto
integerfor MySQL users. See #6090.
- 9:24 AM Ticket #6091 (TicketExtPlugin - any plan to support dynamic show/hide custom field?) created by
- auto show/hide might be better way than enable/disable
- 9:05 AM Changeset [6947] by
- plannedmilestonesmacro/0.11/PlannedMilestones.py
Minor refactoring
- 8:59 AM Changeset [6946] by
- plannedmilestonesmacro/0.11/PlannedMilestones.py
PlannedMilestonesMacro now lists only upcoming milestones, as specified in the wiki documentation. Options will soon be added to control whether overdue milestones are displayed
- 8:04 AM Changeset [6945] by
- plannedmilestonesmacro/0.11/PlannedMilestones.py
Fixed indentation and added URL for PlannedMilestonesMacro trac-hacks wiki page
- 7:36 AM Ticket #6090 (WatchlistPlugin - Just installed in 11.5, get error when trying to use it) closed by
- fixed: I just successfully tested it on my test trac system and the error is …
- 7:26 AM Changeset [6944] by
- watchlistplugin/0.11/tracwatchlist/plugin.py
Fixed error occurring if user does not has a settings entry in
watchlist_settingstable.
See #6090.
- 7:21 AM PlannedMilestonesMacro edited by
- Clarification (diff)
- 6:53 AM PlannedMilestonesMacro edited by
- Removing this sentence because it doesn't seem entirely valid. For … (diff)
- 6:49 AM PlannedMilestonesMacro edited by
- Documented upcoming changes associated with #5449 and #5634 (diff)
- 5:49 AM Ticket #6090 (WatchlistPlugin - Just installed in 11.5, get error when trying to use it) created by
- New 11.5 install, WatchlistPlugin not looks like to work, I have the …
- 5:34 AM Thanatermesis created by
- New user Thanatermesis registered
- 4:02 AM Ticket #5177 (TracUserPagePlugin - Examples please!) closed by
- fixed: Closed since no response for some time.
- 4:00 AM Ticket #6089 (TracHacks - Attachments to delete) created by
- I'm opening this ticket to list attachments that I'd like the admin to …
- 3:49 AM PlannedMilestonesMacro edited by
- (diff)
- 3:45 AM Ticket #285 (WikiCalendarMacro - WikiCalendarMacro: integrate changes from Wiki attachments) closed by
- duplicate
- 3:44 AM Ticket #6088 (WikiCalendarMacro - Adoption request) created by
- I would like to adopt this hack if no one else wishes to maintain it …
- 3:38 AM Ticket #6065 (WikiCalendarMacro - how to make it work on trac0.11.5) closed by
- fixed
- 3:33 AM PlannedMilestonesMacro edited by
- (diff)
- 2:57 AM Ticket #467 (PlannedMilestonesMacro - [Patch] Limit which milestones are listed via pattern match) closed by
- fixed: (In [6943]) Pattern can be specified to filter list of displayed …
- 2:57 AM Changeset [6943] by
- plannedmilestonesmacro/0.11/PlannedMilestones.py
Pattern can be specified to filter list of displayed milestones.
[[PlannedMilestones(filter)]],
[[PlannedMilestones(filter,N)]], and
[[PlannedMilestones(,N)]]are valid. Fixes #467
- 2.
- 2:09 AM PlannedMilestonesMacro edited by
- Documenting change in [6941] (diff)
- 2:07 AM Ticket #5633 (PlannedMilestonesMacro - List only next N milestones) closed by
- fixed: (In [6941]) PlannedMilestones(N)? will list the N upcoming …
- 2:07 AM Changeset [6941] by
- plannedmilestonesmacro/0.11/PlannedMilestones.py
PlannedMilestones(N)? will list the N upcoming milestones. Fixes #5633
- 2:01 AM Ticket #6072 (HudsonTracPlugin - HudsonPlugin install) closed by
- fixed: Yes, like all plugins it must be enabled. I added instructions to the …
- 2:01 AM HudsonTracPlugin edited by
- Document the defaults for the config options, and provide sample … (diff)
- 1:32 AM HudsonTracPlugin edited by
- Added note about enabling the plugin in the config (diff)
- 12:35 AM Changeset [6940] by
- listofwikipagesmacro/0.11/traclistofwikipages/macro.py
Fixed one-off error in SQL limit.
- 12:06 AM Changeset [6939] by
- watchlistplugin/0.11/tracwatchlist/templates/watchlist.html
Added ticket summary as tool tip title to the title number.
|
https://trac-hacks.org/timeline?from=2009-11-06T21%3A02%3A47%2B01%3A00&precision=second
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Description
A tool that enables files to be compiled into an executable and extracted at startup.
Package Information
Installation
To use this package, put the following dependency into your project's dependencies section:
dub.json
dub.sdl
Readme
Bin2D
A Command line application that produces D module files, which can be compiled into an executable and extracted at startup.
Features:
- Limit code generated by:
  - package modifier
  - version(unittest)
- Option for enum for usage at compile time, instead of const(ubyte[])
- Automatic finding/inclusion of files in folders.
- Outputs included files at runtime to a specified directory or temporary directory
Warning: extra files in the specified folder will be removed
Known limitations
Does not allow for filenames used in different directories
Basic usage:
Basic usage is as follows
**Example**: I have a tkd project that I want to pack up into a single executable. I need some files and DLLs for it to work (the folder of needed files is shown in the original screenshot). I added Bin2D.exe to my path for convenience. Create this file (MAIN.d) and add it to the C:\temp folder.
import std.stdio;
import std.process;
import PKG = Resource_Reference;

void main()
{
    string[string] FILE_LOCATIONS = PKG.outputFilesToFileSystem();
    foreach(string key; PKG.originalNames)
    {
        writeln("extracting: ", key, " : ", FILE_LOCATIONS[key]);
    }
    execute(FILE_LOCATIONS["my tkd app.exe"]);
    PKG.cleanup();
}
Compile with:
If you want to do what I did with a GUI app you might want to link with windows:subsystem.

## But what if I don't know the name at compile time?

To get access to *all* the values with names you need to iterate over two separate arrays. The first, ``names``, will give you the mangled names. The second, ``values``, will give you the values based upon the index in assetNames.

## So how do you extract?

This will extract any files given to it, with a specific output directory. It returns an array of the file system names with the original extension. Directories have been encoded away, however.
import modulename; outputFilesToFileSystem("output/stored/here");
**And for a temporary directory?**
import modulename; outputFilesToFileSystem();
It returns the same result as outputBin2D2FS(string) does.

## Why not string imports?

- String mixins, at least on Windows, are bugged: they cannot use subdirectories. In newer versions this should be fixed.
- Assets do not change often, so the regeneration process can be manual.
- Easy export to the file system.
|
http://code.dlang.org/packages/bin2d
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
I know it sounds stupid, but I'm using MinGW32 on Windows7, and "
to_string was not declared in this scope." It's an actual GCC Bug, and I've followed these instructions and they did not work. So, how can I convert an int to a string in C++11 without using
to_string or
stoi? (Also, I have the
-std=c++11 flag enabled).
It's not the fastest method, but you can do this:
#include <string>
#include <sstream>
#include <iostream>

template<typename ValueType>
std::string stringulate(ValueType v)
{
    std::ostringstream oss;
    oss << v;
    return oss.str();
}

int main()
{
    std::cout << ("string value: " + stringulate(5.98)) << '\n';
}
|
http://databasefaq.com/index.php/answer/203/c-string-c-11-gcc-how-can-i-convert-an-int-to-a-string-in-c-11-without-using-to-string-or-stoi
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
AKA ‘avoiding the dreaded “Can not register property YourProperty after containing type (YourBaseType) has been instantiated” error message’
Somewhere between CSLA 3.0 en 3.6 a new way of registering properties has become into being:
// old skool CSLA
private string _oldProp = string.Empty;
public string OldProp
{
  get { return _oldProp; }
  set
  {
    if (value == null) value = string.Empty;
    if (!_oldProp.Equals(value))
    {
      _oldProp = value;
      PropertyHasChanged("OldProp");
    }
  }
}

// new skool CSLA
private static PropertyInfo<string> NewPropProperty = RegisterProperty<string>(c => c.NewProp);
public string NewProp
{
  get { return GetProperty(NewPropProperty); }
  set { SetProperty(NewPropProperty, value); }
}
In CSLA 4.0 the last style is mandatory, so I started upgrading some objects (while currently using CSLA 3.8.3) in anticipation. So I upgraded my base object
using Csla;

namespace CslaInheritance
{
  public abstract class MyBaseClass : BusinessBase<MyBaseClass>
  {
    protected static PropertyInfo<string> MyProp1Property = RegisterProperty<string>(c => c.MyProp1);
    public string MyProp1
    {
      get { return GetProperty(MyProp1Property); }
      set { SetProperty(MyProp1Property, value); }
    }

    protected static PropertyInfo<string> MyProp2Property = RegisterProperty<string>(c => c.MyProp2);
    public string MyProp2
    {
      get { return GetProperty(MyProp2Property); }
      set { SetProperty(MyProp2Property, value); }
    }
  }
}

and then my child object
using Csla;

namespace CslaInheritance
{
  public abstract class MyConcreteClass1 : MyBaseClass
  {
    protected static PropertyInfo<string> ConcreteProp1Property = RegisterProperty<string>(c => c.ConcreteProp1);
    public string ConcreteProp1
    {
      get { return GetProperty(ConcreteProp1Property); }
      set { SetProperty(ConcreteProp1Property, value); }
    }

    protected static PropertyInfo<string> ConcreteProp2Property = RegisterProperty<string>(c => c.ConcreteProp2);
    public string ConcreteProp2
    {
      get { return GetProperty(ConcreteProp2Property); }
      set { SetProperty(ConcreteProp2Property, value); }
    }
  }
}
And then I noticed something odd: according to the compiler, ConcreteProp1 and ConcreteProp2 were not defined. Even worse is the situation when you choose to upgrade your properties not using lambda expressions, but PropertyInfo objects, like this:
protected static PropertyInfo<string> ConcreteProp3Property = new PropertyInfo<string>("ConcreteProp3Property");
public string ConcreteProp3
{
  get { return GetProperty(ConcreteProp3Property); }
  set { SetProperty(ConcreteProp3Property, value); }
}

because this will compile - and run. Until you create a second child class MyConcreteClass2, instantiate it, then instantiate a MyConcreteClass1 - and then you will get a cryptic runtime error message saying "Can not register property ConcreteProp1Property after containing type MyBaseClass has been instantiated".
Fortunately the CSLA framework comes with sources, and after some rooting around I found the culprit, if you can call it that, in Csla.BusinessBase:
protected static PropertyInfo<P> RegisterProperty<P>(Expression<Func<T, object>> propertyLambdaExpression)
{
  PropertyInfo reflectedPropertyInfo = Reflect<T>.GetProperty(propertyLambdaExpression);
  return RegisterProperty(Csla.Core.FieldManager.PropertyInfoFactory.Factory.Create<P>(
    typeof(T), reflectedPropertyInfo.Name));
}
Although MyConcreteClass1 inherits from MyBaseClass, MyBaseClass inherits in turn from templated class BusinessBase<MyBaseClass>. Therefore, in RegisterProperty called from MyConcreteClass1 T is still MyBaseClass. It does not matter that I actually called it from a child class. So what happens is that all the statics are defined in the base class MyBaseClass. If you are using the lambda variant to register, the compiler saves your *ss, but if you use the PropertyInfo method something weird happens. Remember, statics in a class are initialized as soon as you touch any one of statics. So what happens is: you instantiate your concrete child class, immediately the statics of both the concrete and the base class are initialized, and all the properties are registered in the base class. If you try to instantiate a second concrete child class, Csla finds that your base class properties are already initialized, and the dreaded “Can not register property ConcreteProp1Property after containing type MyBaseClass has been instantiated” error message appears.
Now you can of course change the way you implement classes. I might make MyBaseClass generic as well, so that T changes along with the concrete class. But when upgrading an existing API out of a situation in which direct inheritance used to be perfectly legal, it's a different story.
There are actually two ways out of this. The first one is: use PropertyInfo, but explicitly name the object type to which the property belongs
protected static PropertyInfo<string> ConcreteProp3Property =
  RegisterProperty(typeof(MyConcreteClass1), new PropertyInfo<string>("ConcreteProp3Property"));
public string ConcreteProp3
{
  get { return GetProperty(ConcreteProp3Property); }
  set { SetProperty(ConcreteProp3Property, value); }
}
This works, but I like the solution below better, because that uses lambda expressions again and so your friend the compiler ;-) can help you catch typos. The only way I see to realize that is to add a static method at the bottom of your class
private static PropertyInfo<T> RegisterPropertyLocal<T>(
  Expression<Func<MyConcreteClass1, object>> propertyLambdaExpression)
{
  var reflectedPropertyInfo = Reflect<MyConcreteClass1>.GetProperty(propertyLambdaExpression);
  return RegisterProperty(typeof(MyConcreteClass1),
    Csla.Core.FieldManager.PropertyInfoFactory.Factory.Create<T>(
      typeof(MyConcreteClass1), reflectedPropertyInfo.Name));
}

and then register your properties like this from now on:
protected static PropertyInfo<string> ConcreteProp1Property = RegisterPropertyLocal<string>(c => c.ConcreteProp1);
public string ConcreteProp1
{
  get { return GetProperty(ConcreteProp1Property); }
  set { SetProperty(ConcreteProp1Property, value); }
}
The drawback of this solution is, of course, that you have to define a static RegisterPropertyLocal in every inherited class you define. But at least you will be saved from typos and weird runtime errors.
Now you are ready to upgrade, but I would recommend recording some macros to do the actual syntax change, unless you are very fond of very dull repetitive typing jobs ;-)
|
http://dotnetbyexample.blogspot.com/2010/08/fixing-clsa-property-registration.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
1. Introduction
In this short article, we will see packing multiple SQL statements in SqlCommand and process it through the SqlDataReader object. The previous article on ado.net already made you familiar with Connection, Command and Reader objects. Hence, we will concentrate on processing multiple results.
Have a look at the picture below. Three result sets are returned by the SqlCommand, and the SqlDataReader object processes all of them. The Read method reads the records of a single result set, and when there are no more records to read, the method returns false. Similarly, the NextResult method of the SqlDataReader iterates through the result sets and returns false when there are no more to read.
You can use this technique to avoid multiple hits to the database. In our example, we process the three result sets one at a time while hitting the database only once.
2. About the Example
The below screen shot shows the example we are going to create:
The example retrieves data from the SQL Server sample database Pubs. The total number of authors queried from the authors table is displayed in a label control (marked as 1), and the author names from the same table are displayed in the combo box (marked as 2). The list box (marked as 3) displays all store names by querying the stores table of the Pubs database. When the Get Data button is clicked (marked as 4), all the data is retrieved through a single SqlCommand formed by three SQL statements.
3. Making the Example
The below video explains making the sample application:
Video 1: Making the Sample App
4. Code Explanation
1) A using statement is placed at the top of form code file frmResults.cs and the code is given below:
//Sample 01: Using Statements
using System.Data.SqlClient;
2) The Click event for the "Get Data" button is handled, and in the handler an SqlConnection object is created that specifies how the application makes a connection to the SQL Server Pubs database. Note that the connection string is referred from the application settings as "Properties.Settings.Default.PubsConstr". Creating the connection string is shown in the video mentioned below the code snippet.
//Sample 02: Open connection to Pub Db of Sql Server
SqlConnection PubsDbCon = new SqlConnection(Properties.Settings.Default.PubsConstr);
PubsDbCon.Open();
Video 2: Forming the connection string
3) After we have a valid connection object, an SqlCommand object is created. Once the SqlCommand object is created, a single string containing three SQL queries is supplied to it through its CommandText property, and in the same way the database connection is supplied through the Connection property. Note that the SQL queries are separated by semi-colons. Preparing the SqlCommand object is shown in the code below:
//Sample 03: Form Multiple Single Command for More than one Query
String sqlQuery = "Select count(au_id) as TotAuthors from authors;" +
"Select Au_fname + ' ' + Au_lname as FullName from authors;" +
"Select stor_name from stores;";
SqlCommand MultiCmd = new SqlCommand();
MultiCmd.Connection = PubsDbCon;
MultiCmd.CommandText = sqlQuery;
4) The call to ExecuteReader on the SqlCommand object returns the SqlDataReader object. Since the SqlCommand contains three SQL select statements there will be three corresponding result set objects. Below is the code, which retrieves the reader object:
//Sample 04: Open the Reader and Iterate through all three result sets
SqlDataReader ReaderMultiSet = MultiCmd.ExecuteReader();
5) Once we have the reader in hand, we can retrieve all the data returned as three separate result sets. To iterate through these result sets, call the NextResult method; it moves the reader to the next valid result set. When there is no result set left to process, the method returns false, which is useful if you want to form a while loop based on the returned value (in our example we are not using such a loop). Once the reader is at the required result set, you can read the individual records by calling the Read() method on the SqlDataReader object. Note that the result sets are ordered in the same order in which the queries were given to the SqlCommand object. In our case, the first result set is the total number of authors (one record), the next is the list of authors, and the final one is the list of stores. Have a look at the picture in the Introduction section again for a better understanding. Below is the piece of code which iterates through the records of each result set:
//4.1: Process First Result Set.
bool ret = ReaderMultiSet.Read();
if (ret == true)
lblTotAuthors.Text = ReaderMultiSet["TotAuthors"].ToString();
//4.2: Retrive List of Authors from Next Result set
bool ResultExits = ReaderMultiSet.NextResult();
if (ResultExits == true)
{
while (ReaderMultiSet.Read())
{
string AuthorName = ReaderMultiSet["FullName"].ToString(); ;
cmbAuthors.Items.Add(AuthorName);
cmbAuthors.SelectedIndex = 0;
}
}
//4.3: Retrive List of Stores from Next Result set
ResultExits = ReaderMultiSet.NextResult();
if (ResultExits == true)
{
while (ReaderMultiSet.Read())
{
string StoreName = ReaderMultiSet["stor_name"].ToString(); ;
lstBStores.Items.Add(StoreName);
}
}
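As noted in step 5, the same processing can also be written as a generic loop over however many result sets the command returns. A minimal sketch of that pattern, reusing the MultiCmd command from above (this loop is not part of the original example code):
using (SqlDataReader reader = MultiCmd.ExecuteReader())
{
    do
    {
        while (reader.Read())
        {
            // Process the columns of the current result set here
        }
    } while (reader.NextResult());
}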
5. Running the Example
To run the example you need the Pubs sample database; visit the page to get it. Visit the video to learn how to create the connection string. The video below shows running the example:
Leave your comment(s) here.
|
http://www.mstecharticles.com/2015/04/adonet-processing-multiple-result-set.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
just follow the subversion installation instructions at the link above,
you need lazy-pac-cli-devel too which follow the same steps as libpypac-devel, you'll get it by the same checkout,
I'd be curious to see your implementation of this. There are a few different ways to do this if you are trying to do a "progress bar" type thing depending on what you are actually doing.
*xerxes hugs tmaynard very hard*
that last stuff works beautifully,
edit: if you feel lucky you can get the evidence on svn,
more info here:
Non curses version:
import sys
import time

myStrings = ["Pacman working ...", "pAcman working ...", "paCman working ...",
             "pacMan working ...", "pacmAn working ...", "pacmaN working ...",
             "pacman Working ...", "pacman wOrking ...", "pacman woRking ...",
             "pacman worKing ...", "pacman workIng ...", "pacman workiNg ...",
             "pacman workinG ..."]

for eachString in myStrings:
    sys.stdout.write(eachString)
    sys.stdout.flush()
    sys.stdout.write("\r")
    time.sleep(1)
print("\n")
Enjoy!
sonix's stuff doesn't seem to work,
i will try tmaynard's curses solution also
my function is being fed by another function so how do i deal with that?
or should i have one curses function like tmaynard's and use sleep()?
Try this :
import curses
import time

stdscr = curses.initscr()

myStrings = ["Pacman working ...", "pAcman working ...", "paCman working ...",
             "pacMan working ...", "pacmAn working ...", "pacmaN working ...",
             "pacman Working ...", "pacman wOrking ...", "pacman woRking ...",
             "pacman worKing ...", "pacman workIng ...", "pacman workiNg ...",
             "pacman workinG ..."]

for eachString in myStrings:
    stdscr.addstr(0, 0, eachString, curses.A_REVERSE)
    time.sleep(1)
    stdscr.refresh()

curses.endwin()
Is this kinda what you are looking for?
yes that is exactly what I'm saying.
edit: Well at least I didn't need anything to make it work in java... I don't know python so I can't really say if you need it or not.
are you saying that i don't have to use curses?
Sorry for writing it in java (dont really know python).
anyway:
The curses part of pacman's "download printing" is printing the carriage return (ascii value 13) as a character at the end of each line until 100% download is reached. Then it prints a linefeed (ascii 10) to change the line.
In other words: you should be able to overwrite your prints if you supply a carriage return at the end (like in my java prog and pacman), provided python doesn't do anything strange behind your back when you do a print.
in python please,
and i don't understand what you mean anyway,
Is this what you need ?(sorry for writing an example in your favourite language xerces2)
public class JCurses {
    public static void main(String[] args) throws Exception {
        String[] pacman = new String[]{
            "Pacman\r",
            "pAcman\r",
            "paCman\r",
            "pacMan\r",
            "pacmAn\r",
            "pacmaN\r",
            "pacman\n"};
        for (String p : pacman) {
            System.out.print(p);
            Thread.sleep(1000);
        }
    }
}
all pacman does is a simple carriage return at the end.
if python doesn't have a built-in carriage return, just print its ascii value 13 as a character.
(put the code in a file named JCurses.java,
compile it with "javac JCurses.java",
run it with "java JCurses")
give me a few weeks and i'll do it,
this curses stuff is not so easy to dig into, i just want to make oneliners, it shouldn't be like rocket science,
Problem is, I've never used ncurses in any programming language. :-P I just knew that pacman used it to get that one-line output (from a passing comment Xentac made once), and I remembered that Python had a curses module.
It's the best I could do on such short notice. Would you like to write my master's thesis? :-P
Dusty
that looks like what i was looking for,
couldn't you supply some code too
now i have to rtfm also 'before' i start to code,
Maybe?
Dusty
|
https://bbs.archlinux.org/extern.php?action=feed&tid=15174&type=atom
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
I am currently studying java. I have a question about inheritance.
I am aware you can inherit the variables of the parent class using:
public class ChildClass extends ParentClass{
}
private int number_of_legs;
private int number_of_eyes;
Animal first_animal = new Animal(4,2);
/*where 4 and 2 are the number of legs and eyes which
*i attribute through a constructor present in Animal.
*/
Dog first_dog = new ??? // i tried with ...new first_animal.Dog but it didn't work.
This will do:
Dog dog = new Dog(4, 2);
You cannot create an instance of a subclass as an instance of a super class. You can say that all Dogs are Animals, but not that all Animals are Dogs.
Note that the statement above means:
//valid and accepted because one kind of instance of animal is a dog
Animal dog = new Dog(4, 2);
//invalid and throws a compiler error because one kind of instance of a Dog cannot be any animal
Dog anotherDog = new Animal(4, 2);
new Animal.Dog is wrong here because it means that you have a static class Dog inside the Animal class.
new first_animal.Dog is wrong here because it means that you have an inner class Dog inside the Animal class. Those are different topics you should not cover yet.
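For illustration only (the answer above deliberately leaves these topics aside), the two shapes being referred to look roughly like this; the Cat class is just a made-up placeholder:
class Animal {
    static class Dog { }   // static nested class: created with new Animal.Dog()
    class Cat { }          // inner class: needs an instance, e.g. someAnimal.new Cat()
}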
|
https://codedump.io/share/ElqMvplLpWs0/1/instance-of-a-class-inheritance
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Microsoft.TeamFoundation.VersionControl.Client Namespace
The Microsoft.TeamFoundation.VersionControl.Client namespace provides the APIs that are required for a client application to work with the version control functionality in Visual Studio Team Foundation Server.
This namespace provides methods and classes that enable you to work with the version control features of Team Foundation Server. You can access the APIs that represent version-controlled files and folders, changes and pending changes, shelvesets, merges, branches, and team projects.
The VersionControlServer class represents the Team Foundation Server repository. You can use this class to perform tasks such as the following:
Execute queries.
Check in shelvesets.
Get changesets.
Create workspaces.
The Item class represents a file or folder that is registered to the repository. The Change object represents an individual change. It also contains references to the item that is affected by the change and the type of change that occurred. The Changeset class represents the collection of changes to a file or folder in the repository.
The PendingChange class represents a change that has not been committed to the repository. The PendingSet class represents a collection of pending changes.
The Shelveset class represents changes that are set aside for later work or check-in.
The Conflict class represents a difference between two items in a the repository.
|
https://msdn.microsoft.com/en-us/library/microsoft.teamfoundation.versioncontrol.client(v=vs.100).aspx
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
I am trying to declare a vector pointer, and an ifstream pointer in a class that I plan to use to read binary files. I have heard from someone that using #include in header is often not best (not sure why), that it is better to use "class <name>", and then in the *.cpp file do the #include <name> call.
I have tried doing that in my code but it doesn't work. I have tried this with some qt classes before and it works.
Code:
// #include <vector>
// #include <fstream>
class std::vector<uint32_t>;
class std::fstream;
I get this error:
g++ -c -Wall main.cpp -o main.o
error: 'vector' is not a template
error: 'uint32_t' was not declared in this scope
error: 'vector' in namespace 'std' does not name a type
error: 'fstream' in namespace 'std' does not name a type
error: 'uint32_t' does not name a type
error: using-declaration for non-member at class scope
error: expected ';' before '*' token
make: *** [main.o] Error 1
|
http://cboard.cprogramming.com/cplusplus-programming/133684-trying-use-forward-declaration-printable-thread.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Developer Certification (SCJD/OCMJD)
Author
import and member variable conventions
Jay Bromley
Ranch Hand
Joined: Aug 09, 2003
Posts: 48
posted
Dec 24, 2003 03:06:00
0
Hello all,
I've got a couple of easy questions. The first involves naming conventions for class members. Do any of you use some sort of prefix or suffix to indicate your class variables? I'm used to using a _ pre/suf-fix to mark my members in C++, and without them I tend to lose class members among local variables when looking at code. However, Sun seems pretty anal about their coding conventions and I know the JLS says don't use _ as the first character of a member name, so I'm not too sure what to do about this. Or does it just not matter?
And how about imports? When do you stop listing individual classes in a package and start using *? I tend to like to import each class individually, but this can get out of hand sometimes. Any recommendations?
Thanks and regards,
Jay
james airey
Ranch Hand
Joined: Dec 15, 2003
Posts: 41
posted
Dec 24, 2003 03:28:00
0
Breaking Sun's conventions on naming sounds like an automatic failure to me. If you are coming from a C++ background, the Java conventions might look odd to you, but once you get used to them, you'll see that for the most part, Java code has a consistent look and feel, which makes it easier to pick up and maintain other people's code. So there is a genuine benefit to it.
Regarding imports, it's up to you. The compiler will only pick up the classes it needs to import anyway, so there is no performance impact. Whatever you feel makes your code the most readable, I guess.
Jay Bromley
Ranch Hand
Joined: Aug 09, 2003
Posts: 48
posted
Dec 26, 2003 15:06:00
0
Hello James,
I definitely agree that I shouldn't do anything that the JLS says not to do, such as starting a name with '_', but it doesn't say anything about using a '_' as a suffix.
Incidentally, I am from a C++ background, and I like the '_' suffix since it allows me and automated tools to quickly pick out where member variables are being touched. I find that when looking at Java code (even my own) it takes a bit longer for me to get things, since I have to look up which variables belong to a class. (I know a code browser helps, but even then you've got to focus your attention somewhere other than the code for a second.) Is there some sort of Java convention about this same thing? I've seen "my" used to mark members sometimes.
Thanks and regards,
Jay
Jim Yingst
Wanderer
Sheriff
Joined: Jan 30, 2000
Posts: 18671
posted
Dec 26, 2003 17:51:00
0
It's been strongly hinted in the past, by people in a position to know, that candidates can lose points for failing to follow Sun coding standards: the ones that are linked to from Sun's main J2SE documentation page.
Now my own opinion is that Sun has done a very poor job of communicating this fact (if indeed it's still correct) in my own instructions. The client ("Bodgitt & Scarper") did not specify anything at all about coding standards. (If they did, it would have almost certainly overridden Sun's standards for our purposes here.) But they did supply one code fragment - the Data.java file. (Or DBAccess.java in some assignments.) Normally I would advocate following whatever style the customer is using, provided they have a consistent style. But the coding style in Data.java is abysmal, an indication that the people at Bodgitt & Scarper wouldn't know good coding style if it came and bit them on the butt. So frankly just about any consistent style would be an improvement over B&S. All other things being equal - Sun's style seems like a good option. And since it's been hinted that Sun's graders are, well, unjustifiably obstinate on this point, that's even more reason to do it.
Is there some sort of Java convention about this same thing? I've seen "my" used to mark members sometimes.
Not a standard one, no. Some shops religiously use "this.foo" if foo is a member variable, even if the this is implicit in many contexts. Some advocate creating getters and (if appropriate) setters, and accessing member variables only using the getters/setters. (Except within the getters/setters themselves.) Some use "my" or an underscore. Most I've worked with don't require any of these techniques, and Sun's standard agrees with this. Plenty of IDEs are able to keep track of member variables vs. local variables. And my own personal preference is to keep all my methods short enough that it's really easy to see what variables have been declared locally; anything else must be an instance or class variable.
On a loosely related note, I'm a big fan of using immutable classes and final variables wherever possible, in order to simplify the task of tracking changes in a variable. (Especially in a multithreaded environment.) Certainly many things can't be represented with immutable classes or primitive finals. But for any new class or variable you create, consider making it immutable or final. If you can, it cuts down on the things to worry about later; if you can't, then go ahead and make it mutable.
"I'm not back." - Bill Harding,
Twister
I agree. Here's the link:
|
http://www.coderanch.com/t/184775/java-developer-SCJD/certification/import-member-variable-conventions
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
I needed one simple interface for logging messages on the NT system, so I used the CXEventLog class written by Hans Dietrich to create the IEventLogger interface. Basically I made some minor changes to this class and wrapped it with a simple COM interface. Only one dummy message will be linked to this object. The COM object can be simply used by all applications without linking any additional modules.
CXEventLog
IEventLogger
IEventLogger
The interface should be simple so it contains only three following functions:
HRESULT LogError (BSTR sMessage);
HRESULT LogInfo (BSTR sMessage);
HRESULT LogWarning (BSTR sMessage);
Calling one of those functions for the first time causes internal initialization of the underlying logging class:
EventLogWriter
The COM object EventLogWriter is a simple ATL-based COM wrapper for the CXEventLog class. The compiled message resource Message.mc will be linked with this component. The resulting module EventLogWriter.dll can be copied to any directory. Before using it, it must be registered once (use regsvr32).
A small application is supplied to demonstrate the usage of the component; it is a simple dialog-based MFC program. The constructor of the application initializes COM and the destructor uninitializes it. The component is created in the function OnInitDialog() and released in the destructor. The function OnButtonWriteLog() demonstrates the usage of the interface.
OnInitDialog()
OnButtonWriteLog()
You can create an instance of the interface by including the header file and the interface definition file in your project, and then instantiate it like this:
#include "EventLogWriter.h" //definition for IEventLogger interface
* * *
// We get the IEventLogger interface
HRESULT hr = CoCreateInstance(CLSID_EventLogger, NULL,
CLSCTX_INPROC_SERVER, IID_IEventLogger, (void**)(&m_pEventLogger));
if (FAILED(hr)) {
MessageBeep(MB_OK);
MessageBox(_T("The component: EventLogWriter.EventLogger"
"\ncould not be created! "), _T("Error"));
}
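Once the interface pointer has been obtained, writing an entry is a single call. A hypothetical usage sketch follows; the message text is only an example, and the methods take BSTR parameters as declared above:
// Assuming m_pEventLogger was obtained successfully as shown above
if (m_pEventLogger != NULL)
{
    BSTR msg = ::SysAllocString(L"Application started successfully");
    HRESULT hr = m_pEventLogger->LogInfo(msg);
    if (FAILED(hr))
        MessageBeep(MB_OK);
    ::SysFreeString(msg);
}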
To learn more details about this subject I recommend their contributions on this.
|
http://www.codeproject.com/Articles/6497/IEventLogger-COM-Interface-for-easy-Event-Logging?fid=36151&df=10000&mpp=10&sort=Position&spc=None&tid=2749531
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
As the subject says: "I don't like doing this but i'm desperate"...I have analyzed my program bit by bit to try to figure out what I am doing wrong, but I haven't been able to find my mistake. It is a simple stack program, with no purpose. The problem is that I can execute the program without any problems, until I try to see the elements of my stack, then it says that there are no elements...which leads me to think that something is wrong in my push function.
Here is the code that I have:
I have taken out the "pop" function because it is working OK.I have taken out the "pop" function because it is working OK.PHP Code:
/* Stack example */
#include <stdio.h>
#include <stdlib.h> /* for malloc() */
struct information {
int number;
struct information *next;
};
struct information *first_element; /* points to the first element */
/* function prototypes */
void push(void); /* insert a new item into the stack */
void pop(void); /* remove an existing item from the stack */
void show_stack(void);
int show_menu(void);
int main()
{
int choice;
/* since the list is empty, the first element points to null */
first_element = (struct information *) NULL;
while((choice = show_menu()) != 4)
{
switch(choice) {
case 1: push(); break;
case 2: pop(); break;
case 3: show_stack(); break;
default: printf("Not an element\n"); break;
}
}
}
void push()
{
/* create a pointer to the struct for the new element */
struct information *new_element;
/* obtain memory for the new element, make sure there is enough memory */
new_element = (struct information *) malloc(sizeof(struct information));
if (new_element == NULL) {
printf("\nNot enough memory.");
}
else {
/* obtain new data */
printf("\nWhat number would you like to insert? ");
scanf("%d", &new_element->number);
if(first_element == NULL) {
/* this is the first element being stored */
first_element == new_element;
new_element->next = NULL;
}
else {
/* new_element->next will now point to the element that the first element
used to point to. It ocuppies the first position */
new_element->next = first_element;
first_element = new_element;
}
}
}
void show_stack()
{
struct information *temp;
int i; /* counter */
i = 0;
/* start from the beginning */
temp = first_element;
while(temp != NULL) {
printf("%d -> ", temp->number);
i++;
/* follow the -link- */
temp = temp->next;
}
if(i == 0)
printf("\nThere are no elements in the stack.");
else
printf("\nThere are %d element in the stack", i);
}
int show_menu()
{
int choice;
printf("\n--- Options ---\n"
"1. Push\n"
"2. Pop\n"
"3. Show stack\n"
"4. Exit\n?");
scanf("%d", &choice);
return choice;
}
I can't find the problem with the "push" function. Please help, I am going mad.
Thanks.
|
http://cboard.cprogramming.com/c-programming/22094-i-don't-like-doing-but-i'm-desperate.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Mats Bengtsson wrote:
> On my 10.2.8 box I experience drawing glitches on buttons
> when the focus ring is being drawn.
>
> shot:
> code:
> Question: What does it look like on post 10.2 systems?
I see no drawing issues on my 10.4 box.
Jeff
On my 10.2.8 box I experience drawing glitches on buttons when the focus
ring is being drawn.
shot:
code:
This seems to be related to the so-called 'adornment' member of ThemeButtonDrawInfo.
Experimenting with tiles mapping:
static Ttk_StateTable ButtonAdornmentTable[] = {
{ kThemeAdornmentDefault, TTK_STATE_ALTERNATE, 0 },
{ kThemeAdornmentFocus, TTK_STATE_FOCUS, 0 },
{ kThemeAdornmentNone, 0, 0 }
};
has proven unsuccessful, and I have an increasing suspicion that this is an
Apple bug.
Question: What does it look like on post 10.2 systems?
/Mats
PS: my firefox/mozilla doesn't draw any button focus rings at all and shows
no glitches. They seem to use similar APIs for drawing widgets and elements as tile does.
PPS: the focus ring can be switched off for buttons using
static void ButtonElementDraw(
void *clientData, void *elementRecord, Tk_Window tkwin,
Drawable d, Ttk_Box b, Ttk_State state)
{
Rect bounds = BoxToRect(Ttk_PadBox(b, ButtonMargins));
ThemeButtonParms *parms = clientData;
ThemeButtonDrawInfo info;
if (parms->kind == kThemePushButton) {
state &= ~TTK_STATE_FOCUS;
}
info = computeButtonDrawInfo(parms, state);
BEGIN_DRAWING(d)
DrawThemeButton(&bounds, parms->kind, &info,
NULL/*prevInfo*/,NULL/*eraseProc*/,NULL/*labelProc*/,NULL/*userData*/);
END_DRAWING
}
hey guys,
On Jun 16, 2005, at 3:56 PM, Daniel A. Steffen wrote:
>
> the relevant script used to be in WIsh.pbroj as mentioned by
> Andreas but I have in fact now also moved this into the tk/macosx/
> Makefile (as this now builds the embedded Wish.app), c.f. below
...snip...
> yes, in fact the installer packages in TclTkAqua that I build as
> separate downloadable tarballs _are_ relocatable (but you still
> need an admin password), the main reason I disabled this in the
> disk image build is because there used to be severe bugs in Apple's
> Installer with .mpkg containing relocatable .pkg (I believe some of
> these are fixed in recent 10.3.x and 10.4)
...if I understand this correctly, the script modifies a built copy
of the tcl/tk frameworks to be embeddable (ie. install_name of
@executable_path...), but it doesn't do the same for Wish.app (or
rather, just make the "standalone" wish point to embedded frameworks)?
...I ask because I think this is what I need to do...my binary will
need to occupy the Contents/MacOS folder for a variety of reasons, so
I'm trying to make an embedded tcl/tk with wish.app embedded in the
tk.framework (much like it is in the normal system-wide installs)...
...is this possible? I'd like to think so, because then I can
distribute experimental versions using tcl/tk 8.5, in addition to a
select few tcl/tk extensions...
jamie
Joey Mukherjee wrote:
| Ok,
|
|
You may want to try the Tile-dev mailing list--it's a better place to
get Tile-specific answers to your questions. See to sign up.
Hi everyone,
I'm trying to write a Tcl wrapper for the Carbon Help API
(AHRegisterHelpBook, AHGotoPage, etc.). Because I'm not very fluent in
C, I'm working with Swig. And I'm stuck. I get an extension generated,
but when I load the library into tclsh, after I try to run commands from
the Help API, I get an error message: "invalid command name."
Here are the steps I'm taking:
1. First, I'm writing a Swig interface file. Since I want to simply wrap
the Carbon Help API, the file is short, as follows:
%module tclAppleHelp
%{
#include
"/System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Help.framework/Versions/A/Headers/AppleHelp.h"
%}
2. Next, I run Swig:
swig -tcl -namespace tclAppleHelp.i
3. This generates a C file, tclAppleHelp_wrap.c. Next, I compile the file:
gcc -framework Tcl -framework Carbon -dynamiclib tclAppleHelp_wrap.c -o
tclAppleHelp.so
4. This works. Then I move into tclsh and run this command:
load tclAppleHelp.so
That also works.
5. Then, I get stuck. I'm not sure how to invoke the functions I've
supposedly wrapped. Swig generates the tclapplehelp:: namespace, so I've
tried a few variations: "tclapplehelp::AHRegisterHelpBook" is one
example, which generates the "invalid command name" error.
These functions generally only work when called inside an app bundle, so
I don't expect to get perfect output, but the error messages indicate to
me that the functions are not being loaded in the way I thought they would.
It's quite possible I should include some other header files as well,
but I'm not sure which ones.
Any help is appreciated. Once I get this extension built and tested in
my own applications, I'll release it as open-source under a BSD-style
license, and perhaps it can be considered for inclusion in the next BI
distro. I like using the native Apple help system in my programs, but I
now have to resort to Python hacks to get the help books loaded
properly, and I'd like a cleaner interface.
Thanks in advance.
--
Cheers,
Kevin Walzer, PhD
WordTech Software
sw at wordtech-software.com
|
http://sourceforge.net/p/tcl/mailman/tcl-mac/?viewmonth=200508&viewday=18
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
1.1.0.M1
<repositories /> element:
Example 1.2. Using a snapshot version
Example 1.3. Extending the overriden as custom beans from this configuration - we'll touch them in the individual manual sections as needed (for example repositories, validation and custom converters).
The library provides a custom namespace that you can use in your XML configuration:
Example 1.4. Basic.
Abstract
This chapter describes how to model Entities and explains their counterpart representation in Couchbase Server itself.
Example 2.1. A simple Document with Fields:
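(The example listing is missing from this extraction; a minimal sketch of what such an entity could look like follows. The annotation names are those commonly used by Spring Data Couchbase, but the exact packages may differ by version, and the field names are invented:)

import org.springframework.data.annotation.Id;
import org.springframework.data.couchbase.core.mapping.Document;
import org.springframework.data.couchbase.core.mapping.Field;

// A simple document with an id and two mapped fields (sketch only).
@Document
public class User {

    @Id
    private String id;

    @Field
    private String firstname;

    @Field
    private String lastname;

    // getters and setters omitted for brevity
}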
The goal of Spring Data repository abstraction is to significantly reduce the amount of boilerplate code required to implement data access layers for various persistence stores.
Example 4.1. Annotation-Based Repository Setup
@Configuration
@EnableCouchbaseRepositories(basePackages = {"com.couchbase.example.repos"})
public class Config extends AbstractCouchbaseConfiguration {
    //...
}
XML-based configuration is also available:
Example 4.2. XML-Based Repository Setup
<couchbase:repositories
In the simplest case, your repository will extend the CrudRepository<T, String>, where T is the entity that you want to expose. Let's look at a repository for a user:
Example 4.3. A User repository:
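(The original listing is not present in this extraction; a minimal sketch, assuming the User entity above keyed by a String id:)

public interface UserRepository extends CrudRepository<User, String> {
    // CrudRepository already supplies save, findOne, findAll, delete, count, ...
}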
Example 4.4. An extended User repository:
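(Again the listing is missing; an extended repository typically adds derived finder methods, roughly like the sketch below. The exact backing mechanism for such queries depends on the version and view setup:)

import java.util.List;

public interface ExtendedUserRepository extends CrudRepository<User, String> {
    // Derived query method following the usual Spring Data naming convention.
    List<User> findByLastname(String lastname);
}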
Example 4.5. The all view map function
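(The listing is missing; a typical "all" view map function for a Couchbase design document looks roughly like this — the _class check assumes the default type attribute written by the mapping layer, and the class name is a placeholder:)

// JavaScript map function for the "all" view (sketch)
function (doc, meta) {
  if (doc._class == "com.example.User") {
    emit(meta.id, null);
  }
}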
Example 4.7. A parameterized view map function
Example 4.8. Query a repository method with custom params.
Example 6.1. AbstractCouchbaseConfiguration for caching
Example 6.2. Caching example
@Cacheable(value="persistent", key="'longrunsim-'+#time") public String simulateLongRun(long time) { try { Thread.sleep(time); } catch(Exception ex) { System.out.println("This shouldnt happen..."); } return "Ive.
|
http://docs.spring.io/spring-data/couchbase/docs/1.1.0.M1/reference/htmlsingle/
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
On Thu, 14 Apr 2011,
> V5: Correct subject to match implementation, correct stable submission
>
> Signed-off-by: Darren Hart <dvhart@linux.intel.com>
> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
> Reported-by: Tim Smith <tsmith201104@yahoo.com>
> Reported-by: Torsten Hilbrich <torsten.hilbrich@secunet.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: John Kacur <jkacur@redhat.com>
> Cc: stable@kernel.org
> ---
>  kernel/futex.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/kernel/futex.c b/kernel/futex.c
> index bda4157..abd5324 100644
> --- a/kernel/futex.c
> +++ b/kernel/futex.c
> @@ -1886,7 +1886,7 @@ retry:
> 	restart->futex.val = val;
> 	restart->futex.time = abs_time->tv64;
> 	restart->futex.bitset = bitset;
> -	restart->futex.flags = flags;
> +	restart->futex.flags = flags | FLAGS_HAS_TIMEOUT;

We only get here when a timeout is pending. So why don't we just do
the obvious:

--- linux-2.6.orig/kernel/futex.c
+++ linux-2.6/kernel/futex.c
@@ -1902,16 +1902,13 @@ out:
 static long futex_wait_restart(struct restart_block *restart)
 {
 	u32 __user *uaddr = restart->futex.uaddr;
-	ktime_t t, *tp = NULL;
+	ktime_t t;

-	if (restart->futex.flags & FLAGS_HAS_TIMEOUT) {
-		t.tv64 = restart->futex.time;
-		tp = &t;
-	}
+	t.tv64 = restart->futex.time;
 	restart->fn = do_no_restart_syscall;

 	return (long)futex_wait(uaddr, restart->futex.flags,
-				restart->futex.val, tp, restart->futex.bitset);
+				restart->futex.val, &t, restart->futex.bitset);
 }

Thanks,

	tglx
|
http://lkml.org/lkml/2011/4/15/48
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Comment: Re:Some people better be out of a job... (Score 1) 21
Peer Name Resolution.
The problem is that it's patent encumbered, by Mickeysoft, so it's useless.
There is also something called Hierarchical DHT-based name resolution.
Abstract:
Information-centric network (ICN) architectures are an increasingly important approach for the future Internet. Several ICN approaches are based on a flat object ID namespace and require some kind of global name resolution service to translate object IDs into network addresses. Building a world-wide NRS for a flat namespace with 10^16 expected IDs is challenging because of requirements such as scalability, low latency, efficient network utilization, and anycast routing that selects the most suitable copies. In this paper, we present a general hierarchical NRS framework for flat ID namespaces. The framework meets those requirements by the following properties: The registration and request forwarding matches the underlying network topology, exploits request locality, supports domain-specific copies of binding entries, can offer constant hop resolution (depending on the chosen underlying forwarding scheme), and provides scoping of publications. Our general NRS framework is flexible and supports different instantiations. These instantiations offer an important trade-off between resolution-domain (i.e. subsystem) autonomy (simplifying deployment) and reduced latency, maintenance overhead, and memory requirements. To evaluate this trade-off and explore the design space, we have designed two specific instantiations of our general NRS framework: MDHT and HSkip. We have performed a theoretical analysis and a simulation-based evaluation of both systems. In addition, we have published an implementation of the MDHT system as open source. Results indicate that an average request latency of (well) below 100ms is achievable in both systems for a global system with 12 million NRS nodes while meeting our other specific requirements. These results imply that a flat namespace can be adopted on a global scale, opening up several design alternatives for information-centric network architectures.
--
BMO
|
http://slashdot.org/~oursland/firehose
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
This is a small project to generate classes for accessing stored procedures via a webservice, born out of frustration with SQLXML 3.0.
This is a very basic (and somewhat rough) code class that finds all the stored procedures in a SQL Server database, and generates a C# class to access them. This came about because I was looking for a fast way to expose several stored procedures via a web service layer I was working on. After a couple of days spent fooling around with SQLXML 3.0, and ultimately becoming a bit disappointed with the results, a code generator seemed like a good idea. So this is it. The code is not fancy at all. Just grabs a list of stored procedures, and uses a lot of string builders to produce a code class. I did look at some of the template based generators around on Code Project and elsewhere (a la Raptier), but at the end, didn't have much time to go through them.
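The listing below is not the article's code — just a rough sketch of how the list of stored procedures might be enumerated before generation (the connection string and database name are placeholders):

// Enumerate stored procedures so wrapper methods can be generated for them.
using System;
using System.Data.SqlClient;

class ProcLister
{
    static void Main()
    {
        string conn = "server=.;database=arkis2db;Integrated Security=SSPI";
        using (SqlConnection oconn = new SqlConnection(conn))
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT ROUTINE_NAME FROM INFORMATION_SCHEMA.ROUTINES " +
                "WHERE ROUTINE_TYPE = 'PROCEDURE' ORDER BY ROUTINE_NAME", oconn);
            oconn.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                    Console.WriteLine(rdr.GetString(0));
            }
        }
    }
}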
There are two types of classes produced, depending on the setting of the 'webMethod' property. If set to true, then a few extra things are included to expose the methods via webservices. If you choose this option, the only thing produced is the code class. You'll still have to link it up to an .asmx page.
Another thing this generates is an app.config file, with a section containing all the procedure names, along with a boolean to indicate if the procedure should be accessible via the C# code. It defaults to 'false', which means the procedure is restricted. Just change it to 'true' for the procedures you want to expose. The intention was to copy the section from the generated file into your 'real' .config file, although it seems to work fine just as it is.
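A generated section might look roughly like this (a hypothetical sketch; apart from p_generateLoanID, which appears in the output example below, the key names are invented):

<appSettings>
  <!-- one entry per stored procedure; 'true' exposes it, 'false' (the default) restricts it -->
  <add key="p_generateLoanID" value="true" />
  <add key="p_getLoanDetails" value="false" />
</appSettings>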
Before running the code, you will have to set the connection string (on line 17 of generateProcCode.cs) to something useful:
private string m_sconn =
"Persist Security Info=False;Integrated Security=false;" +
"user id=[lebowski];password=[thedude];database=arkis2db;" +
"server=[ipaddress];encrypt=false\";
I've included a small test project which should help you figure out the rest. Basically, just instantiate the class, set the webMethod property, and call the genClass method with a path for output as the only parameter. Once the stored procedures class is generated, you will have to change the namespace and set the connection string for your project.
Here is an example of the output:
///<summary>
///summary description for p_generateLoanID
///</summary>
public DataSet p_generateLoanID(
out int RETURN_VALUE,
int type,
string aukt,
ref int NewID){
if(false == procRunStatus("p_generateLoanID"))
throw new Exception("Stored procedure p_generateLoanID not enabled");
using(SqlConnection oconn = new SqlConnection(m_sconn)){
DataSet ds = new DataSet();
SqlCommand cmd = new SqlCommand("p_generateLoanID",oconn);
SqlDataAdapter da = new SqlDataAdapter(cmd);
cmd.CommandType = CommandType.StoredProcedure;
SqlParameter prm_RETURN_VALUE =
cmd.Parameters.Add("@RETURN_VALUE", SqlDbType.Int);
SqlParameter prm_type = cmd.Parameters.Add("@type", type);
SqlParameter prm_aukt = cmd.Parameters.Add("@aukt", aukt);
SqlParameter prm_NewID = cmd.Parameters.Add("@NewID", NewID);
prm_RETURN_VALUE.Direction = ParameterDirection.ReturnValue;
prm_NewID.Direction = ParameterDirection.Output;
oconn.Open();
da.Fill(ds);
RETURN_VALUE =
cmd.Parameters["@RETURN_VALUE"].Value == DBNull.Value ? -12345 :
(int)cmd.Parameters["@RETURN_VALUE"].Value;
NewID = cmd.Parameters["@NewID"].Value == DBNull.Value ? -1 :
(int)cmd.Parameters["@NewID"].Value;
return ds;
}
}
There may be some problems with methods handling type conversion between SQL and C#. I opted to just use string/int most of the time. This seems to be working fine in our apps, which mostly use uniqueidentifier, nvarchar[] and int as params. If you have other types of input params to your stored procedures, you may get some trouble.
I had another problem with some SQL params called '@ref'. I've tried to keep the param names the same in C# and SQL Server, but obviously that isn't going to work well with keywords. In this case, I just rename the C# param from 'ref' to 'reff'.
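The rename itself is trivial; the check presumably looks something like this (assumed, not the article's exact code):

// Avoid emitting C# keywords as parameter names.
string paramName = sqlParamName.TrimStart('@');
if (paramName == "ref")
{
    paramName = "reff";  // 'ref' is a C# keyword, so rename it
}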
That's it. I hope someone might find this interesting or useful. I welcome all feedback, both positive and negative. I've been coding in a vacuum (only developer on site) for a few years, so please let me know if you think something could/should/must be done differently.
|
http://www.codeproject.com/Articles/8827/Stored-Procedure-class-generator
|
CC-MAIN-2014-52
|
en
|
refinedweb
|