| text (string, lengths 20 to 1.01M) | url (string, lengths 14 to 1.25k) | dump (string, lengths 9 to 15, nullable) | lang (4 classes) | source (4 classes) |
|---|---|---|---|---|
There should be a page about how to use constexpr in the Arduino reference, and it should be preferred over #define.
In general, the const keyword is preferred for defining constants and should be used instead of #define.
I've been informed by Arduino that .ino files are not written in C++.
The Arduino Language Reference is the only existing documentation of the Arduino Language. I feel that it's essential for a programming language to be documented 100%.
If that becomes true in a technical sense, rather than just as some marketing sound-bite, it will be very sad. :-(
And THAT will become a huge task.
I believe, according to my software engineer son, that you program in the IDE in C++ using Arduino defined Libraries, Macros, Functions, etc. that are also written in C++ when desired.
Add #include <Arduino.h> to the primary .ino file (if the file doesn't already contain that #include directive). Add function prototypes for any functions in .ino files that don't already have a prototype.
Arduino Language has its own data types and functions.
necessity to "restrict" the user to an init and a loop function
int main(void) {
    init();
    initVariant();

#if defined(USBCON)
    USBDevice.attach();
#endif

    setup();

    for (;;) {
        loop();
        if (serialEventRun) serialEventRun();
    }

    return 0;
}
Anyway, if the founders think it is a new language then whatever we say will not change what they think. All we can advise is that an Arduino language sketch is very similar to a C++ program, and outline the differences.
Arduino language is a greatly simplified make language. Make tools such as cmake are too much for beginners. In this sense of make, it is a language.
Nothing gets translated or modified in the .ino files.
The drawback is that you must have the appropriate includes.
The build process is clearly documented. It merges the files, adds function prototypes for functions declared in the .ino files, and adds the include for Arduino.h.
Make is a language that programmers use to compile their programs. Arduino has changed this process so that the user only specifies the board. The make language has been reworked so that the board manufacturers and the Arduino IDE have created the required make information for the user.
I think it's a good idea that this is called Arduino language because beginners are still missing several key concepts.
Great documentation, but that information is not very accessible to someone trying to learn about the Arduino programming language.
I'm well aware of what the make language is. What I don't understand is what that has to do with the Arduino programming language.
|
https://forum.arduino.cc/index.php?topic=619198.0
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Behaviors¶
Besides static content type definitions with their schemas, there is the concept of behaviors. Behaviors allow us to provide functionality across content types, using specific marker interfaces to create adapters and subscribers based on the behavior rather than the content type.
Definition of a behavior¶
If you want a shared behavior based on fields and operations that need to be shared across different content types, you can define them on a guillotina.schema interface:

from zope.interface import Interface
from guillotina.schema import Text
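As a minimal sketch of such an interface (the interface and field names here are illustrative, not taken from the Guillotina docs):

class ISharedFields(Interface):
    """Fields shared by several content types."""
    text = Text(title="A text field provided by the behavior")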
You can use guillotina.behaviors.instance.AnnotationBehavior as the default annotation storage. For example, in case you want a class that stores a field on the content object itself rather than as an annotation:
from guillotina.behaviors.properties import ContextProperty
from guillotina.behaviors.instance import AnnotationBehavior
from guillotina.interfaces import IResource
from guillotina import configure, schema
from zope.interface import Interface


class IMarkerBehavior(Interface):
    """Marker interface for content with attachment."""


class IMyBehavior(Interface):
    foobar = schema.TextLine()


@configure.behavior(
    title="Attachment",
    provides=IMyBehavior,
    marker=IMarkerBehavior,
    for_=IResource)
class MyBehavior(AnnotationBehavior):
    """If attributes """
    text = ContextProperty(u'attribute', ())
In this example, text will be stored on the context object and text2 as an annotation.
Static behaviors¶
With behaviors you can define them as static for specific content types:
from guillotina import configure
from guillotina.interfaces import IItem
from guillotina.content import Item


@configure.contenttype(
    type_name="MyItem",
    schema=IItem,
    behaviors=["guillotina.behaviors.dublincore.IDublinCore"])
class MyItem(Item):
    pass
Create and modify content with behaviors¶
For deserialization of the content, you will need to pass the behavior as an object in the JSON of the POST/PATCH operation.
CREATE an ITEM with the expires : POST on parent:
{ "@type": "Item", "guillotina.behaviors.dublincore.IDublinCore": { "expires": "1/10/2017" } }
MODIFY an ITEM with the expires : PATCH on the object:
{ "guillotina.behaviors.dublincore.IDublinCore": { "expires": "1/10/2017" } }
Get content with behaviors¶
On serialization of the content, you will get the behaviors as objects on the content.
GET an ITEM : GET on the object:
{ "@id": "", "guillotina.behaviors.dublincore.IDublinCore": { "expires": "2017-10-01T00:00:00.000000+00:00", "modified": "2016-12-02T14:14:49.859953+00:00", } }
Dynamic Behaviors¶
Guillotina offers the option to have content with dynamic behaviors applied to it.
Which behaviors are available on a context¶
We can find out which behaviors can be applied to a specific content object.
GET CONTENT_URI/@behaviors:
{ "available": ["guillotina.behaviors.attachment.IAttachment"], "static": ["guillotina.behaviors.dublincore.IDublinCore"], "dynamic": [], "guillotina.behaviors.attachment.IAttachment": { }, "guillotina.behaviors.dublincore.IDublinCore": { } }
The list of available behaviors is based on the for_ parameter in the behavior's configure declaration.
The static ones are those defined on the content type definition in its configure declaration.
The dynamic ones are those that have been assigned at runtime.
Add a new behavior to a content¶
We can add a new dynamic behavior to a content object either with a PATCH operation on the object that includes the behavior data, or with a small PATCH operation to the @behaviors endpoint with the value to add.
MODIFY an ITEM with the expires : PATCH on the object:
{ "guillotina.behaviors.dublincore.IDublinCore": { "expires": "1/10/2017" } }
MODIFY behaviors : PATCH on the object/@behaviors:
{ "behavior": "guillotina.behaviors.dublincore.IDublinCore" }
|
https://guillotina.readthedocs.io/en/latest/developer/behavior.html
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
How to use recursion in Python
This article walks through what recursion is, what it looks like in Python, and when to use it.
What does recursion look like?
Now that we know recursion is characterized by the use of a function that calls itself, let's have a look using the textbook example of the fibonacci sequence.
For the uninitiated, the fibonacci sequence goes something like this: 1, 1, 2, 3, 5, 8, 13, 21, 34. Each subsequent number is obtained by summing the previous two numbers.
A recursive algorithm to generate the nth number in the sequence would look something like this:
def fibonacci(n):
    if n == 0:
        return 1
    elif n == 1:
        return 1
    return fibonacci(n-1) + fibonacci(n-2)
To wrap your head around this, let's use the call of fibonacci(4) as an example.
When you call the above function with a value of 4, the function will hit:
return fibonacci(3) + fibonacci(2)
We don't know the solution to either of these, so we call fibonacci(3) first:
return fibonacci(2) + fibonacci(1)
Again, we don't know the solution to fibonacci(2), so we call that which hits:
return fibonacci(1) + fibonacci(0)
Great! This is a problem small enough for us to solve so both of those calls to our function return 1. Then, we end up back at:
return fibonacci(2) + fibonacci(1)
Except this time we already know fibonacci(2) evaluates to 2, so fibonacci(1) is called next. It quickly returns 1, and we have now completely resolved fibonacci(3) from the first call: it evaluates to 3.
Now we just have to deal with fibonacci(2). This hits:
return fibonacci(1) + fibonacci(0)
Which returns a value of 2. Our function terminates with a final evaluation of 2 + 3. We return the value 5.
Now wasn't that easy? No? That's fine, it still melts my brain.
Here's a summary of the functions indented by what level the calls actually occur:
fibonacci(4)
    fibonacci(3)
        fibonacci(2)
            fibonacci(1) = 1
            fibonacci(0) = 1
        fibonacci(1) = 1
    fibonacci(2)
        fibonacci(1) = 1
        fibonacci(0) = 1
Fibonacci is a decent way to demonstrate recursion; unfortunately, it is purely academic. It has little real-world value, since an iterative algorithm is much easier to read and runs faster.
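For comparison, here is a minimal sketch of that iterative version (ours, not from the original article); it follows the same 1, 1, 2, 3, ... indexing as the recursive fibonacci above:

def fibonacci_iterative(n):
    # walk the sequence forward instead of recursing backward
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

Let's move on to some more practical examples.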
Testing a word to see if it is a palindrome
A palindrome is a phrase that is the same written forward and backwards. "Madam, I'm Adam" and "rotator" are both palindromes. Now how would you write an algorithm to test for palindromes using a for or while loop? It's doable, but not so simple. It turns out, this is a perfect application for breaking the problem into smaller problems we can solve easily and using, you guessed it, recursion!
# assume we have made sure w is a lowercased phrase stripped of spaces and punctuation
def isPalindrome(w):
    if len(w) == 1:
        return True
    elif len(w) == 2:
        if w[0] == w[1]:
            return True
        else:
            return False
    else:
        if w[0] == w[-1]:
            return isPalindrome(w[1:-1])
        else:
            return False
I'll spare you the stack trace. You'll see our two small problems we know how to solve (our base cases).
If a word is 1 character, it is necessarily a palindrome. If it is 2 of the same characters, it is a palindrome as well.
If it is longer than that, we don't know. We check the first and last letters to make sure they're the same, then recursively call our function with w[1:-1], which is the same phrase without the outer two letters.
Here's an output of log statements from calling "isPalindrome('madamimadam')":
calling isPalindrome on madamimadam
comparing m to m
calling isPalindrome on adamimada
comparing a to a
calling isPalindrome on damimad
comparing d to d
calling isPalindrome on amima
comparing a to a
calling isPalindrome on mim
comparing m to m
calling isPalindrome on i
It is a palindrome!
See how recursion allows us to solve the smallest case of a problem, then apply it on a much wider scale by breaking our large problem into smaller pieces?
Traversing files
Is the palindrome example still too contrived for you? Well how about this one?
Let's say you are writing a Python script to find all of the image files in a given directory and its subdirectories and do something with them, say append something to their name. If you did not do this recursively, you're looking at a huge headache. Who knows how deep the directory structure goes? Yes, Python has a built-in function, os.walk, that does this for you, but it's possible you'll need to be more flexible and implement this yourself. Here goes!
import os

def appendToImages(dir, image_file_extensions=['jpg', 'jpeg', 'gif', 'png']):
    filenames = os.listdir(dir)
    for filename in filenames:
        filepath = os.path.join(dir, filename)
        if os.path.isfile(filepath):
            # ignore non image files
            filename, file_extension = os.path.splitext(filepath)
            if not file_extension[1:] in image_file_extensions:
                continue
            os.rename(filename + file_extension, "{0}-IMAGE{1}".format(filename, file_extension))
            print("renamed {0}{1} to {0}-IMAGE{1}".format(filename, file_extension))
        elif os.path.isdir(filepath):
            appendToImages(filepath)
To quickly summarize, we take a directory, list its files, then iterate over those files. If it is in fact a file, we decide if it is an image and work on it. If it is a directory, we call "appendToImages" on that directory.
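For reference, here is a rough sketch of the same renaming pass written on top of os.walk (the function name is ours, not from the article):

import os

def append_to_images_walk(root, image_file_extensions=('jpg', 'jpeg', 'gif', 'png')):
    # os.walk visits every subdirectory for us, so no explicit recursion is needed
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            base, ext = os.path.splitext(name)
            if ext[1:].lower() not in image_file_extensions:
                continue
            old_path = os.path.join(dirpath, name)
            new_path = os.path.join(dirpath, "{0}-IMAGE{1}".format(base, ext))
            os.rename(old_path, new_path)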
Tips and tricks
The first thing about recursion that trips up new programmers is termination. Your function must have a termination condition that is reached 100% of the time. If not, you will have an infinite loop that eats up memory until it overflows the stack or crashes. In the above scripts, we either act upon a number that decreases each time the function is called, or we can assume the list of files is finite.
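As a tiny illustration of the point (this example is ours, not from the article): in CPython a runaway recursion does not actually loop forever, it hits the interpreter's recursion limit and raises RecursionError.

def countdown(n):
    if n <= 0:        # base case: stop recursing
        return
    print(n)
    countdown(n - 1)  # each call moves closer to the base case

# Remove the `if n <= 0` check and countdown(5) will keep calling itself
# until Python raises RecursionError: maximum recursion depth exceeded.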
When should you use recursion?
This is a tough question, since any recursive algorithm could be written iteratively. My recommendation is to become comfortable with recursion, learn about recursive sorting and searching algorithms, then use it when it feels like you are solving the same problem over and over with a smaller set of data each time. There are performance concerns as well: many recursive algorithms are slower than their iterative counterparts, but that is a guide for another day.
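For instance, recursive binary search is a small example of that "same problem, smaller data" shape; this sketch is ours, not from the article:

def binary_search(items, target, lo=0, hi=None):
    """Return the index of target in the sorted list `items`, or -1 if absent."""
    if hi is None:
        hi = len(items)
    if lo >= hi:                      # base case: empty slice, target not found
        return -1
    mid = (lo + hi) // 2
    if items[mid] == target:
        return mid
    elif items[mid] < target:
        return binary_search(items, target, mid + 1, hi)   # search right half
    else:
        return binary_search(items, target, lo, mid)       # search left half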
|
https://howchoo.com/g/mzy2y2zmymq/how-to-use-recursion-in-python
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
1. Overview
JGoTesting is a JUnit-compatible testing framework inspired by Go's testing package.
In this article, we'll explore the key features of the JGoTesting framework and implement examples to showcase its capabilities.
2. Maven Dependency
First, let's add the jgotesting dependency to our pom.xml:
<dependency>
    <groupId>org.jgotesting</groupId>
    <artifactId>jgotesting</artifactId>
    <version>0.12</version>
</dependency>
The latest version of this artifact can be found here.
3. Introduction
JGoTesting allows us to write tests that are compatible with JUnit. For every assertion method JGoTesting provides, there is one in JUnit with the same signature, so adopting this library is really straightforward.
However, unlike JUnit, when an assertion fails, JGoTesting doesn't stop the execution of the test. Instead, the failure is recorded as an event and presented to us only when all assertions have been executed.
4. JGoTesting in Action
In this section, we will see examples of how to set up JGoTesting and explore its possibilities.
4.1. Getting Started
In order to write our tests, let's first import JGoTesting's assertion methods:
import static org.jgotesting.Assert.*; // same methods as JUnit
import static org.jgotesting.Check.*; // aliases starting with "check"
import static org.jgotesting.Testing.*;
The library requires a mandatory JGoTestRule instance marked with the @Rule annotation. This indicates that all tests in the class will be managed by JGoTesting.
Let's create a class declaring such rule:
public class JGoTestingUnitTest {

    @Rule
    public final JGoTestRule test = new JGoTestRule();

    //...
}
4.2. Writing Tests
JGoTesting provides two sets of assertion methods for writing our tests. The names of the methods in the first set start with assert and are the ones compatible with JUnit; the others start with check.
Both sets of methods behave the same, and the library provides a one-to-one correspondence between them.
Here's an example to test whether a number is equal to another, using both versions:
@Test
public void whenComparingIntegers_thenEqual() {
    int anInt = 10;

    assertEquals(anInt, 10);
    checkEquals(anInt, 10);
}
The rest of the API is self-explanatory, so we won't go into further details. For all examples that follow, we are going to focus only on the check version of the methods.
4.3. Failure Events and Messages
When a check fails, JGoTesting records the failure in order for the test case to continue its execution. After the test ends, the failures are reported.
Here's an example to show what this looks like:
@Test
public void whenComparingStrings_thenMultipleFailingAssertions() {
    String aString = "The test string";
    String anotherString = "The test String";

    checkEquals("Strings are not equal!", aString, equalTo(anotherString));
    checkTrue("String is longer than one character", aString.length() == 1);
    checkTrue("A failing message", aString.length() == 2);
}
After executing the test, we get the following output:
org.junit.ComparisonFailure: Strings are not equal!
expected:<[the test s]tring> but was:<[The Test S]tring>
// ...
java.lang.AssertionError: String is longer than one character
// ...
java.lang.AssertionError: Strings are not the same
expected the same:<the test string> was not:<The Test String>
Besides passing the failure messages in each method, we can also log them so that they only appear when a test has at least one failure.
Let's write a test method that puts this into practice:
@Test
public void whenComparingNumbers_thenLoggedMessage() {
    log("There was something wrong when comparing numbers");

    int anInt = 10;
    int anotherInt = 10;

    checkEquals(anInt, 10);
    checkTrue("First number should be bigger", 10 > anotherInt);
    checkSame(anInt, anotherInt);
}
After test execution, we get the following output:
org.jgotesting.events.LogMessage: There was something wrong when comparing numbers
// ...
java.lang.AssertionError: First number should be bigger
Notice that in addition to logf(), which can format messages like the String.format() method, we can also use the logIf() and logUnless() methods to log messages based on a conditional expression.
4.4. Interrupting Tests
JGoTesting provides several ways to terminate test cases when they fail to pass a given precondition.
Here is an example of a test that ends prematurely because a required file doesn't exist:
@Test
public void givenFile_whenDoesnotExists_thenTerminated() throws Exception {
    File aFile = new File("a_dummy_file.txt");

    terminateIf(aFile.exists(), is(false));

    // this doesn't get executed
    checkEquals(aFile.getName(), "a_dummy_file.txt");
}
Notice that we can also use the terminate() and terminateUnless() methods to interrupt test execution.
4.5. Chaining
The JGoTestRule class also has a fluent API that we can use to chain checks together.
Let's look at an example that uses our instance of JGoTestRule to chain together multiple checks on String objects:
@Test
public void whenComparingStrings_thenMultipleAssertions() {
    String aString = "This is a string";
    String anotherString = "This Is a String";

    test.check(aString, equalToIgnoringCase(anotherString))
        .check(aString.length() == 16)
        .check(aString.startsWith("This"));
}
4.6. Custom Checks
In addition to boolean expressions and Matcher instances, JGoTestRule's methods can accept a custom Checker object to do the checking. This is a Single Abstract Method interface which can be implemented using a lambda expression.
Here's an example that verifies if a String matches a particular regular expression using the aforementioned interface:
@Test
public void givenChecker_whenComparingStrings_thenEqual() throws Exception {
    Checker<String> aChecker = s -> s.matches("\\d+");
    String aString = "1235";

    test.check(aString, aChecker);
}
5. Conclusion
In this quick tutorial, we explored the features JGoTesting provides us for writing tests.
We showcased the JUnit-compatible assert methods as well as their check counterparts. We also saw how the library records and reports failure events, and we wrote a custom Checker using a lambda expression.
As always, the complete source code for this article can be found over on GitHub.
|
https://www.baeldung.com/jgotesting
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
How can I do a foreach loop for an array of booleans
whatever by Sore Salmon on Apr 27 2020
// The problem is not that your array is boolean,
// it is that you are trying to assign value to buttonBoolean in foreach loop.
// It is read-only as every foreach iteration variable is.
// You could do this instead:
public bool[] inputBool;

for (int i = 0; i < inputBool.Length; i++)
{
    inputBool[i] = false;
}
|
https://www.codegrepper.com/code-examples/whatever/How+can+I+do+a+foreach+loop+for+an+array+of+booleans
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
My issue with ionic capacitor build --prod ios is that my working components starting with the likes of
<ion-content>
  <ion-grid>
    <ion-row>
      <ion-col>
fail build with
'ion-grid' is not a known element:
1. If 'ion-grid' is an Angular component, then verify that it is part of this module.
2. If 'ion-grid' is a Web Component then add 'CUSTOM_ELEMENTS_SCHEMA' to the '@NgModule.schemas' of this component to suppress this message.
Without the --prod it works fine. However, without the --prod I am not able to successfully run
import { Component, enableProdMode } from '@angular/core';
...
try {
  enableProdMode();
} catch {
  this.logger.logError('Prod Mode Failed');
}
I managed to get --prod to build by setting aot and buildOptimizer in angular.json to false:
"aot": false, "extractLicenses": true, "vendorChunk": false, "buildOptimizer": false,
However, this seems to make the --prod flag irrelevant? At least enableProdMode is still failing… ?
Adding CUSTOM_ELEMENTS_SCHEMA as proposed here has not made any difference either.
|
https://forum.ionicframework.com/t/ionic5-prod-build-fails-with-not-a-known-element-in-components/207874
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
I have a question about best practices with the Module Design Pattern. The code below is an example of the way that some of our components are written (we use ExtJS, but that shouldn't matter too much). We build a lot of our components like this, and I know that this doesn't match best practices exactly. Do you have any thoughts on cleaning up the code?
Ext.ns("TEAM.COMPONENT"); function Foo() { // Private vars var privateNumber=0, myButton, privateInternalObject; var numberField = new Ext.form.NumberField({ label : 'A NumberField!', listeners : { 'change' : function(theTextField, newVal, oldVal) { console.log("You changed: " + oldVal + " to: " + newVal); } } }); // Some private methods function changeNumField(someNumber) { numberField.setValue(someNumber); } // Some public methods this.publicFunctionSetNumToSomething() { changeNumField(privateNumber); } /** * Initializes Foo */ function init() { // Do some init stuff with variables & components myButton = new Ext.Button({ handler : function(button, eventObject) { console.log("Setting " + numberField + " to zero!"); changeNumField(0); }, text : 'Set NumberField to 0' }); privateInternalObject = new SomeObject(); word = "hello world"; privateNumber = 5; } init(); return this; };
I'm wondering a few things about this and wanted to ask and get conversation going:
- How important is it to initialize variables when they're declared (i.e. at the top of Foo)?
- How might I re-initialize part of this object if a client of this module gets to a state that its foo object needs to be set back to its originals?
- What sort of memory issues might this design lead to, and how can I refactor to mitigate that risk?
- Where can I learn more? Are there any articles that address this without relying too much on the latest and greatest of EcmaScript 5?
Update 2012-05-24: I just wanted to add that I think this question (Extjs: extend class via constructor or initComponent?) is pretty relevant to the conversation, especially considering that the top-voted answer is from a "former Ext JS co-founder and core developer".
Update 2012-05-31: One more addition; this question should also be linked (Private members when extending a class using ExtJS). Also, here is my favorite implementation to date:
/*jshint smarttabs: true */
/*global MY, Ext, jQuery */
Ext.ns("MY.NAMESPACE");

MY.NAMESPACE.Widget = (function($) {
    /**
     * NetBeans (and other IDE's) may complain that the following line has
     * no effect, this form is a useless string literal statement, so it
     * will be ignored by browsers with implementations lower than EcmaScript 5.
     * Newer browsers will help developers to debug bad code.
     */
    "use strict";

    // Reference to the super "class" (defined later)
    var $superclass = null;

    // Reference to this "class", i.e. "MY.NAMESPACE.Widget"
    var $this = null;

    // Internal reference to THIS object, which might be useful to private methods
    var $instance = null;

    // Private member variables
    var someCounter,
        someOtherObject = {
            foo: "bar",
            foo2: 11
        };

    ///////////////////////
    /* Private Functions */
    ///////////////////////

    function somePrivateFunction(newNumber) {
        someCounter = newNumber;
    }

    function getDefaultConfig() {
        var defaultConfiguration = {
            collapsible: true,
            id: 'my-namespace-widget-id',
            title: "My widget's title"
        };
        return defaultConfiguration;
    }

    //////////////////////
    /* Public Functions */
    //////////////////////

    $this = Ext.extend(Ext.Panel, {
        /**
         * This is overriding a super class' function
         */
        constructor: function(config) {
            $instance = this;
            config = $.extend(getDefaultConfig(), config || {});
            // Call the super class' constructor
            $superclass.constructor.call(this, config);
        },

        somePublicFunctionExposingPrivateState: function(clientsNewNumber) {
            clientsNewNumber = clientsNewNumber + 11;
            somePrivateFunction(clientsNewNumber);
        },

        /**
         * This is overriding a super class' function
         */
        collapse: function() {
            // Do something fancy
            // ...
            // Last but not least
            $superclass.collapse.call(this);
        }
    });

    $superclass = $this.superclass;

    return $this;
})(jQuery);
First, this isn't specifically a module design pattern as I know it, this is a general constructor pattern. The module pattern I know is a singleton, but here you could have many instances of Foo(). That being said...
Q: How important is it to initialize variables when they're declared (i.e. at the top of Foo)
Declaring them at the top is important for clarity, but initializing them isn't as important here since you're doing so in the init. If you weren't doing this, initializing them prevents you from having to do an undefined check before testing the variable later:
var x;

function baz() {
    if (typeof(x) === 'undefined') {
        // init
    } else {
        if (x > 0) {
            // blah
        } else {
            // blah blah
        }
    }
}
Q: How might I re-initialize part of this object if a client of this Module gets to a state where its foo object needs to be set back to its originals?
Is there something wrong with creating a public reset method? It will have access to the private variables.
function Foo() {
    // ...
    this.reset = function () {
        privateNumber = 0;
        // etc
    };
    // ...
}
Q: What sort of memory issues might this design lead to and how can I refactor to mitigate that risk?
I don't know.
Q: Where can I learn more? Are there any articles that address this without relying too much on the latest and greatest of EcmaScript 5 ?
Here's a good read about the Javascript module (and other) pattern(s):
|
https://javascriptinfo.com/view/155961/extjs-javascript-module-design-pattern-best-practices
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
DEVELOPER BLOG
grCUDA: A Polyglot Language Binding for CUDA in GraalVM
Integrating GPU-accelerated libraries into existing software stacks can be challenging, in particular, for applications that are written in high-level scripting languages. Although CUDA-bindings already exist for many programming languages, they have different APIs and vary in functionality. Some are simple wrappers around the CUDA Runtime, others provide higher-level abstractions.
In this blog, I present grCUDA, an open-source solution that simplifies the integration of CUDA into script languages of the Oracle GraalVM, such as Python, JavaScript, R, and Ruby. This multi-language support, called polyglot, allows developers to select the best language for a task. While GraalVM can be regarded as the “one VM to rule them all“, grCUDA is the “one GPU binding to rule them all“. Developers can efficiently share data between GPUs and GraalVM languages (R, Python, JavaScript) with grCUDA and launch GPU kernels.
As a motivating use case let us consider a fictional company Acme, Inc. This company is offering a business management solution that provides CRM, inventory, and supply-chain management to international enterprises. As a large organically grown software product, this platform employs components written in different languages, such as Java for the business logic, JavaScript for the front-end, and R for the statistical and modelling functionality, all of which are tightly integrated inside the GraalVM ecosystem. As a business management platform, however, machine and deep learning workloads are getting more and more important to Acme's customers who want to run GPU-accelerated data analytics, recommender systems, or NLP workloads. grCUDA provides a framework that allows Acme, Inc. to seamlessly incorporate CUDA kernels and CUDA-accelerated libraries, such as RAPIDS and cuDNN, into their business management suite, offering the highly requested functionality to their customers in a timely manner.
In this blog, I will show how you can access GPU-visible memory from your script and how to launch GPU kernels. Finally, we will study two examples that, in the spirit of the Acme, Inc. use case, use grCUDA to accelerate an R script and a Node.js web application.
grCUDA
grCUDA is an expression-oriented language and is, like the other GraalVM languages, implemented in the Truffle Language Implementation Framework. Figure 1 shows the architecture of grCUDA in the GraalVM stack. We will be using the built-in functions of grCUDA. The pattern will always be the same: we will write a grCUDA expression that returns the function as a callable object back to the GraalVM language and then invoke these functions. We use the polyglot interface of GraalVM to evaluate the grCUDA expressions as shown below:
# Python
import polyglot
result = polyglot.eval(language='grcuda', string='grCUDA expression')

// JavaScript
let result = Polyglot.eval('grcuda', 'grCUDA expression')

# R
result <- eval.polyglot('grcuda', 'grCUDA expression')

# Ruby
result = Polyglot.eval("grcuda", "grCUDA expression")
Listing 1: Evaluation of grCUDA expression from different GraalVM languages
Device Array
A Device Array is a wrapper around GPU-accessible memory which grCUDA exposes as a multi-dimensional array to the GraalVM host language. In the initial version of grCUDA, a device array is backed by CUDA Unified Memory. Unified Memory can be accessed by both host and device. You can find more details in the Developer Blog on Unified Memory.
JavaScript Code:
// get DeviceArray constructor function
const DeviceArray = Polyglot.eval('grcuda', 'DeviceArray')

// create 1D array: 1000000 int32-elements
const arr = DeviceArray('int', 1000000)

// create 2D array: 1000 x 100 float32-elements in row-major order
const matrix = DeviceArray('float', 1000, 100)

// fill array
for (let i = 0; i < arr.length; ++i) {
  arr[i] = i
}

// fill matrix
for (let i = 0; i < matrix.length; ++i) {
  for (let j = 0; j < matrix[0].length; ++j) {
    matrix[i][j] = i + j
  }
}
Listing 2: Allocating and accessing device arrays from JavaScript
In Listing 2,
arr is a one-dimensional array that can hold one million signed 32-bit integers.
matrix is a two-dimensional 1000×100 array whose elements are stored in row-major order. We can access both device arrays like ordinary JavaScript arrays using the [] operator. Since JavaScript does not support multi-dimensional arrays directly, grCUDA exposes them as arrays of arrays.
grCUDA maintains consistent semantics for device arrays across languages. This means that the behavior for element access slightly deviates from that of arrays native to the respective GraalVM languages. For example, device arrays are typed and, unlike in JavaScript, bounded. grCUDA performs type and bounds checks for every access and applies permitted type coercions. The important point here is that these checks come at negligible cost because grCUDA leverages the GraalVM Compiler, which is able to perform runtime specialization using profiling information and can hoist checks out of loops. In many cases, the array access is even faster than for arrays native to the GraalVM language, as the performance measurements in Figure 2 show.
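A minimal sketch of what this typing and bounds checking means in practice (illustrative only; the exact error messages and coercion rules are not taken from this post):

// assumes the DeviceArray constructor has been obtained as in Listing 2
const vec = DeviceArray('int', 4)
vec[0] = 3.0      // a compatible value may be coerced to the 'int' element type
vec[10] = 1       // out of bounds: rejected with an error, unlike a plain JS array
vec[1] = 'foo'    // not coercible to 'int': rejected with an error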
Launching GPU Kernels
grCUDA can launch GPU kernels from existing CUDA code, e.g., from compiled PTX or cubin files. Launching kernels is a two-step process. First, we bind a kernel from the binary to a callable object, using the bindkernel built-in function of grCUDA. Then we launch the kernel by invoking the callable. In the following, we show how to launch an existing CUDA kernel from a Python script that is executed by the GraalVM Python interpreter. Consider the following simple CUDA kernel that performs an element-wise scale-and-add operation, also known as SAXPY, on a vector.
extern "C"
__global__ void saxpy(int n, float alpha, float *x, float *y) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    y[i] = alpha * x[i] + y[i];
  }
}
Listing 3: Simple CUDA Kernel that performs an element-wise scale-and-add operation
The CUDA kernel is compiled using the nvcc CUDA compiler into a PTX binary:
nvcc -ptx saxpy.cu
The nvcc compiler by default mangles symbols according to the C++ rules, which would mean that the mangled identifier (e.g., _Z5saxpyifPfS_ for this kernel) has to be specified as the name of the symbol in the subsequent bindkernel. The extern "C" declaration prevents this name mangling.
Next, we write a short Python script that binds the kernel, creates two arrays and launches the kernel.
import polyglot

# bind kernel from PTX file to callable
bindkernel = polyglot.eval(language='grcuda', string='bindkernel')
kernel = bindkernel('saxpy.ptx', 'saxpy', 'sint32, float, pointer, pointer')

# create and initialize two device arrays
devicearray = polyglot.eval(language='grcuda', string='DeviceArray')
n = 1_000_000
arr_x = devicearray('float', n)
arr_y = devicearray('float', n)
for i in range(n):
    arr_x[i] = i
    arr_y[i] = 1

# launch kernel
kernel(80, 128)(n, 2, arr_x, arr_y)

first10 = ', '.join(map(lambda x: str(x), arr_y[0:10]))
print(first10, '...')
Listing 4: GraalVM Python script that binds the symbol of compiled kernel to a callable object and launches the kernel.
First, we obtain the bindkernel function from grCUDA. We then invoke the function, pass the name of the PTX file and give it the name of the kernel. The last argument 'sint32, float, pointer, pointer' is the signature of the kernel. It specifies the types of the arguments that the kernel function expects. The signature is needed to bridge between dynamically typed GraalVM languages and statically typed CUDA code. GraalVM will coerce a value from a source language type to the type required by the signature if permitted by the coercion rules. grCUDA supports the simple types from the Truffle Native Function Interface (NFI). Note that a device memory pointer, e.g., float *x in the CUDA code, maps to the untyped pointer type pointer in the signature string. Typically, device array objects are provided for such pointer arguments. grCUDA extracts the underlying pointer of the array and provides it as an argument to the kernel.
Next, we create two device arrays and fill them from a Python for-loop. Finally, we launch the kernel in the line:
kernel(80, 128)(n, 2, arr_x, arr_y)
The kernel executable can be regarded as a function with two argument lists. The first argument list contains the kernel configuration that corresponds to the arguments inside <<< … >>> in CUDA.
The second argument list contains the actual arguments that are passed to the kernel. grCUDA will automatically unbox the memory pointers from the arr_x and arr_y device arrays passed to the kernel. For comparison, the equivalent kernel launch statement in CUDA is:
saxpy<<<80, 128>>>(n, 2, arr_x, arr_y); // arr_x and arr_y are float * device pointers.
We obtain the following output when we run the script in the Python interpreter of GraalVM.
$ graalpython --polyglot --jvm saxpy.py
1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0, 17.0, 19.0 ...
Runtime Kernel Compilation
grCUDA can also compile CUDA kernels at runtime. It leverages NVRTC Runtime Compilation to create a kernel from a source-code string. In order to use NVRTC in grCUDA, simply use the buildkernel function instead of bindkernel and provide the kernel source as a string instead of the name of the binary file, as we did above.
The corresponding Python snippet is:
# kernel source code in CUDA C
kernel_source = """__global__ void saxpy(int n, float alpha, float *x, float *y) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    y[i] = alpha * x[i] + y[i];
  }
}
"""

# build kernel from source and create callable
buildkernel = polyglot.eval(language='grcuda', string='buildkernel')
kernel = buildkernel(kernel_source, 'saxpy', 'sint32, float, pointer, pointer')
Listing 5: Runtime compilation of CUDA Kernels
Note that the extern "C" declaration is not needed, as grCUDA can retrieve the mangled kernel name from NVRTC directly.
Examples
We are going to discuss two examples that illustrate how to call a GPU-accelerated library from R and how to use grCUDA from an Express.js web application.
Example: RAPIDS cuML from R
This example illustrates how grCUDA can call functions of RAPIDS cuML from an R script. Even though cuML does not currently offer an R binding, grCUDA makes the native library accessible to GraalVM languages. The following R snippet applies DBSCAN [Ester et al. 1996], a density-based clustering algorithm on a two-dimensional dataset and identifies clusters as well as outlier points.
# install.packages('seriation')
library(seriation)
library(ggplot2)

data('Chameleon')
n_rows <- nrow(chameleon_ds4)
n_cols <- ncol(chameleon_ds4)

device_array <- eval.polyglot('grcuda', 'DeviceArray')
data <- device_array('double', n_rows, n_cols)
for (r in 1:n_rows) {
  for (c in 1:n_cols) {
    data[r, c] <- chameleon_ds4[r, c]
  }
}
labels <- device_array('int', n_rows)

# obtain cuML DBSCAN callable as a pre-registered function in grCUDA
dbscan_fit <- eval.polyglot('grcuda', 'ML::cumlDpDbscanFit')

eps <- 5              # max distance between points to be connected
min_samples <- 15     # min number of points in a cluster
max_bytes_per_batch <- 0
verbosity <- 0
dbscan_fit(data, n_rows, n_cols, eps, min_samples, labels,
           max_bytes_per_batch, verbosity)

chameleon_ds4$label <- labels[1:n_rows]
print(
  ggplot(chameleon_ds4, aes(x, y, color=factor(label))) +
    geom_point() +
    scale_colour_viridis_d(
      name='Cluster',
      labels=c('outlier', '0', '1', '2', '3', '4', '5')))
Listing 6: R script invokes DBSCAN from RAPIDS cuML.
In the example, we use the ‘CHAMELEON’ dataset from the
seriation package. This is a synthetic two-dimensional dataset that was used to evaluate the clustering algorithms. The dataset is available in the R data frame
chameleon_ds4. It contains 8,000 data points (rows) and two columns. In the first step, we copy it into a two-dimensional device array
data. We allocate a suitably sized device array and copy the elements from the data frame. We create a second device array, called
labels, into which DBSCAN will write the cluster index for each of the 8,000 points.
Next, we obtain the fit function from RAPIDS cuML through grCUDA. Several functions from RAPIDS are registered in grCUDA in the
ML namespace. We are working to eventually expose all algorithms from cuML. We obtain the callable object and store it in
dbscan_fit. When the function is invoked, we specify the two device arrays and a number of hyper parameters.
We add the label device vector as a column to the data frame. The slice expression
labels[1:n_rows] copies into an R integer vector, which is then added as a column to the data frame.
Finally, we display the points with cluster assignment with
ggplot. The resulting plot is shown in Figure 3. The black points represent outliers that were not associated with any cluster by DBSCAN.
Example: CUDA from a Node.JS/Express Web Application
GPU acceleration can be added with a few lines of code. This example illustrates this with a Node.js web application written with Express.js. The application computes the Mandelbrot Set as ASCII art, the same way it was shown in the first published picture by [Brooks and Matelski 1978]. Here, we are accelerating the computation with a GPU. The crucial point in this example is that the whole web application can be written in less than 50 lines of JavaScript code, including 17 lines for the CUDA kernel source code. The full Node.js source is shown in Listing 7.
const express = require('express')
const app = express()

const kernelSrc = `
__global__ void mandelbrot(int *img, int width_pixel, int height_pixel,
                           float w, float h, float x0, float y0, int max_iter) {
  const int x = blockIdx.x * blockDim.x + threadIdx.x;
  const int y = blockIdx.y * blockDim.y + threadIdx.y;
  float c_re = (w / width_pixel) * (x - width_pixel / 2) + x0;
  float c_im = (h / height_pixel) * (height_pixel / 2 - y) + y0;
  float z_re = 0, z_im = 0;
  int iter = 0;
  while ((z_re * z_re + z_im * z_im <= 4) && (iter < max_iter)) {
    float z_re_new = z_re * z_re - z_im * z_im + c_re;
    z_im = 2 * z_re * z_im + c_im;
    z_re = z_re_new;
    iter += 1;
  }
  img[y * width_pixel + x] = (iter == max_iter);
}`

const port = 3000
const widthPixels = 128
const heightPixels = 64

const buildkernel = Polyglot.eval('grcuda', 'buildkernel')
const DeviceArray = Polyglot.eval('grcuda', 'DeviceArray')
const kernel = buildkernel(kernelSrc, 'mandelbrot',
  'pointer, sint32, sint32, float, float, float, float, sint32')
const blockSize = [32, 8]
const grid = [widthPixels / blockSize[0], heightPixels / blockSize[1]]
const kernelWithGrid = kernel(grid, blockSize)

app.get('/', (req, res) => {
  const img = DeviceArray('int', heightPixels, widthPixels)
  kernelWithGrid(img, widthPixels, heightPixels, 3.0, 2.0, -0.5, 0.0, 255)
  var textImg = ''
  for (var y = 0; y < heightPixels; y++) {
    for (var x = 0; x < widthPixels; x++) {
      textImg += (img[y][x] === 1) ? '*' : ' '
    }
    textImg += '\n'
  }
  res.setHeader('Content-Type', 'text/plain')
  res.send(textImg)
})

app.listen(port, () => console.log(`Mandelbrot app listening on port ${port}!`))
Listing 7: Node.js web application with Express.js that produces the Mandelbrot set as ASCII art.
The source code of the CUDA kernel is provided as a multi-line template string. It computes for every grid point within the rectangle (-2, -1) to (1, 1) whether it belongs to the Mandelbrot set using the escape time algorithm. Each CUDA thread is responsible for one grid point. If the point is determined to be in the set (more precisely, most likely to be in the set), the thread writes 1 into the img array for this point, otherwise 0.
We build the kernel during startup of the application and set the launch grid, which consists of 4×8 blocks of 32×8 threads (kernelWithGrid). For every request, we allocate a device array and launch the kernel. Afterwards, we create the resulting ASCII art graphics from the device array and return it to the browser. Figure 4 shows the resulting Mandelbrot image in a browser window. The application returns the same image for every request; however, one can easily extend the application such that it accepts zoom parameters as part of the web request.
Getting Started with grCUDA
In this blog, I showed that using CUDA from within script languages (Python, JavaScript and R) is not difficult with grCUDA. grCUDA provides a uniform binding to all GraalVM languages for data exchange and code execution. Existing precompiled GPU code can be used as well as CUDA kernels that are dynamically compiled from language strings. One example illustrated how to use an accelerated machine learning algorithm from RAPIDS cuML in R, for which an R binding currently does not exist.
All code of the examples in this blog is available on GitHub:.
So how can you get started with grCUDA in your own applications you run on GraalVM?
- Make sure that you have the CUDA Toolkit 10 or 10.1 installed on your Linux system with a GPU of the Maxwell architecture or later. If you want to use RAPIDS cuML, your GPU must be of the Pascal generation or later.
- Download and install GraalVM, the community edition is sufficient
- Install grCUDA as described in the README.
- For running the RAPIDS cuML example above, follow the instructions in the README in the GitHub repository for this blog.
Next Steps
grCUDA is a collaborative effort between Oracle Labs and NVIDIA, open-sourced under a BSD-3-Clause license. grCUDA is under active development. The next features we are planning to add are (1) grCUDA-managed arrays that can be partitioned between device and host or replicated for fast simultaneous read-only access, (2) asynchronous execution of operations, (3) automatic transformations of host-language objects to a CUDA-friendly representation during data transfers.
We would like to learn about your use cases. So please feel free to open an issue on GitHub if you miss a feature or, better yet, get involved and open a pull request with your code.
For additional information, you can also watch our talk about grCUDA, “Simplifying NVIDIA GPU Access: A Polyglot Binding for GPUs with GraalVM“.
References
[Ester et al. 1996] Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu: A density-based algorithm for discovering clusters in large spatial databases with noise, KDD 1996
[Brooks and Matelski 1978] Robert Brooks, J. Peter Matelski: The dynamics of 2-generator subgroups of PSL(2,C). Stony Brook Conference 1978
|
https://developer.nvidia.com/blog/grcuda-a-polyglot-language-binding-for-cuda-in-graalvm
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
VerifyDiagnosticConsumer - Create a diagnostic client which will use markers in the input source to check that all the emitted diagnostics match those expected. More...
#include "clang/Frontend/VerifyDiagnosticConsumer.h"
VerifyDiagnosticConsumer - Create a diagnostic client which will use markers in the input source to check that all the emitted diagnostics match those expected.
INVOKING THE DIAGNOSTIC CHECKER:
VerifyDiagnosticConsumer is typically invoked via the "-verify" option to "clang -cc1". "-verify" is equivalent to "-verify=expected", so all diagnostics are typically specified with the prefix "expected". For example:
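(The example snippets that follow in this section are illustrative reconstructions based on the surrounding descriptions; the original page's snippets are not preserved in this extract.)

int A = B; // expected-error {{use of undeclared identifier 'B'}}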
Custom prefixes can be specified as a comma-separated sequence. Each prefix must start with a letter and contain only alphanumeric characters, hyphens, and underscores. For example, given just "-verify=foo,bar", the above diagnostic would be ignored, but the following diagnostics would be recognized:
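int A = B; // foo-error {{use of undeclared identifier 'B'}}
int A = B; // bar-error {{use of undeclared identifier 'B'}}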
Multiple occurrences accumulate prefixes. For example, "-verify -verify=foo,bar -verify=baz" is equivalent to "-verify=expected,foo,bar,baz".
SPECIFYING DIAGNOSTICS:
Indicating that a line expects an error or a warning is simple. Put a comment on the line that has the diagnostic, use:
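expected-{error,remark,warning,note}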
to tag if it's an expected error, remark or warning, and place the expected text between {{ and }} markers. The full text doesn't have to be included, only enough to ensure that the correct diagnostic was emitted.
Here's an example:
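int A = B; // expected-error {{use of undeclared identifier 'B'}}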
You can place as many diagnostics on one line as you wish. To make the code more readable, you can use slash-newline to separate out the diagnostics.
Alternatively, it is possible to specify the line on which the diagnostic should appear by appending "@<line>" to "expected-<type>", for example:
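For instance (the line number here is illustrative):

// expected-warning@10 {{some text}}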
The line number may be absolute (as above), or relative to the current line by prefixing the number with either '+' or '-'.
If the diagnostic is generated in a separate file, for example in a shared header file, it may be beneficial to be able to declare the file in which the diagnostic will appear, rather than placing the expected-* directive in the actual file itself. This can be done using the following syntax:
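For instance (the header name and diagnostic text are illustrative):

// expected-error@include/header.h:15 {{unknown type name 'cake'}}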
The path can be absolute or relative and the same search paths will be used as for include directives. The line number in an external file may be substituted with '*' meaning that any line number will match (useful where the included file is, for example, a system header where the actual line number may change and is not critical).
As an alternative to specifying a fixed line number, the location of a diagnostic can instead be indicated by a marker of the form "#<marker>". Markers are specified by including them in a comment, and then referenced by appending the marker to the diagnostic with "@#<marker>":
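#warning some text // #1
// expected-warning@#1 {{some text}}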
The name of a marker used in a directive must be unique within the compilation.
The simple syntax above allows each specification to match exactly one error. You can use the extended syntax to customize this. The extended syntax is "expected-<type> <n> {{diag text}}", where <type> is one of "error", "warning" or "note", and <n> is a positive integer. This allows the diagnostic to appear as many times as specified. Example:
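int A = B + C; // expected-error 2 {{use of undeclared identifier}}

Here both B and C are undeclared, so the same diagnostic is expected to be emitted exactly twice.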
Where the diagnostic is expected to occur a minimum number of times, this can be specified by appending a '+' to the number. Example:
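// expected-warning 0+ {{some text}}
// expected-warning 1+ {{some text}}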
In the first example, the diagnostic becomes optional, i.e. it will be swallowed if it occurs, but will not generate an error if it does not occur. In the second example, the diagnostic must occur at least once. As a short-hand, "one or more" can be specified simply by '+'. Example:
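// expected-warning + {{some text}}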
A range can also be specified by "<n>-<m>". Example:
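// expected-warning 0-1 {{some text}}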
In this example, the diagnostic may appear only once, if at all.
Regex matching mode may be selected by appending '-re' to type and including regexes wrapped in double curly braces in the directive, such as:
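// expected-error-re {{use of undeclared identifier '{{[A-Z]+}}'}}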
Examples matching error: "variable has incomplete type 'struct s'"
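// expected-error {{variable has incomplete type 'struct s'}}
// expected-error {{variable has incomplete type}}
// expected-error-re {{variable has incomplete type 'struct {{.}}'}}
// expected-error-re {{variable has incomplete type 'struct {{[a-z]+}}'}}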
VerifyDiagnosticConsumer expects at least one expected-* directive to be found inside the source code. If no diagnostics are expected the following directive can be used to indicate this:
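// expected-no-diagnostics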
Definition at line 186 of file VerifyDiagnosticConsumer.h.
Create a new verifying diagnostic client, which will issue errors to the currently-attached diagnostic client when a diagnostic does not match what is expected (as indicated in the source file).
Definition at line 661 of file VerifyDiagnosticConsumer.cpp.
Definition at line 670 of file VerifyDiagnosticConsumer.cpp.
References clang::DiagnosticsEngine::ownsClient().
Callback to inform the diagnostic client that processing of a source file is beginning.
Note that diagnostics may be emitted outside the processing of a source file, for example during the parsing of command line options. However, diagnostics with source range information are required to only be emitted in between BeginSourceFile() and EndSourceFile().
Reimplemented from clang::DiagnosticConsumer.
Definition at line 681 of file VerifyDiagnosticConsumer.cpp.
References clang::DiagnosticConsumer::BeginSourceFile(), and clang::Preprocessor::getSourceManager().
Callback to inform the diagnostic client that processing of a source file has ended.
The diagnostic client should assume that any objects made available via BeginSourceFile() are inaccessible.
Reimplemented from clang::DiagnosticConsumer.
Definition at line 702 of file VerifyDiagnosticConsumer.cpp.
References clang::DiagnosticConsumer::EndSourceFile().
HandleComment - Hook into the preprocessor and extract comments containing expected errors and warnings.
Implements clang::CommentHandler.
Definition at line 764 of file VerifyDiagnosticConsumer.cpp.
References clang::SourceRange::getBegin(), clang::SourceRange::getEnd(), clang::Preprocessor::getSourceManager(), ParseDirective(), SM, and string().
Handle this diagnostic, reporting it to the user or capturing it to a log as needed.
The default implementation just keeps track of the total number of warnings and errors.
Reimplemented from clang::DiagnosticConsumer.
Definition at line 722 of file VerifyDiagnosticConsumer.cpp.
References clang::HeaderSearch::findModuleForHeader(), clang::SourceManager::getExpansionLoc(), clang::SourceManager::getFileEntryForID(), clang::SourceManager::getFileID(), clang::Preprocessor::getHeaderSearchInfo(), clang::Diagnostic::getLocation(), clang::Diagnostic::getSourceManager(), clang::Diagnostic::hasSourceManager(), clang::SourceManager::isLoadedFileID(), IsUnparsed, IsUnparsedNoDirectives, clang::SourceLocation::isValid(), and UpdateParsedFileStatus().
Update lists of parsed and unparsed files.
Definition at line 1022 of file VerifyDiagnosticConsumer.cpp.
References findDirectives(), clang::FileID::isInvalid(), IsParsed, IsUnparsedNoDirectives, and SM.
Referenced by HandleDiagnostic().
|
https://clang.llvm.org/doxygen/classclang_1_1VerifyDiagnosticConsumer.html
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
If you are new to WPF, you must know that XAML can define or declare any object under your local / global namespace just like your code does. It has full capability to load up your types based on the namespace you provide in your XAML.
Lets look how you can load your own namespace in XAML.
<Window x:Class="..."
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
</Window>
This is the basic XAML that appears from the template when you just create a new project. Now if you look closely, you can see the xmlns declaration, which points to the URL of the presentation layer. This ensures that all the WPF object references are available to your XAML, so you do not need to refer to their namespace. Now what if you need to use something other than this? As I told you, XAML can load any external type. You need to add a namespace to the XAML, which will instruct the loader to load the CLR assembly and expose the types that you use in the XAML.
For simplicity lets add a string as resource in the Window.
<Window x:Class="..."
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:System="clr-namespace:System;assembly=mscorlib">
    <Grid>
        <Grid.Resources>
            <System:String x:Key="strString">This is a string</System:String>
        </Grid.Resources>
        <TextBox Text="{StaticResource strString}" />
    </Grid>
</Window>
So here the xmlns:System declaration will load the assembly mscorlib and expose the namespace System to your XAML. clr-namespace:System specifies the namespace and assembly=mscorlib refers to the assembly. You should remember that the assembly should exist in the GAC or the local bin folder to make sure the XAML can load it properly.
After you load the assembly, you can refer to the types as shown in the XAML; the TextBox loads the string from the resource as its text.
Similar to this, you can also load your own custom type and use it in XAML.
Lets define a class :
namespace MyNamespace.Extension
{
    public class MyClass
    {
        public string X { get; set; }
        public string Y { get; set; }
    }
}
Now to refer to this class in XAML you can use :
<Window x:Class="..."
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:System="clr-namespace:System;assembly=mscorlib"
        xmlns:local="clr-namespace:MyNamespace.Extension">
    <Grid>
        <Grid.Resources>
            <System:String x:Key="strString">This is a string</System:String>
            <local:MyClass x:Key="myClassInstance" X="..." Y="..." />
        </Grid.Resources>
    </Grid>
</Window>
Thus you can see that I can refer to my own type using the clr-namespace attribute. You can specify the assembly here too, but it is not mandatory; when you do not specify it, it will point to the local assembly. local:MyClass can now refer to my type, with the X and Y properties exposed as well.
I hope this will help you.
Thanks for reading.
|
https://dailydotnettips.com/1377/
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
OS-dependent allocation and deallocation of locked/pinned memory pages. More...
#include <lockedpool.h>
OS-dependent allocation and deallocation of locked/pinned memory pages.
Abstract base class.
Definition at line 19 of file lockedpool.h.
Allocate and lock memory pages.
If len is not a multiple of the system page size, it is rounded up. Returns nullptr in case of allocation failure.
If locking the memory pages could not be accomplished it will still return the memory, however the lockingSuccess flag will be false. lockingSuccess is undefined if the allocation fails.
Implemented in PosixLockedPageAllocator.
Unlock and free memory pages.
Clear the memory before unlocking.
Implemented in PosixLockedPageAllocator.
Get the total limit on the amount of memory that may be locked by this process, in bytes.
Return size_t max if there is no limit or the limit is unknown. Return 0 if no memory can be locked at all.
Implemented in PosixLockedPageAllocator.
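To make the shape of this interface concrete, here is a minimal sketch of the abstract base class as implied by the member descriptions above; the exact signatures are assumptions, not copied from lockedpool.h:

#include <cstddef>

class LockedPageAllocator
{
public:
    virtual ~LockedPageAllocator() {}
    // Allocate and lock len bytes (rounded up to the system page size); set
    // *lockingSuccess to false if the pages could not be locked.
    virtual void* AllocateLocked(size_t len, bool* lockingSuccess) = 0;
    // Clear, unlock and free pages previously returned by AllocateLocked.
    virtual void FreeLocked(void* addr, size_t len) = 0;
    // Total limit on lockable memory; size_t max if unknown or unlimited.
    virtual size_t GetLimit() = 0;
};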
|
https://doxygen.bitcoincore.org/class_locked_page_allocator.html
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
Configurable custom overlay box that can be used to show overlay windows. The overlays can also be switched to display differently on small screens.
Check out how to include Origami components in your project to get started with
o-overlay.
A new overlay may be created imperatively with JavaScript, without any markup. Alternatively, define an overlay declaratively by adding a template of overlay content and a button to open the overlay.
To define a template, add the following script tag with the content of your overlay. Ensure you specify a unique id for your template:
<script type="text/template" id="overlay1-content"> <p>Content of overlay</p> </script>
Then add a trigger button with the class o-overlay-trigger, which will open the overlay on click. Connect the trigger to your overlay by specifying the template id in data-o-overlay-src with a # (in the format of an id selector). Then name your overlay with a unique id using the data-o-overlay-id attribute.
<button class="o-overlay-trigger" data-Open!</button>
Configure overlay options by adding them as data attributes to the trigger element, data-o-overlay-[OPTION]. For instance add data-o-overlay-compact="true" for a compact overlay:
<button class="o-overlay-trigger" data-Open!</button>
Include the oOverlay mixin to output style for all o-overlay features:
@include oOverlay();
.o-overlay { /* styles */ }
.o-overlay--compact { /* styles */ }
.o-overlay--full-screen { /* styles */ }
/* etc. */
To output styles for specific variants only, pass options (see variants for the available options). For example, to output only the base styles and the compact variant, ignoring other variants:
@include oOverlay($opts: ( 'variants': ('compact') ));
.o-overlay { /* styles */ }
.o-overlay--compact { /* styles */ }
This table outlines all of the possible variants you can request in the
oOverlay mixin:
JavaScript is initialised on o-overlay elements automatically for Origami Build Service users. If your project is using a manual build process, initialise o-overlay manually.
For example call the init method to initialise all o-overlay instances in the document:
import oOverlay from 'o-overlay';
oOverlay.init();
Or pass an element to initialise a specific o-overlay instance:
import oOverlay from 'o-overlay';
const oOverlayElement = document.getElementById('my-o-overlay-element');
oOverlay.init(oOverlayElement);
You may also construct a new overlay without existing o-overlay elements, by passing options to the constructor. Among the options described for this component:

- a shaded heading. Note: for this to work properly, the heading-shaded variant must be included with the CSS (it is by default).
- a full-screen overlay. Note: for this to work properly, the full-screen variant must be included with the CSS (it is by default).
- compact: Boolean. If true, the .o-overlay--compact class will be added to the overlay, which reduces the heading font-size and paddings in the content. Note: for this to work properly, the compact variant must be included with the CSS (it is by default).
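As a rough sketch of imperative construction (the option names below are assumptions for illustration; check the component's API reference for the exact constructor signature):

import oOverlay from 'o-overlay';

const myOverlay = new oOverlay('myOverlay', {
	html: '<p>Content of overlay</p>',
	compact: true
});
myOverlay.open();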
oOverlay.ready is dispatched when the overlay is loaded in the DOM. The event detail has the following properties:

- detail.instance: the initialised o-overlay instance

oOverlay.destroy is dispatched when the overlay is removed from the DOM.

If an element inside the overlay should receive focus when it opens, call the .focus() function on that element when oOverlay.ready is dispatched to simulate the behaviour.
If you have any questions or comments about this component, or need help using it, please either raise an issue, visit #origami-support or email Origami Support.
This software is published by the Financial Times under the MIT licence.
|
https://registry.origami.ft.com/components/o-overlay@3.1.0/readme?brand=master
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
The author assumes some prior programming experience, so basic knowledge is not covered in detail; these are advanced notes. If you are a beginner, you can learn by combining them with the official Python documentation.
Naming rules of identifier
The first character must be a letter or an underscore
The rest of the identifier consists of letters, numbers, and underscores
Identifiers are case sensitive
In general, there is no difference between variables and constants in Python. To make constants easier to recognise, they are written in all capital letters by convention.
At the same time, it should be noted that variables and constants should not have the same name as a keyword.
keyword
Enter the following code in the interactive window to see all the keywords in Python
>>> import keyword
>>> keyword.kwlist
notes
In Python, single line comments begin with #
Multi line comments are enclosed in three pairs of single quotation marks or three pairs of double quotation marks, as follows:
# Single-line comment

"""
multiline comment
"""
Different from C language
Visually, the biggest difference between Python and C is that Python uses indentation to represent code blocks, and statements do not need to end with a semicolon (;). Of course, semicolons can still be used, for example to put multiple statements on one line:
print("First sentence");print("The second sentence");print("The third sentence")
print("hello world")
At the same time, the above code is a must-type program for beginners. When typing it, you will find that print ends its output with a newline by default. If you do not want the newline, pass end="" (or another string) as the end argument of print.
Waiting for user input
You can end the program with the following statement
input("Press enter Key to exit.")
Basic data type
Note: Although there are basic data types, in Python, variables do not need to declare their data types when they are defined, they are defined directly by assignment statements
There are six standard data types in Python:
Number
String (string)
List (list)
Tuple (tuple)
Set
Dictionary (Dictionary)
About strings:
Single quotation marks and double quotation marks are used exactly the same
R "this is a line with" indicates that the natural string will not wrap
In Python, there is no single character type. A single character is a string of length 1.
For more details, please refer to the Python documentation.
operator
The logical operators not, and and or are written as words, unlike C's !, && and ||
In addition, it also includes some new operators: bit operator, member operator and identity operator
Data reference (can be understood as labeling)
- The concept of a variable in Python is similar to a pointer in C++: a variable holds the address of the data it records, which is called a reference
- You can use the id() function to see the address of the data stored in the variable
- In Python, parameter passing and return value of function are realized by reference
Note: if a variable has been defined, when changing the assignment of a variable, the data reference is actually modified.
Variable and immutable types
Immutable refers to whether the data in memory can be modified
- Immutable type: number type, string, tuple
- Variable types: list, dictionary, set
Note: the key in the key value pair must be of immutable type.
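A quick sketch of the difference (illustrative):

t = (1, 2, 3)       # tuple: immutable
lst = [1, 2, 3]     # list: mutable
lst.append(4)       # modifies the data in place; the variable's reference is unchanged

d = {}
d[t] = "ok"         # an immutable tuple may be used as a dictionary key
d[lst] = "boom"     # TypeError: unhashable type: 'list'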
Some special functions and methods
Hash (hash)
Python has a built-in function called hash(), which is an algorithm for extracting the signature of immutable type data
round(number, ndigits)
Rounds a number to a given number of decimal places
The first parameter is the number to round, and the second parameter is the number of decimal places to keep
strip()
The string's built-in strip() method removes whitespace (spaces, newlines, tabs and so on) from the beginning and end of the string
dir()
The built-in dir() function finds all the names defined in the module
Local variable and global variable
Local variables are defined inside functions and can only be used inside functions
Global variables are defined outside functions and can be used by all functions
- After the function is executed, the local variables inside the function will be recycled by the system
- Local variables with the same name can be defined in different functions without conflict
Note: in other development languages, heavy use of global variables is discouraged, because too wide a variable scope reduces the maintainability of a program. When naming global variables, a g_ or gl_ prefix should be added.
Simply assigning to a global variable's name inside a function does not modify it; the modification in the following case only defines a local variable with the same name inside the function
# Define a global variable
num = 10

def demo1():
    # You want to modify the value of the global variable inside the function,
    # but this only creates a local variable with the same name
    num = 99
    print("num = %d" % num)

def demo2():
    print("num = %d" % num)
The correct way to modify a global variable in a function is to use the global + variable name for declaration. In this case, when the assignment statement is used inside the function, the local variable will not be created
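A small sketch of this (variable names are only for illustration):

num = 10

def demo():
    # declare that we mean the module-level variable, not a new local one
    global num
    num = 99

demo()
print(num)   # 99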
Function advanced
If you want the function to return more than one value at a time, you can use tuples and omit parentheses
def measure():
    """Measure temperature and humidity"""
    print("Start measuring")
    temp = 20
    wetness = 50
    print("End of measurement")
    return temp, wetness

result = measure()
print(result)
The method of exchanging two variables
# Use another variable
temp = a
a = b
b = temp

# Without using another variable
a = a + b
b = a - b
a = a - b

# Use a tuple
a, b = (b, a)   # the parentheses can be omitted
If the parameter passed is a variable type, the data content will be modified by using methods inside the function, and the external data will also be affected
Note: in Python, calling += on a list variable essentially executes the extend method and does not modify the variable's reference. At the same time, list += other_list is not the same as list = list + other_list: the former is an in-place method call, the latter is an assignment operation.
Default parameters
When defining a function, parameters with default values are called default parameters. Generally, common values are set as the default parameters of parameters to simplify the function call
gl_list = [6, 3, 9]

# Sort using the sort method; by default it sorts in ascending order
gl_list.sort()
print(gl_list)

# If you need to sort in descending order, specify the reverse parameter
gl_list.sort(reverse=True)
print(gl_list)
The reverse parameter here is the default parameter
So, how to specify the default parameters of a function? You can use the assignment statement in the parameter list
def test(parameter_1, parameter_2=some_default_value):
Notes for default parameters:
1) You must ensure that default parameters with default values are defined at the end of the parameter list
2) When calling a function, if there are multiple default parameters, you need to specify the parameter name when assigning values
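For example (the function and parameter names are illustrative):

def make_tag(text, tag="p", css_class=None):
    if css_class is None:
        return "<%s>%s</%s>" % (tag, text, tag)
    return '<%s class="%s">%s</%s>' % (tag, css_class, text, tag)

print(make_tag("hello"))                          # default parameters sit at the end of the parameter list
print(make_tag("hello", css_class="highlight"))   # name the default parameter you want to override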
Multivalued parameter
Sometimes, the number of parameters that a function can handle is uncertain, so multi valued parameters are needed
There are two kinds of multivalued parameters in Python:
*args -- storing tuple parameters
**kwargs -- store dictionary parameters
Unpacking syntax: it can simplify the transmission of tuple variables and dictionary variables
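A small sketch of multi-value parameters and unpacking (illustrative):

def demo(*args, **kwargs):
    print(args)     # extra positional arguments, gathered into a tuple
    print(kwargs)   # extra keyword arguments, gathered into a dictionary

demo(1, 2, 3, name="tom", age=3)

# unpacking: pass an existing tuple/dictionary through without re-wrapping it
numbers = (1, 2, 3)
info = {"name": "tom", "age": 3}
demo(*numbers, **info)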
Recursion of functions
Features: a function calls itself internally, like a set of nesting dolls
Code features:
1. The code inside the function is the same, but the result is different for different parameters
2. When a parameter satisfies a condition, the function will no longer be executed, which is usually called the exit of recursion, which is very important, otherwise there will be a dead loop
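A minimal example (illustrative):

def sum_numbers(num):
    # recursion exit: without this the function would call itself forever
    if num == 1:
        return 1
    return num + sum_numbers(num - 1)

print(sum_numbers(100))   # 5050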
Process oriented and object oriented
Process oriented: all the steps to complete a certain requirement are gradually implemented from beginning to end. According to the development requirements, some functional independent codes are encapsulated into functions, and the final code is to call different functions in sequence.
Object oriented: compared with function, object-oriented is a larger encapsulation. Before completing a certain requirement, the first thing to determine is the responsibility and object. Different objects are determined, and different methods are encapsulated in the object. The final code is to make different objects call different methods in sequence.
Classes and objects
Class and object are two core concepts of object-oriented programming
A class is like a drawing for making an airplane: it is a template and is responsible for creating objects. Its three elements are the class name, attributes and methods. Class names follow the CapWords (camel-case) convention.
Objects are like airplanes made from drawings. They are concrete entities created by classes
In Python, objects are ubiquitous, such as variables, data, and functions. There are two ways to view all the properties and methods of specific objects:
1) Enter after the identifier. Then press TAB
2) Using the Python built-in function dir()
Define a simple class
# Define a class
class ClassName:

    # The first argument of a method must be self
    def method_1(self, parameter_list):
        pass

# Create an object
object_variable = ClassName()
Note: self is the reference of the object which calls the method
class Cat:
    """This is a cat"""

    def eat(self):
        print("The cat wants to eat fish")

    def drink(self):
        print("Kittens need water")

tom = Cat()
tom.eat()
tom.drink()
print(tom)   # Output: <__main__.Cat object at a hexadecimal address>
In object-oriented development, the concept of reference is also applicable!
1) %d can output numbers in decimal and %x can output numbers in hexadecimal
2) The function id() can return the address (identity) of a variable
Outside the class, an assignment of the form object.attribute_name = value adds an attribute to that specific object, but this is not recommended, because the attribute is not encapsulated inside the class
class Cat:

    def __init__(self):
        print("This is an initialization method")

# When you create an object with ClassName(), the initialization method is called automatically
tom = Cat()
When you create a specific object with the class name (), the following actions are automatically performed:
1) Allocate space in memory for objects
2) Call the initialization method __init__ to set initial values
Three built-in methods in class
1) When an object is created, the __init__ method is called automatically
2) When an object is destroyed from memory, the __del__ method is called automatically
3) If you want print(object) to output some custom content, you can use the __str__ method; note that __str__ must return a string
Inside the method of an object, you can directly access its properties
class Cat:

    def __init__(self, name):
        self.name = name
        print("initialization")

    def __del__(self):
        print("end")

    def __str__(self):
        return "The kitten's name is %s" % self.name

tom = Cat("Tom")
print(tom)
del tom
1) A property of an object can be an object created by another class
2) When defining an attribute, if you don't know what initial value to set, you can set it to None, and the None keyword can be assigned to any variable
Is and is not are identity operators used to compare the memory addresses of two objects
is used to determine whether two variable reference objects are the same
==Used to determine whether the values of referenced variables are equal
if self.gun == None:
    pass

if self.gun is None:
    pass

# The latter is better
Private properties and methods
In the actual development, some properties or methods of the object only want to be used inside the object, and do not want to be accessed outside
Private properties are properties that an object does not want to expose and can only be accessed inside the object
Private methods are methods that objects don't want to expose
Definition method: when defining a property or method, add 2 underscores before the property name or method name to define a private property or method
However, the "private" described above is only pseudo-private in Python: the name can still be reached with the syntax _ClassName__name, which should not be used in actual development
class Women:

    def __init__(self, name):
        self.name = name
        # Private attribute
        self.__age = 18

    # Private method
    def __secret(self):
        print("%s's age is %d" % (self.name, self.__age))

xiaofang = Women("Xiaofang")

# The following operations will report an error
print(xiaofang.__age)
xiaofang.__secret()

# Private attributes and methods can still be accessed by the following operations
xiaofang._Women__age
xiaofang._Women__secret()
inherit
Three characteristics of object oriented
- Encapsulation: encapsulating properties and methods into an abstract class based on responsibilities
- Inheritance: to achieve code reuse, the same code does not need to be written repeatedly
- Polymorphism: different objects call the same method, produce different results, increase the flexibility of the code
The concept of inheritance: subclasses have all the properties and methods of the parent class
Inherited syntax:
class SubClassName(ParentClassName):
    pass
# Case
class Animal:

    def eat(self):
        print("eat")

    def drink(self):
        print("drink")

    def sleep(self):
        print("sleep")

    def run(self):
        print("run")

class Dog(Animal):

    def bark(self):
        print("Woof, woof, woof")

dahuang = Dog()
dahuang.eat()
dahuang.bark()

# The subclass inherits from the parent class and can use the methods encapsulated
# in the parent class without developing them again
# A class should encapsulate its own attributes and methods according to its responsibilities
In the above cases, the Dog class is a subclass of the Animal class, the Animal class is the parent class of the Dog class, and the Dog class inherits from the Animal class; it can also be said that the Dog class is a derived class of the Animal class, the Animal class is the base class of the Dog class, and the Dog class derives from the Animal class
Inheritance is transitive: a popular explanation is that family property, the property of grandparents and grandchildren can be used
When inheriting, when the method of the parent class cannot meet the needs of the subclass, you can rewrite the method in the subclass in the following ways:
1) To override the method of the parent class, a method with the same name as the parent class is defined in the subclass
2) Expand the parent class method, that is, rewrite the parent class method in the subclass, and use super() where necessary. The parent class method calls the execution of the parent class method, and other locations can be written according to the specific needs
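A short sketch of extending a parent class method with super() (illustrative):

class Animal:

    def eat(self):
        print("eat")

class Dog(Animal):

    # same method name as the parent class: this overrides it
    def eat(self):
        print("sniff the food first")
        # call the parent class' implementation where it is needed
        super().eat()

Dog().eat()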
Private ownership and inheritance
1) Subclasses cannot access private properties and methods in their own methods, but can access common properties and methods in their own methods
2) If a public method in a parent class calls a private property or method, the subclass can also access it indirectly
class Father:

    def __init__(self):
        # Public attribute
        self.gongzi = 10000
        # Private attribute
        self.__sifangqian = 2000

    # Private method
    def __secret(self):
        print("I have private money, ", end='')

    # Public method
    def public(self):
        self.__secret()
        print("%s yuan" % self.__sifangqian)

class Son(Father):
    pass

son = Son()
son.public()
Multiple inheritance
A subclass can have more than one parent class and have all the properties and methods of the parent class
Grammar:
class SubClassName(ParentClassName1, ParentClassName2, ...):
    pass
But there is a problem with multiple inheritance, as shown in the figure below
If several of the inherited parent classes define methods with the same name, which parent class' method will be called? This leads to confusion, and the situation should be avoided in actual programming
Solution to the above situation: the Python interpreter has a built-in property for the class__ mro__ , which is mainly used to determine the path of methods and attributes in multi inheritance
class A:

    def test(self):
        print("A-test")

    def demo(self):
        print("A-demo")

class B:

    def test(self):
        print("B-test")

    def demo(self):
        print("B-demo")

class C(A, B):
    pass

c = C()
c.test()
c.demo()

print(C.__mro__)
# (<class '__main__.C'>, <class '__main__.A'>, <class '__main__.B'>, <class 'object'>)
New type and old type
New class: class based on object, recommended
Old class: a class not based on object
The Python 3 interpreter creates new-style classes by default
The dir() function looks at properties and methods in an object
polymorphic
Polymorphism: different subclass objects call the same parent class method to produce different results and increase the flexibility of the code
The premise of polymorphism: Inheriting and overriding parent class methods
Class properties and class methods
Class is a special object. When the program is running, the class will also be loaded into memory, and the class object has only one share in memory. Therefore, the class object can also have its own properties and methods
You can access the properties and methods of a class by using the class name
Define class attribute syntax:
# Tools
class Tool(object):

    # Use an assignment statement below the class name to define a class attribute,
    # recording characteristics related to the class as a whole
    count = 0

    def __init__(self, name):
        self.name = name
        # You can record how many object instances have been created this way
        Tool.count += 1
Property acquisition mechanism in Python
Define class method syntax:
@classmethod
def class_method_name(cls):
    # Inside the method, you can access class attributes and call other class methods through cls
    pass
class Tool(object):

    # Class attribute
    count = 0

    # Class method
    @classmethod
    def show_tool_count(cls):
        print("Current number of tools: %d" % cls.count)

    def __init__(self, name):
        self.name = name
        Tool.count += 1

tool_1 = Tool("axe")
tool_2 = Tool("pliers")
tool_3 = Tool("wrench")

Tool.show_tool_count()
Static method
In actual development, if you want to encapsulate a method in a class, this method:
There is no need to access instance properties or call instance methods
There is no need to access class properties or call class methods
This method can be encapsulated as a static method with the following syntax:
@staticmethod
def static_method_name():
    pass

# A static method can be called through the class name
case
class Game(object):

    # Class attribute
    top_score = 0

    # Instance attributes are defined in the initialization method
    def __init__(self, player_name):
        self.player_name = player_name

    # Static method
    @staticmethod
    def show_help():
        print("Help information: plant and prevent zombies from entering your house")

    # Class method
    @classmethod
    def show_top_score(cls):
        print("Highest score in history: %d" % cls.top_score)

    # Instance method
    def start_game(self):
        print("Welcome, player %s, the game is about to begin" % self.player_name)

# 1. View the game help information
Game.show_help()

# 2. View the highest score of the game
Game.show_top_score()

# 3. Create a game object
zwdzjs = Game("player1")
zwdzjs.start_game()
analysis:
Design a class, need to be clear
1) Properties:
Class properties
Instance properties
2) Methods:
Static method
Class methods: accessing class properties
Instance method: access instance properties
Note: if a method needs to access both instance properties and class properties, it is defined as an instance method
Design pattern
Design pattern is the summary and refinement of previous work, and the solution to a specific problem
The purpose of using design patterns is to reuse code, make it easier for others to understand and ensure the reliability of code
Singleton design pattern
Purpose: let the object created by class have only one instance in the system
For example: This is like a music player that can only play one piece of music at a time
Each time the class name () is executed, the memory address of the returned object is the same
When you create an object with ClassName(), the Python interpreter calls the built-in static method __new__, which is used to:
1) Allocate space for objects in memory
2) Returns a reference to an object
After the Python interpreter gets a reference to the object, it passes it as the first parameter to the __init__ method
To implement the singleton design we override the __new__ method. Based on the role of __new__ described above, note the following when overriding it:
Be sure to return super().__new__(cls)
Otherwise, the Python interpreter will not get the object reference after space allocation and will not call the object's initialization method
Note: __new__ is a static method, so the cls parameter must be passed explicitly when calling it
class MusicPlayer(object):

    # Override the __new__ method
    def __new__(cls, *args, **kwargs):
        print("new method")
        # Allocate space for the instantiated object and return a reference
        return super().__new__(cls)

    def __init__(self):
        print("Initialization process")

wangyiyun = MusicPlayer()
print(wangyiyun)
The idea and code implementation of singleton design pattern are as follows:
class MusicPlayer():

    # Record the reference to the first created instance
    instance = None

    # Override the __new__ method
    def __new__(cls, *args, **kwargs):
        # 1. Judge whether the class attribute instance is empty
        if cls.instance is None:
            # Call the parent class' __new__ method to allocate space for the first object
            cls.instance = super().__new__(cls)
        # Return the object reference saved in the class attribute
        return cls.instance

wangyiyun = MusicPlayer()
print(wangyiyun)

qqyinyun = MusicPlayer()
print(qqyinyun)

# The running result shows that the memory addresses are the same
If the initialization action is only executed once in the above code, the code is as follows:
class MusicPlayer():

    # Record the reference to the first created instance
    instance = None
    # Record whether the initialization method has been executed
    init_flag = False

    def __new__(cls, *args, **kwargs):
        # 1. Judge whether the class attribute instance is empty
        if cls.instance is not None:
            return cls.instance
        # Call the parent class' __new__ method to allocate space for the first object
        cls.instance = super().__new__(cls)
        # Return the object reference saved in the class attribute
        return cls.instance

    def __init__(self):
        # Determine whether the initialization action has already been performed
        if MusicPlayer.init_flag:
            return
        # If not, execute it
        print("initialization")
        # Modify the class attribute flag
        MusicPlayer.init_flag = True
abnormal
When the program is running, if the Python interpreter encounters an error, it will stop the execution of the program and prompt an error message, which is an exception
The action of a program stopping execution and prompting an error message is called throwing an exception
It is very difficult to anticipate every special situation during program development. By catching exceptions, we can handle unexpected situations in a centralized way, ensuring the stability and robustness of the program
The simplest syntax format for catching exceptions:
try:
    # The code you are trying to execute, not sure whether it runs correctly
    ...
except:
    # Error handling code; skipped if there is no error
    ...

# A concrete example:
try:
    num = int(input("Please enter an integer:"))
except:
    print("Please enter the correct integer")
However, when we encounter different types of errors, we need to make different responses to different types of errors, that is, error type capture
By the way, what is an error type?
The error type is the first word in the last line of the error message when the interpreter throws an exception
The processing syntax is as follows:
try:
    # The code you are trying to execute
    pass
except ErrorType1:
    # Handling of error type 1
    pass
except (ErrorType2, ErrorType3):
    # Handling of error types 2 and 3
    pass
except Exception as result:
    # Catch any unknown error
    print("unknown error %s" % result)
else:
    # Code that executes only when there are no exceptions
    pass
finally:
    # Code that executes regardless of whether there are exceptions
    pass
Exception passing
When an exception occurs in a function / method, the exception will be passed to the function / method calling party. If the exception is passed to the main program and there is no exception handling, the program will be terminated
Using the transitivity of exception, we only need to capture and handle the exception in the main program, so we don't need to add a lot of exception capture in the code to ensure the neatness of the code
Throw raise exception
In the development, we can also actively throw exceptions according to the special business needs. For example, when developing the user login interface, if the password entered by the user is not long enough, we can actively throw exceptions
1) Create an Exception object, which is a special Exception class in Python
2) Use the raise keyword to throw the exception object

# Create the exception object
ex = Exception("The password is not long enough")
# Throw the exception object
raise ex

input_password()
With the above code, when the password is not long enough, an error is thrown actively
Of course, we can actively catch the exceptions we throw
# Create an exception object -- you can use the error message string as an argument
ex = Exception("The password is not long enough")
raise ex

try:
    input_password()
except Exception as result:
    print("%s" % result)
When the password length is not enough, the console output is as follows:
modular
Module is a core concept of Python program architecture
Every Python source code file that ends with the extension .py is a module
The module name is also an identifier. The global variables, functions and classes defined in the module are tools for external use
There are two ways to import modules
1)
import module name 1, module name 2 (yes, but not recommended)
Recommended syntax for importing multiple modules:
import module name 1
import module name 2
2)
If you only want to import some tools from the module, you can use syntax
from module name import tool name
The advantage is that you can use the tool directly and no longer need the module name
3)
You can use from module name import * to import all the tools of the module without module name
This method is not recommended. If different modules contain tools with the same name, the later import silently overrides the earlier one, and such problems are hard to track down
Note: if the name of the module is too long, we can use as to specify the name of the module for easy use in the code, or the function name introduced from the module is too complex and needs to be used frequently, so we can give it a local name
import module_name as module_alias

# new_name = module_name.function_name
fib = fibo.fib
If the same function in multiple modules is imported, the later imported function will cover the previous imported function
from module1 import func
from module2 import func
# func of module2 overrides func of module1
The __name__ attribute makes a module work both when run directly (for testing) and when imported
The code format of many Python files is as follows:
# Import modules

# Define global variables

# Define classes

# Define functions

# At the bottom of the code:
def main():
    # ...
    pass


# Use the __name__ attribute to decide whether to execute the following code
if __name__ == "__main__":
    main()
package
A package is a special directory containing multiple modules. There is a special file in this directory named __init__.py
Packages are named in the same way as variables, using lowercase letters+_
Advantage: all modules in the package can be imported at one time by using the import package name
__init__.py
To use the modules in the package externally, __init__.py must specify the list of modules provided to the outside world. The syntax is as follows
# Import the module list from the current directory
from . import module_name_1
from . import module_name_2
Release module
If you want to share your developed modules, you can make and publish a compressed package according to the following steps:
1) Create setup.py
2) Build the module
$ python3 setup.py build
3) Generate and publish compressed package
$ python3 setup.py sdist
Module installation
$ tar -zxvf hm_message-1.0.tar.gz
$ sudo python setup.py install
Unload module
To uninstall, simply delete the module's directory from the installation packages directory
$ sudo rm -r hm_message*
Using pip to install the third party module
Third party modules usually refer to Python packages / modules developed by well-known third-party teams and widely used by programmers
pip is a modern and general Python package management tool
file
ASC Ⅱ code and UNICODE
# -*- coding: utf-8 -*-

# The u before the string tells the interpreter that this is a UTF-8 format string
hello_str = u"hello world"

for i in hello_str:
    print(i)
eval function
|
https://www.fatalerrors.org/a/0N982DA.html
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
supported by GraphQL API.
- Client applications(consumers of GraphQL API) can give instructions to GraphQL API about the response data.
Code First vs Schema Approach:
Code First Approach:
In the Code First Approach we use TypeScript classes, applying GraphQL decorators on top of the classes and on top of the properties inside them. These decorators auto-generate the raw GraphQL schema, so in the Code First Approach we don't need to learn or write the GraphQL schema by hand.
Schema First Approach:
GraphQL SDL(schema definition language) is a new syntactic query type language, in this approach which we need to learn new kinds of syntax. Click here to explore NestJS GraphQL Schema First Approach.
Create NestJS Application:
We will walk through with a simple NestJS Application to integrate GraphQL API in Code First Approach.
NestJS CLI Installation Command: npm i -g @nestjs/cli
NestJS Application Creation Command: nest new your-project-name
Install GraphQL Packages:
Let's install the following GraphQL supporting packages for the NestJS application.
1. npm i --save @nestjs/graphql
2. npm i --save graphql-tools
3. npm i --save graphql
4. npm i apollo-server-express
Define Object Type:
In Code First Approach we define Object Types which will generate the GraphQL Schema automatically. Each Object Type we define should represent the application domain object(means table of our database).
Let's define an Object Type class in our application as below.
src/cow.breeds/cow.breeds.ts:
import { ObjectType, Field, Int } from '@nestjs/graphql';

@ObjectType()
export class CowBreed {
  @Field(type => Int)
  id: number;

  @Field()
  name: string;

  @Field(type => Int, { nullable: true })
  maxYield?: number;

  @Field(type => Int, { nullable: true })
  minYield?: number;

  @Field({ nullable: true })
  state?: string;

  @Field({ nullable: true })
  country?: string;

  @Field({ nullable: true })
  Description?: string;
}
- By decorating class with '@ObjectType()' decorator makes class as GraphQL Object Type. Every property to be registered with '@Fied()' decorator.
- This '@Field()' decorator comes with different overloaded options which help to explicitly define property type and whether the property is nullable or not.
- TypeScript types like 'string' and 'boolean' map naturally to GraphQL types, so for those properties we don't need to explicitly define the type in the '@Field()' decorator. Other TypeScript types, like 'number' and complex types, are not understood by the '@Field()' decorator, so in those cases we need to pass the type explicitly using an arrow function like '@Field(type => Int)'.
- Similarly explicitly defining property is nullable by inputting as an option to decorator like '@Field({nullable:true})'.
- GraphQL Schema will be generated based on this ObjectType class automatically which we explore in upcoming steps.
Service Provider:
In general service provider contains logic to communicate with the database and fetches the data to serve. In this sample, we are not going to implement any database communication, but we will maintain some static data in the service provider class as below.
src/cow.breeds/cow.breed.service.ts:
import { CowBreed } from './cow.breed';

export class CowBreedService {
  cowBreeds: CowBreed[] = [{
    id: 1,
    name: "Gir",
    maxYield: 6000,
    minYield: 2000,
    country: "India",
    state: "Gujarat",
    Description: "This breed produces the highest yield of milk amongst all breeds in India. Has been used extensively to make hybrid varieties, in India and in other countries like Brazil"
  }]; // while testing, add more items to the list

  getAllCows() {
    return this.cowBreeds;
  }

  getCowById(id: number) {
    return this.cowBreeds.filter(_ => _.id == id)[0];
  }
}

Now register this newly created service provider in the app.module.ts file.
import { CowBreedService } from './cow.breeds/cow.breed.service';

@Module({
  providers: [CowBreedService],
})
export class AppModule {}
Resolver:
A resolver implements GraphQL operations like fetching and saving data, based on the default GraphQL Object Types: the 'Query' Object Type (methods for filtering and fetching data) and the 'Mutation' Object Type (methods for creating or updating data).
Let's create a resolver class with Query Object Type as below.
src/cow.breeds/cow.breed.resolver.ts:
import { Resolver, Query, Args, Int } from '@nestjs/graphql';
import { CowBreedService } from './cow.breed.service';
import { CowBreed } from './cow.breed';

@Resolver()
export class CowBreedResolver {
  constructor(private cowBreedService: CowBreedService) {}

  @Query(returns => [CowBreed])
  async getAllCows() {
    return await this.cowBreedService.getAllCows();
  }

  @Query(returns => CowBreed)
  async getCowById(@Args('id', { type: () => Int }) id: number) {
    return await this.cowBreedService.getCowById(id);
  }
}

To make a TypeScript class a resolver, decorate it with '@Resolver'. The '@Query()' decorator marks a method as a Query Object Type method; here we decorate the methods which return data. In the Code First Approach, the '@Query()' decorator needs to be passed the return type of the data ([CowBreed] represents a GraphQL array type returning an array of CowBreed). Inside the resolver, the methods' input parameters are decorated with '@Args' to capture the values passed by the client.
Now register resolver entity in providers array in app.module.ts file.
app.module.ts:
import { CowBreedResolver } from './cow.breeds/cow.breed.resolver';

@Module({
  providers: [CowBreedResolver],
})
export class AppModule {}
Import GraphQLModule:
Let's import GraphQLModule into the app.module.ts.
app.module.ts:
import { GraphQLModule } from '@nestjs/graphql';
import { join } from 'path';

@Module({
  imports: [
    GraphQLModule.forRoot({
      autoSchemaFile: (join(process.cwd(), 'src/schema.gql'))
    })
  ],
})
export class AppModule {}

Inside GraphQLModule, pass the schema file path ('src/schema.gql') where the application's auto-generated schema will be stored. So we need to create a schema file 'schema.gql' as below.
Let's run the NestJS application
NestJS Development Run Command: npm run start:dev

Now verify the way the GraphQL Schema is generated automatically as below.
GraphQL UI PlayGround:
GraphQL UI Playground is the page which helps as a tool to query our GraphQL API. This Playground gives Schema information, GraphQL Documentation, Intelligence to type the queries more quickly. GraphQL constructed only one endpoint and supports only Http Post verb, so to access this UI Playground simply use the same endpoint in the browser directly.
Fields:
GraphQL is about asking for specific fields on objects on the server. Only requested fields will be served by the server.
Let's query a few fields as follows.
query{ getAllCows{ name state country } }
- 'query' keyword to identify the request is for fetching or querying data based on 'Query Object Type' at the server.
- 'getAllCows' keyword to identify the definition or method inside of the 'Query Object Type'.
- 'name', 'state', 'country' are fields inside of the 'CowBreed' GraphQL object type we created.
Input Argument To Filter Data:
In GraphQL we can pass arguments to filter the Data. Let's construct the query as below with a filtering argument.
query{ getCowById(id:2){ id, name, state, country, Description } }
Aliases:
While querying the GraphQL API with schema definition names like 'getAllCows' or 'getCowById', the response data is bound to those same names. With aliases we can give the result a more meaningful name:

{
  Cows: getAllCows {
    name
    state,
    country,
    Description
  }
}
Fragments:
Fragments in GraphQL API mean comparison between two or more records. A comparison between 2 or more items is very easy in GraphQL.
query {
  Cows1: getCowById(id: 1) {
    name
    state,
    country,
    Description
  }
  Cows2: getCowById(id: 1) {
    name
    state,
    country,
    Description
  }
  Cows3: getCowById(id: 1) {
    name
  }
}

To create an argument object, we need to construct an object type as follows.
src/cow.breeds/cow.breed.input.ts:
import { Int, Field, InputType } from '@nestjs/graphql'; @InputType() export class CowBreedInput { @Field(type => Int) id: number; @Field() name: string; @Field(type => Int, { nullable: true }) maxYield?: number; @Field(type => Int, { nullable: true }) minYield?: number; @Field({ nullable: true }) state?: string; @Field({ nullable: true }) country?: string; @Field({ nullable: true }) Description?: string; }
In NestJS GraphQL using '@Args()' decorator can capture the input values from the client, but in 'Mutation Object Type' queries involved in the creation or update of data which involves in more number of properties. So to capture the complex object of data using '@Args()' we need to create a class and it should be decorated with '@InputType'.
Hint: The 'ObjectType()' decorator is only used to represent the domain objects of the application (database tables). 'ObjectType()' cannot be used to capture data from the client; to capture data from the client we need to use 'InputType()' object types.

Let's update the service provider logic to save a new item as below.
src/cow.breeds/cow.breed.service.ts:
export class CowBreedService {
  // hidden code for display purposes

  addCow(newItem: any): CowBreed {
    this.cowBreeds.push(newItem);
    return newItem;
  }
}

Now define a 'Mutation Object Type' query in the resolver class as follows.
src/cow.breeds/cow.breed.resolver.ts:
import { CowBreedInput } from './cow.breed.Input';

@Resolver()
export class CowBreedResolver {
  // hidden code for display purposes

  @Mutation(returns => CowBreed)
  async addCow(@Args('newCow') newCow: CowBreedInput) {
    var cow = await this.cowBreedService.addCow(newCow);
    return cow;
  }
}

Let's run the application and check the auto-generated GraphQL Schema for the Mutation Object Type.
Now to post the data from the client to GraphQL API, the client uses a syntax called GraphQL variables, these variables take JSON Object as input data.
mutation($newCow:CowBreedInput!){ addCow(newCow:$newCow){ id, name, maxYield, minYield, state, country, Description } }
{ "newCow": { "id": 4, "name": "Hallikar", "maxYield": null, "minYield": null, "state": "Karnataka", "country": "India", "Description": "Draught breed both used for road and field agricultural operations. Closely related to Amrit Mahal. However, are much thinner and produce low yields of milk." } }
Wrapping Up:
Hopefully, this article will help to understand the GraphQL API integration in the NestJS application using Code First Approach. I love to have your feedback, suggestions, and better techniques in the comment section.
|
https://www.learmoreseekmore.com/2020/05/nestjs-graphql-codefirst.html
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
The master branch has modules for GitHub, BitBucket, and Kijiji as well.
$ java -jar shard-1.2.jar -l
Available modules:
Examples
Given a username and password shard will attempt to authenticate with multiple sites:
To test multiple credentials supply a filename. By default this expects one credential per line in the format
$:
This method takes a Credentials object and returns a boolean indicating a successful login. I recommend using the TwitterModule as an template.
def tryLogin(creds: Credentials): Boolean
Dependencies:
- JSoup is used for HTTP communication and HTML parsing
- spray-json is used for handling json
|
https://amp.kitploit.com/2016/07/shard-command-line-tool-to-detect.html
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
#include <wx/log.h>
This is the default log target for the GUI wxWidgets applications.
Please see Logging Customization for explanation of how to change the default log target.
An object of this class is used by default to show the log messages created by using wxLogMessage(), wxLogError() and other logging functions. It doesn't display the messages logged by them immediately however but accumulates all messages logged during an event handler execution and then shows them all at once when its Flush() method is called during the idle time processing. This has the important advantage of showing only a single dialog to the user even if several messages were logged because of a single error as it often happens (e.g. a low level function could log a message because it failed to open a file resulting in its caller logging another message due to the failure of higher level operation requiring the use of this file). If you need to force the display of all previously logged messages immediately you can use wxLog::FlushActive() to force the dialog display.
Also notice that if an error message is logged when several informative messages had been already logged before, the informative messages are discarded on the assumption that they are not useful – and may be confusing and hence harmful – any more after the error. The warning and error messages are never discarded however and any informational messages logged after the first error one are also kept (as they may contain information about the error recovery). You may override DoLog() method to change this behaviour.
At any rate, it is possible that that several messages were accumulated before this class Flush() method is called. If this is the case, Flush() uses a custom dialog which shows the last message directly and allows the user to view the previously logged ones by expanding the "Details" wxCollapsiblePane inside it. This custom dialog also provides the buttons for copying the log messages to the clipboard and saving them to a file.
However if only a single message is present when Flush() is called, just a wxMessageBox() is used to show it. This has the advantage of being closer to the native behaviour but it doesn't give the user any possibility to copy or save the message (except for the recent Windows versions where
Ctrl-C may be pressed in the message box to copy its contents to the clipboard) so you may want to override DoShowSingleLogMessage() to customize wxLogGui – the dialogs sample shows how to do this.
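A minimal usage sketch (the frame class, handler name and file name are hypothetical; it assumes wxLogGui is the active log target, which is the default for GUI applications):

#include <wx/wx.h>
#include <wx/log.h>

void MyFrame::OnOpenFile(wxCommandEvent& event)
{
    const wxString name = "settings.ini";

    // Nothing is shown immediately: the active wxLogGui target accumulates
    // these messages and shows them during idle time processing.
    wxLogMessage("Loading configuration from '%s'.", name);
    wxLogWarning("Configuration file '%s' is older than expected.", name);
    wxLogError("Failed to parse '%s'.", name);

    // Force the accumulated messages to be shown right away.
    wxLog::FlushActive();
}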
Default constructor.
Presents the accumulated log messages, if any, to the user.
This method is called during the idle time and should show any messages accumulated in wxLogGui::m_aMessages field to the user.
Reimplemented from wxLog.
Returns wxICON_ERROR, wxICON_WARNING or wxICON_INFORMATION depending on the current maximal severity.
This value is suitable to be used in the style parameter of wxMessageBox() function.
Returns the appropriate title for the dialog.
The title is constructed from wxApp::GetAppDisplayName() and the severity string (e.g. "error" or "warning") appropriate for the current wxLogGui::m_bErrors and wxLogGui::m_bWarnings values.
All currently accumulated messages.
This array may be empty if no messages were logged.
The severities of each logged message.
This array is synchronized with wxLogGui::m_aMessages, i.e. the n-th element of this array corresponds to the severity of the n-th message. The possible severity values are
wxLOG_XXX constants, e.g. wxLOG_Error, wxLOG_Warning, wxLOG_Message etc.
The time stamps of each logged message.
The elements of this array are time_t values corresponding to the time when the message was logged.
True if there are any error messages.
True if there are any messages to be shown to the user.
This variable is used instead of simply checking whether wxLogGui::m_aMessages array is empty to allow blocking further calls to Flush() while a log dialog is already being shown, even if the messages array hasn't been emptied yet.
True if there are any warning messages.
If both wxLogGui::m_bErrors and this member are false, there are only informational messages to be shown.
|
https://docs.wxwidgets.org/3.0/classwx_log_gui.html
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
0.33 2020-04-04 - Fix broken DEBE 0.32 2019-08-31 - Added DOLUDOLU back, just in case - Updated with correct link of DEBE 0.31 2019-08-31 - DEBE is back! - Add CoC. 0.30 2017-12-27 - Remove unnecessary dependency 0.29 2017-12-27 - Replace online test with an offline one 0.28 2017-12-25 - Replace smartmatch with any (breaks on 5.27.8) 0.27 2017-07-19 - Minor fix & some boring talk on timezones 0.26 2017-01-22 - Added DOLUDOLU (alternative to DEBE) 0.25 2017-01-20 - Fix failing tests - Use GitHub issue tracker 0.24 2017-01-15 - Fix version number! 0.23 2017-01-15 - Fix failing tests 0.22 2017-01-15 - Fix typo 0.21 2017-01-15 - Use WWW::Lengthen (WWW::Expand has failing tests) 0.20 2017-01-15 - Renamed to WWW::Eksi (WWW::Eksisozluk is now an alias) - Removed DEBE (no more published since 2017-01-13) - Added GHEBE (top entries of last week) - Added politeness delay option - Replaced most regexps with DOM 0.13 2016-09-23 - Added deprecation warning to WWW::Eksisozluk 0.12 2015-02-27 - Updated regexps to match new eksisozluk style - increased default sleep time to 15 - converted tabs to spaces in "changes" - this version is not published to cpan yet 0.11 2015-04-27 - Trying to fix 'decreasing version number' problem 0.10 2015-04-26 - move to dist:zilla - entry->number_in_topic is deprecated (as it is removed by eksisozluk) - entry->date_accessed is deprecated - entry->date_published, is_modified, date_modified are deprecated. - entry->date_print is renamed to entry->date - gifs are no more embedded automatically in entry->body - popular is renamed. now you need to call topiclist with argument popular - list of today's topics is added (call topiclist with argument today) 0.09 2014-11-09 - A semicolon was missing on dependency list 0.08 2014-11-09 - List of popular topics (%popular) 0.07 2014-08-03 - author_link is added - style max-width from body's img is removed 0.06 2014-07-22 - Changed namespace from "Net" to "WWW" as proposed by PrePAN community - get_entry_by_id is renamed as entry - get_current_debe is renamed as debe_ids - debe_ids returns in 0..49, it was 1..50 where [0] was a dummy -1 - Partial list problem is handled at which you get 60 entries. Now you don't. It simply doesn't re-add already added value. - Script now has a object oriented interface. You can call my $eksi = WWW::Eksisozluk->new(); and work from there. 0.05 2014-07-21 - get_entry_by_id($id) - get_current_debe 0.01 2014-07-08 - original version; created by h2xs 1.23 with options - AX Net::Eksisozluk
|
https://metacpan.org/changes/distribution/WWW-Eksi
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
Server-Side Swift with MongoDB: Getting Started
In this Server-Side Swift tutorial you will learn how to setup MongoDB and use MongoKitten to run basic queries, build Aggregate Pipelines and store files with GridFS.
Version
- Swift 5, macOS 10.15, Xcode 11
MongoDB is a document-oriented database server. It does not use the SQL syntax for queries and does not enforce a schema. For this reason, MongoDB classifies as a NoSQL database.
By design, MongoDB resembles how applications store information.
By the end of this tutorial, you’ll know how to set up MongoDB and run basic queries using MongoKitten in Server-Side Swift. You’ll also learn how to use GridFS and Aggregate Pipelines, two powerful features in MongoDB.
Getting Started
Use the Download Materials button at the top or the bottom of this tutorial to download the files you’ll need for this tutorial.
- Xcode 11 and Swift 5.2.
- Docker: If you don’t have Docker yet, visit Docker install for Mac.
- Access to a MongoDB server: You can either set one up on your machine or make use of a cloud service such as MongoDB Atlas.
The project in the starter folder uses the Swift Package Manager. It consists of a Vapor application with a Leaf website.
To begin, double-click Package.swift in the starter folder. Wait as Xcode opens the file and downloads all the project’s dependencies.
Next, expand the Sources/App folder to see the files you’ll modify for this project. Note that the project follows the standard Vapor hierarchy.
Setting Up MongoDB
As noted above, you’ll need to have access to a MongoDB server for this project. Once you do, open Terminal and navigate to the starter project’s directory. From within this directory, execute the following command to set up MongoDB.
docker-compose up
This command reads docker-compose.yaml and uses that configuration file to set up a Replica Set. A Replica Set is a group of servers that maintains the same data set. Each Replica Set member should be a different machine. The setup may take a few minutes.
The members of this replica set are three servers and one arbiter. The arbiter is necessary for the stability of a cluster, should one of the other members go down.
The three servers expose themselves at ports 27017, 27018 and 27019. The default port for MongoDB is 27017.
Connecting to MongoDB
Before creating a connection to a deployment, you need to create a connection string URI. Open another Terminal window,
cd to your project folder and run the following commands:
cd # <Drag the 'starter' folder in here> nano .env # This opens an editor # Add the following line to this file: MONGODB=mongodb://localhost:27017,localhost:27018,localhost:27019/socialbird # save the file by pressing ctrl-o # and exit using ctrl-x
.env. Filenames with a leading dot may not be visible in Finder, but the Terminal command
ls -alists it.
You’ve created a file named
.env to store your environment values and stored your connection string URI in the environment value
MONGODB.
Piece by piece, here’s how you’ve constructed the URI:
- To connect to the local cluster, you used a standard connection string. This format starts with mongodb://.
- After this, you would put the relevant credentials formatted as <username>:<password>@. However, the cluster set up by Docker Compose does not use authentication.
- Next, you added the hosts, separated by commas. All three servers expose themselves on localhost. By supplying all replica set hosts, MongoKitten can take advantage of high availability.
- Finally, you added /socialbird to specify the selected database. A single deployment has many databases, each database serving a single application.

Note: You can also use connection strings that start with mongodb+srv:// and connect to cloud hosted clusters.
Creating a Connection
Now that you’ve created the connection string, it’s time to connect your application.
First, close the project then reopen it by double-clicking Package.swift. Wait for Xcode to resolve all dependencies specified in Package.swift. After that, make sure that the selected scheme is SocialBird and the destination is My Mac.
Next, Option + Click the scheme to edit it. In the Options tab, enable Use custom working directory. Click the folder icon to set this to the project folder of your project. This is necessary for Vapor to find your
.env file and the Leaf templates used by this project.
Now open Sources/App/MongoKitten+Application.swift to see how the project uses MongoKitten with Vapor. The file contains the following code:
import Vapor import MongoKitten // 1 private struct MongoDBStorageKey: StorageKey { typealias Value = MongoDatabase } extension Application { // 2 public var mongoDB: MongoDatabase { get { // Not having MongoDB would be a serious programming error // Without MongoDB, the application does not function // Therefore force unwrapping is used return storage[MongoDBStorageKey.self]! } set { storage[MongoDBStorageKey.self] = newValue } } // 3 public func initializeMongoDB(connectionString: String) throws { self.mongoDB = try MongoDatabase.connect(connectionString, on: self.eventLoopGroup).wait() } } extension Request { // 4 public var mongoDB: MongoDatabase { // 5 return application.mongoDB.hopped(to: eventLoop) } }
The code above:
- Defines a storage key associated with MongoDatabase.
- Adds a getter and setter on a Vapor Application to provide a MongoDB connection.
- Connects to MongoDB and stores the connection in the Application.
- Accesses the application’s MongoDB connection.
- Changes the connection handle to reply on the Request EventLoop. This is a critical step that, if omitted, will crash your application.
Configuring the Application
To finish setting up the connection, add the following code to App.swift above the
return statement.
// 1 guard let connectionString = Environment.get("MONGODB") else { fatalError("No MongoDB connection string is available in .env") } // 2 try app.initializeMongoDB(connectionString: connectionString) // 3 try createTestingUsers(inDatabase: app.mongoDB)
Here’s what this code does:
- Reads the connection string from the created .env file.
- Initializes the connection to MongoDB.
- Creates an initial dataset containing users and posts.
Your connection is now ready! Build and run and you’ll connect to MongoDB. You should see the following console output:
Server starting on
Visit the app in your web browser and you’ll see the login page.
Logging In
This application is already configured to handle password hashing and authorization using JWT. The entire application relies on
Repository, in Repository.swift, for database operations.
Stop the app. Before you can enable logging in, you need to add the code for fetching users. Open User.swift and take note of the
User type.
User has a static property containing the collection name for this model. This prevents you from mistyping the name.
_id holds the model’s identifier. MongoDB requires the use of this key. Unlike most databases, MongoDB does not support auto-incrementing for integers. The type of identifier used is
ObjectId, which is both compact and scalable.
Next, you’ll see that MongoDB allows you to store information in the database exactly like your Swift structs. The
profile and
credentials fields contain grouped information.
Finally, the user stores an array of identifiers in
following. Each identifier refers to another user that this user follows.
Fetching a User
To log into the application, the login route in Routes.swift makes a call to
findUser(byUsername:inDatabase:) in Repository.swift. Within each of the repository’s methods, you have access to the MongoDB database.
First, select the collection containing users, using a subscript. In Repository.swift within
Repository, add the following line to
findUser(byUsername:inDatabase:), before the
return statement:
let usersCollection = database[User.collection]
Here, you’re subscripting the database with a string to access a collection. Also, notice that you don’t need to create collections. MongoDB creates them for you when you need them.
To query the collection for users, you’ll use a
MongoCollection find operation. And because the repository is looking for a specific user, use
findOne(_:as:).
Find operations can accept an argument for the filters that the result has to match. In MongoKitten, you can represent the query as a Document or as a MongoKittenQuery. MongoKittenQuery is more readable but does not support all features.
To create a document query, replace the
return statement below the
usersCollection line with the following code:
return usersCollection.findOne( "credentials.username" == username, as: User.self )
To query the username, you first refer to the value by the whole path separated by a dot (
. ). Next, you use this keypath with the
== operator to create an equality filter. Finally, you provide the value to compare to.
If a document matches the provided filter, MongoKitten will attempt to decode it as a
User. After this, you return the resulting
User.
To check that it works, build and run the application again.
You already have an example user from the first time the application launched. The username is me and the password is opensesame.
Visit the app in your browser and log in using the credentials above.
If you see the following error, you’re logged in!
{"error":true,"reason":"unimplemented"}
Stop the app.
Now that you can log in, it’s time to to generate the feed for the current user.
To do so, find the users that this user is following. For this use case, you’ll use the
Repository method
findUser(byId:inDatabase:). The implementation is similar to what you did in
findUser(byUsername:inDatabase:), so give this a try first.
How’d you do? Does your
findUser(byId:inDatabase:) read like this?
let users = database[User.collection] // 1 return users .findOne("_id" == userId, as: User.self) // 2 .flatMapThrowing { user in // 3 guard let user = user else { throw Abort(.notFound) } return user }
In the code above, you:
- Get the users collection.
- Find the user with the given id.
- Unwrap the user or throw an error if nil.
To build up the feed, you add this list of user identifiers. Next, you add the user’s own identifier so that users see their own posts.
Replace the
return statement in
getFeed(forUser:inDatabase:) with the following:
return findUser(byId: userId, inDatabase: database) .flatMap { user in // Users you're following and yourself var feedUserIds = user.following feedUserIds.append(userId) let followingUsersQuery: Document = [ "creator": [ "$in": feedUserIds ] ] // More code coming. Ignore error message about return. }
The $in filter tests if the creator field exists in
feedUserIds.
With a find query, you can retrieve a list of all posts. Because most users are only interested in recent posts, you need to set a limit. A simple find would be perfect for returning an array of
TimelinePost objects. But in this function, you need an array of
ResolvedTimelinePost objects.
The difference between both models is the
creator key. The entire user model is present in
ResolvedTimelinePost. Leaf uses this information to present the author of the post.
A lookup aggregate stage is a perfect fit for this.
Creating Aggregate Pipelines
An aggregate pipeline is one of the best features of MongoDB. It works like a Unix pipe, where each operation’s output becomes the input of the next. The entire collection functions as the initial dataset.
To create an aggregate query, add the following code to
getFeed(forUser:inDatabase:) under the comment
More code coming.:
return database[TimelinePost.collection].buildAggregate { // 1 match(followingUsersQuery) // 2 sort([ "creationDate": .descending ]) // 3 limit(10) // 4 lookup( from: User.collection, localField: "creator", foreignField: "_id", as: "creator" ) // 5 unwind(fieldPath: "$creator") // 6 } .decode(ResolvedTimelinePost.self) // 7 .allResults() // 8
Here’s what you’re doing:
- First, you create an aggregate pipeline based on function builders.
- Then, you filter the timeline posts to match followingUsersQuery.
- Next, you sort the timeline posts by creation date, so that recent posts are on top.
- And you limit the results to the first 10 posts, leaving the 10 most recent posts.
- Now, you look up the creators of this post. localField refers to the field inside TimelinePost, and foreignField refers to the field inside User. This operation returns all users that match this filter. Finally, this puts an array with the results inside creator.
- Next, you limit creator to one user. As an array, creator can contain zero, one or many users. But for this project, it must always contain exactly one user. To accomplish this, unwind(fieldPath:) outputs one timeline post for each value in creator and then replaces the contents of creator with a single entity.
- You decode each result as a ResolvedTimelinePost.
- Finally, you execute the query and return all results.
The homepage will not render without suggesting users to follow. To verify that the feed works, replace the
return statement of
findSuggestedUsers(forUser:inDatabase:) with the following:
return database.eventLoop.makeSucceededFuture([])
Build and run, load the website, and you’ll see the first feed!
Suggesting Users
The homepage only shows the posts by Ray Wenderlich and yourself. That happens because you’re only following Ray Wenderlich. To discover other users, you’ll create another pipeline in
findSuggestedUsers(forUser:inDatabase:).
This pipeline will fill the sidebar with users you’re not following yet. To do this, you’ll create a filter that only shows people you’re not following. You’ll use the same filter as above, but reversed. Replace the
return statement in
findSuggestedUsers(forUser:inDatabase:) with the following:
let users = database[User.collection] // 1 return findUser(byId: userId, inDatabase: database).flatMap { user in // 2 // 3 var feedUserIds = user.following feedUserIds.append(userId) // 4 let otherUsersQuery: Document = [ "_id": [ "$nin": feedUserIds ] ] // Continue writing here }
In the code above, you:
- Get the users collection.
- Find the user by userId.
- List the user identifiers whose profiles are not shown.
- Construct a filter that looks for users where their identifier is Not IN the array.
If you use a simple filter, users will always see the same suggestions. But you’ll do better than that! Instead of a simple filter, you’ll suggest users based on the people you’re following, using a Graph Lookup. This works in the same way as a lookup stage, but recursively.
While a graph lookup provides more relevant results, it doesn’t show suggestions when a user doesn’t follow anyone. Therefore, use a sample stage instead.
sample(_:) creates a random set of entities from the input results.
Add this code below the
Continue writing here comment:
return users.buildAggregate { match(otherUsersQuery) // 1 sample(5) // 2 sort([ "profile.firstName": .ascending, "profile.lastName": .ascending ]) // 3 }.decode(User.self).allResults() // 4
In this code, you:
- Find all users excluding yourself and those you’re following.
- Select up to five random users.
- Sort them by firstName, then lastName.
- Decode the results as a User and return all results.
Build and run the application, reload the page and ta-da! Your sidebar now contains users that you can follow.
Following Users
To make the follow button work, replace the
return statement in
return database[User.collection].updateOne( // 1 where: "_id" == follower._id, // 2 to: [ "$push": [ "following": account._id ] ] // 3 ).map { _ in }
In this code, you’re:
- Getting the users collection.
- Specifying which user to update.
- Updating the user.
Creating Posts
Now that you’re able to see posts, it’s time to create some of your own!
To create a post, you’ll change
createPost. In MongoDB, the
insert operation creates new entities in a collection. MongoKitten even provides a helper for encodable types such as
TimelinePost.
Replace the
return statement in
createPost(_:inDatabase:) with the following, to add the post to its collection:
return database[TimelinePost.collection].insertEncoded(post).map { _ in }
This line will insert the new, encoded post into the timelineposts collection.
Create a post and refresh the website. Your new post will be on the top! But you’re not done yet.
As you might have noticed, users can upload images as part of their posts. Any service should store the files on the disk. However, this can be a complex task if you want to do it right. You’ll need to take care of access control and high availability, to name a few.
MongoDB already supports high availability and replication. And the code that calls MongoDB already makes access checks. To take advantage of this, you’ll use GridFS. GridFS is a standard that can store small files into MongoDB. Preferably you’ll keep the files as small as possible. It’s great for storing images or PDF invoices.
To use GridFS, you’ll need to create a
GridFSBucket. You can then upload or download files from this bucket. To allow uploading files, replace the
return statement in
uploadFile(_:inDatabase:) with the following:
let id = ObjectId() // 1 let gridFS = GridFSBucket(in: database) // 2 return gridFS.upload(file, id: id).map { // 3 return id // 4 }
In the code above, you:
- Generate the file’s identifier.
- Open the GridFS bucket in the selected database.
- Upload the user’s file to GridFS.
- Return the generated identifier.
Reading Uploaded Files
To share the file with users, the repository uses
readFile(byId:inDatabase:). Like the example above, you’ll access file storage through a GridFS bucket. But instead of uploading a file, you’ll read the contents instead.
First, you must fetch the file from the GridFSBucket. The fetched GridFSFile does not contain the contents of the file data. Instead, it contains all metadata related to this file. This includes the filename, size and any custom data.
After fetching the GridFSFile, read the file’s contents using the file’s reader. Replace the
return statement in
readFile(byId:inDatabase:) with the following:
let gridFS = GridFSBucket(in: database) // 1 return gridFS.findFile(byId: id) // 2 .flatMap { file in guard let file = file else { // 3 return database.eventLoop.makeFailedFuture(Abort(.notFound)) } return file.reader.readData() // 4 }
In the code above, you:
- Get the Bucket
- Find file by Id
- Unwrap file, else throw an error
- Read data and return
Build and restart the application. Then, create a new post and don’t forget to attach an image. You’ll see your new post in the timeline, including the image.
Congratulations! You’ve completed your first app with MongoDB and Server-Side Swift.
Where to Go From Here?
You can download the completed project files by clicking the Download Materials button at the top or bottom of the tutorial.
MongoDB has a lot of powerful features to offer. If you’re looking to expand upon those, I recommend reading the official documentation at mongodb.com.
If you want to learn more about developing web services in Server-Side Swift, our Server-Side Swift With Vapor book is a good starting place.
MongoDB is a powerful database. Are you excited to explore it more? Feel free to share your thoughts on MongoDB and MongoKitten in the discussion below!
|
https://www.raywenderlich.com/10521463-server-side-swift-with-mongodb-getting-started
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
When a React component is created, a number of functions are called:
React.createClass(ES5), 5 user defined functions are called
class Component extends React.Component(ES6), 3 user defined functions are called
getDefaultProps()(ES5 only)
This is the first method called.
Prop values returned by this function will be used as defaults if they are not defined when the component is instantiated.
In the following example,
this.props.name will be defaulted to
Bob if not specified otherwise:
getDefaultProps() { return { initialCount: 0, name: 'Bob' }; }
getInitialState()(ES5 only)
This is the second method called.
The return value of
getInitialState() defines the initial state of the React component.
The React framework will call this function and assign the return value to
this.state.
In the following example,
this.state.count will be intialized with the value of
this.props.initialCount:
getInitialState() { return { count : this.props.initialCount }; }
componentWillMount()(ES5 and ES6)
This is the third method called.
This function can be used to make final changes to the component before it will be added to the DOM.
componentWillMount() { ... }
render()(ES5 and ES6)
This is the fourth method called.
The
render() function should be a pure function of the component's state and props. It returns a single element which represents the component during the rendering process and should either be a representation of a native DOM component (e.g.
<p />) or a composite component. If nothing should be rendered, it can return
null or
undefined.
This function will be recalled after any change to the component's props or state.
render() { return ( <div> Hello, {this.props.name}! </div> ); }
componentDidMount()(ES5 and ES6)
This is the fifth method called.
The component has been mounted and you are now able to access the component's DOM nodes, e.g. via
refs.
This method should be used for:
componentDidMount() { ... }
If the component is defined using ES6 class syntax, the functions
getDefaultProps() and
getInitialState() cannot be used.
Instead, we declare our
defaultProps as a static property on the class, and declare the state shape and initial state in the constructor of our class. These are both set on the instance of the class at construction time, before any other React lifecycle function is called.
The following example demonstrates this alternative approach:
class MyReactClass extends React.Component { constructor(props){ super(props); this.state = { count: this.props.initialCount }; } upCount() { this.setState((prevState) => ({ count: prevState.count + 1 })); } render() { return ( <div> Hello, {this.props.name}!<br /> You clicked the button {this.state.count} times.<br /> <button onClick={this.upCount}>Click here!</button> </div> ); } } MyReactClass.defaultProps = { name: 'Bob', initialCount: 0 };
getDefaultProps()
Default values for the component props are specified by setting the
defaultProps property of the class:
MyReactClass.defaultProps = { name: 'Bob', initialCount: 0 };
getInitialState()
The idiomatic way to set up the initial state of the component is to set
this.state in the constructor:
constructor(props){ super(props); this.state = { count: this.props.initialCount }; }
|
https://riptutorial.com/reactjs/example/9240/component-creation
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
Introduction:
I am a complete beginner, trying to write a blog for the first time, so I hope you can bear with me
If there are errors, I hope the experts will point them out; I will correct them in time. Thank you.
At your best age, find future goals
There is still one year to struggle, next year to interview internship, go on!
As usual, first look at the title requirements:
Enter the root node of a binary tree to determine if it is a balanced binary tree.
If the left and right subtrees of any node in a binary tree have no more than 1 depth difference,
So it's a balanced binary tree.
Example:
Given a binary tree [3,9,20,null,null,15,7,null,null]
Answer: true
Looking at the title, my first reaction was:
According to the problem, to judge an AVL tree you must ensure that for each node, the depth of its left subtree differs from the depth of its right subtree by no more than 1.
So the first idea I came up with was to write a function to compute depth (depthdiff): traverse each node from top to bottom, and at each node pass its left child to depthdiff to compute its depth, do the same for the right child, and then compare the two.
This is more complex, and the time complexity is O(nlogn).
The detailed code is commented out as follows:
#include <iostream>
#include <queue>
#include <cmath>
using namespace std;

struct TreeNode {
    int val;
    TreeNode *left;
    TreeNode *right;
    TreeNode(int x) : val(x), left(nullptr), right(nullptr) {}
};

class Solution {
    // This is a top-down strategy.
    // Careful analysis shows that by the time the root node has been judged, the answers for the
    // lower nodes are in fact already known, so the efficiency is too low and there are too many
    // repeated operations -- hence the later improvement.
public:
    bool isBalanced(TreeNode* root) {
        if (root != nullptr) {  // Just follow the problem statement here
            bool judge = false;
            int left = depthdiff(root->left);
            int right = depthdiff(root->right);
            if (abs(left - right) >= 0 && abs(left - right) <= 1) {
                judge = true;
                bool judge1 = isBalanced(root->left);
                bool judge2 = isBalanced(root->right);
                // The current node must qualify, and both the left and right subtrees must be true
                if (judge == true && judge1 == true && judge2 == true) {
                    return true;  // Anything else fails
                }
            }
            return false;  // If the current node does not qualify, false must be returned
        } else {
            // If the tree itself is empty, there is no need to judge
            return true;
        }
    }

    int depthdiff(TreeNode* root) {
        if (root == nullptr) {  // When the incoming node is empty
            return 0;           // There are no layers, return 0
        } else {
            queue<TreeNode*> tar;
            tar.push(root);      // First push the root node
            int cnt = 1;         // The current number of layers
            int cntTemp = 1;     // How many nodes there are in the current layer
            while (!tar.empty()) {
                int cnt0 = 0;    // Record how many nodes are in the next layer, to prepare for the next big cycle
                // The previous layer has cntTemp nodes, so this loop runs cntTemp times
                for (int i = 1; i <= cntTemp; i++) {
                    // Each time, take out the front node, check whether it has left or right children, and enqueue them
                    if (tar.front()->left != nullptr) {
                        cnt0++;
                        tar.push(tar.front()->left);
                    }
                    if (tar.front()->right != nullptr) {
                        cnt0++;
                        tar.push(tar.front()->right);
                    }
                    tar.pop();   // Discard this node when it has been processed
                }
                cntTemp = cnt0;  // After the layer is finished, update how many nodes the next layer has
                if (!tar.empty()) {
                    // Think about the extreme case: if there are no children, the number of layers cannot increase
                    cnt++;
                }
            }
            return cnt;
        }
    }
};
(All code is running correctly on the power button)
The problem with method one is obvious:
It uses a top-down strategy.
As you can see by drawing with pen and paper, this method is very inefficient:
by the time I have judged the root node, I have already computed all the nodes below it, and then I compute them all over again.
What's the most annoying thing for us programmers? Isn't it exactly repeating meaningless work?
We're all bored. The boss and the interviewer are even more bored!
Let's imagine a job, a job doesn't have to be done from start to finish. You can have more than one division of work
When you calculate one more step, you only need to know the results of the predecessors.
So the optimization scheme comes
The second optimization scheme --- memoization (roughly written, experts please bear with me):
Before going further, the reader can simply draw a tree. It is not difficult to see that the depth of a tree is the maximum of the depths of the root's left and right subtrees, plus 1
With this formula we'll be fine
Because it is a bottom-up strategy, we have to reach the bottom first before we start, so we use a post-order traversal
Because it is equivalent to traversing from bottom to top, the time complexity is O(N).
The code is as follows:
#include <iostream>
#include <queue>
#include <cmath>
using namespace std;

struct TreeNode {
    int val;
    TreeNode *left;
    TreeNode *right;
    TreeNode(int x) : val(x), left(nullptr), right(nullptr) {}
};

class Solution {
public:
    bool isBalanced(TreeNode* root) {
        // Since all depths are numbers >= 0, we stipulate that a negative value marks an
        // unbalanced subtree; a balanced subtree returns its number of layers (>= 0)
        return (depthdiff(root) != -1);
    }

    int depthdiff(TreeNode* root) {
        if (root == nullptr) {
            // The boundary must be taken into account: once empty, we are below a leaf node
            return 0;  // So there are no layers, return 0
        }
        int left = depthdiff(root->left);    // Post-order: left subtree first
        int right = depthdiff(root->right);  // Then the right subtree
        if (left == -1 || right == -1) {
            // Prune: once there is an unqualified node, judging the other nodes is meaningless,
            // so quickly return -1 and exit through the fast path
            return -1;
        } else {
            if (abs(left - right) >= 0 && abs(left - right) <= 1) {
                // Judge as required
                return max(left, right) + 1;  // Compute the depth with the formula
            } else {
                // This node does not qualify, prune immediately and take the fast exit
                return -1;
            }
        }
    }
};
(All code is running correctly on the power button)
Summary:
1. In the first method, the key is the operation on the queue. The queue is a very flexible structure: there is no need to follow the traditional level-order pattern of dequeuing one node at a time and then enqueuing its two children, which makes it hard to count layers. Instead, we can process one whole layer at a time, and after each layer check whether the queue is empty; if it is not empty, increment the layer counter. So be flexible with queue operations and work one layer at a time.
2. In the second optimization method, the idea is very common and important, for example, the optimization of Fibonacci series is a bottom-up strategy. This strategy is very fast and convenient, I do not need to do a lot of repeated operations, the latter uses the data of the former!
When you find that your program is doing a lot of repetitive work over and over again, you have to think about using memory algorithms to free up the workforce faster
When you find that your program has too many things to consider from top to bottom, you may want to turn it upside down, traverse through the roots to find the lowest point, then move up one by one, and you will find that the latter uses the results of the former, and the number of overall situations decreases.
3. When a certain condition is found to be triggered which causes a large number of nodes of this tree to be useless, a quick exit channel can be opened to achieve the effect of pruning.
4 When writing program design algorithms, we must focus on the boundary, consider the boundary, consider the boundary (important things say three times), and design a good boundary handling method
|
https://www.fatalerrors.org/a/vibrant-brush-force-buckle.html
|
CC-MAIN-2021-17
|
en
|
refinedweb
|
Connector/C++ 8.0 implements the X DevAPI, as described in the X DevAPI User Guide. The X DevAPI allows one to work with MySQL Servers implementing a document store via the X Plugin. One can also execute plain SQL queries using this API.
To get started, check out some of the main X DevAPI classes:
- To access the database, first create a Session object. Keep in mind that Session is not thread safe!
- To work with documents or rows, get a Collection or a Table object using methods getCollection() or getTable() of a Schema object obtained from the session.
- Database operations are methods of the Collection or Table class, such as find(). They are executed with method execute().
- Results of operations are represented by DocResult or RowResult instances returned from the execute() method. Method fetchOne() fetches the next item (a document or a row) from the result until there are no more items left. Method fetchAll() can fetch all items at once and store them in an STL container.
- Documents and rows are represented by DbDoc and Row instances, respectively.
A more complete example of code that accesses the MySQL Document Store using the X DevAPI is presented below. See also the list of X DevAPI classes.
The following Connector/C++ application connects to a MySQL Server with X Plugin, creates a document collection, adds a few documents to it, queries the collection and displays the result. The sample code can be found in file testapp/devapi_test.cc in the source distribution of Connector/C++ 8.0. See Using Connector/C++ 8.0 for instructions on how to build the sample code.
Code which uses X DevAPI should include the mysql_devapi.h header. The API is declared within the mysqlx namespace:
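For example (a minimal sketch following the description above):

#include <mysql_devapi.h>

using namespace ::mysqlx;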
To create a Session object, specify the DNS name of a MySQL Server, the port on which the plugin listens (default port is 33060) and user credentials:
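A sketch of creating a session from a host, port and credentials (the values here are placeholders):

Session sess("localhost", 33060, "user", "password");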
Another way of specifying session parameters is by means of a mysqlx connection string like "mysqlx://mike:s3cr3t!@localhost:13009". Once created, the session is ready to be used. If the session can not be established, the Session constructor throws an error.
To manipulate documents in a collection, create a Collection object, first asking the session for that collection's Schema:
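A sketch, assuming a schema named "test" and a collection named "c1":

Schema db = sess.getSchema("test");
Collection coll = db.createCollection("c1", true);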
The true parameter to the createCollection() method specifies that the collection should be re-used if it already exists. Without this parameter an attempt to create an already existing collection produces an error. It is also possible to create a Collection object directly, without creating the Schema instance:
Before adding documents to the collection, all the existing documents are removed first using the
Collection::remove() method (expression "true" selects all documents in the collection):
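A sketch of that call, continuing with the coll object created above:

// The expression "true" matches every document, so this empties the collection.
coll.remove("true").execute();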
Note that the remove() method returns an operation that must be explicitly executed to take effect. When executed, the operation returns a result (ignored here; the results are used later).
To insert documents use the
Collection::add() method. Documents are described by JSON strings using the same syntax as MySQL Server. Note that double quotes are required around field names and they must be escaped inside C strings, unless the new C++11
R"(...)" string literal syntax is used as in the example below:
The result of the add() operation is stored in the add variable so that the identifiers of the documents that were added can be read. These identifiers are generated by the connector, unless an added document contains an "_id" field which specifies its identifier. Note how an internal code block is used to delete the result when it is no longer needed.
Several documents can be added in one go by chaining add() calls as follows: coll.add(doc1).add(doc2)...add(docN).execute(). It is also possible to pass several documents to a single add() call: coll.add(doc1, ..., docN).execute(). Another option is to pass to Collection::add() an STL container with several documents.
To query documents of a collection use the
Collection::find() method:
The result of the
find() operation is stored in a variable of type
DocResult which gives access to the returned documents that satisfy the selection criteria. These documents can be fetched one by one using the
DocResult::fetchOne() method, until it returns a null document that signals end of the sequence:
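A sketch of such a query loop (the search condition and field names are placeholder assumptions):

// Find documents whose "name" field matches a pattern, then fetch them one by one.
DocResult docs = coll.find("name like :pattern").bind("pattern", "ba%").execute();
DbDoc doc = docs.fetchOne();
while (doc)
{
  std::cout << doc << std::endl;
  doc = docs.fetchOne();
}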
Given a
DbDoc object it is possible to iterate over its fields as follows:
Note how the
operator[] is used to access values of document fields:
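Continuing with the doc object from the loop above, a sketch of field iteration and operator[] access (the field names are placeholders):

// List every field of the document, then convert two fields to C++ types.
for (Field fld : doc)
{
  std::cout << fld << ": " << doc[fld] << std::endl;
}
string name = doc["name"];   // converted to a string
int age = doc["age"];        // converted to an int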
The value of a field is automatically converted to a corresponding C++ type. If the C++ type does not match the type of the field value, a conversion error is thrown.
Fields which are sub-documents can be converted to the
DbDoc type. The following code demonstrates how to process a
"date" field which is a sub-document. Note how methods
DbDoc::hasField() and
DbDoc::fieldType() are used to examine existence and type of a field within a document.
In case of arrays, currently no conversion to C++ types is defined. However, individual elements of an array value can be accessed using
operator[] or they can be iterated using range for loop.
Any errors thrown by Connector/C++ derive from the
mysqlx::Error type and can be processed as follows:
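A sketch of that error handling, wrapping whatever X DevAPI calls the application makes:

try
{
  // ... code using the X DevAPI ...
}
catch (const mysqlx::Error &err)
{
  std::cout << "ERROR: " << err << std::endl;
}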
The complete code of the example is presented below:
A sample output produced by this code:
|
https://dev.mysql.com/doc/dev/connector-cpp/8.0/devapi_ref.html
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
WebEngine Widgets Maps Example
Demonstrates how to handle geolocation requests.
#include <QMainWindow> #include <QWebEngineView> class MainWindow : public QMainWindow { Q_OBJECT public: explicit MainWindow(QWidget *parent = nullptr); private: QWebEngineView *m_view; };
In the constructor we first set up the QWebEngineView as the central widget:
MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent) , m_view(new QWebEngineView(this)) { setCentralWidget(m_view);
We then proceed to connect a lambda function to the QWebEnginePage::featurePermissionRequested signal:
QWebEnginePage *page = m_view->page(); connect(page, &QWebEnginePage::featurePermissionRequested, [this, page](const QUrl &securityOrigin, QWebEnginePage::Feature feature) {
This signal is emitted whenever a web page requests to make use of a certain feature or device, including not only location services but also audio capture devices or mouse locking, for example. In this example we only handle requests for location services:
if (feature != QWebEnginePage::Geolocation) return;
Now comes the part where we actually ask the user for permission:
QMessageBox msgBox(this); msgBox.setText(tr("%1 wants to know your location").arg(securityOrigin.host())); msgBox.setInformativeText(tr("Do you want to send your current location to this website?")); msgBox.setStandardButtons(QMessageBox::Yes | QMessageBox::No); msgBox.setDefaultButton(QMessageBox::Yes); if (msgBox.exec() == QMessageBox::Yes) { page->setFeaturePermission( securityOrigin, feature, QWebEnginePage::PermissionGrantedByUser); } else { page->setFeaturePermission( securityOrigin, feature, QWebEnginePage::PermissionDeniedByUser); } });
Note that the question includes the host component of the web site's URI (
securityOrigin) to inform the user as to exactly which web site will be receiving their location data.
We use the QWebEnginePage::setFeaturePermission method to communicate the user's answer back to the web page.
Finally we ask the QWebEnginePage to load the web page that might want to use location services:
page->load(QUrl(QStringLiteral(""))); }
Example project @ code.qt.io
See also Qt WebEngine HTML5 Geolocation and Qt.
|
https://doc.qt.io/qt-5/qtwebengine-webenginewidgets-maps-example.html
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Given the scalability of Amazon QuickSight to hundreds and thousands of users, a common use case is to monitor QuickSight group and user activities, analyze the utilization of dashboards, and identify usage patterns of an individual user and dashboard. With timely access to interactive usage metrics, business intelligence (BI) administrators and data team leads can efficiently plan for stakeholder engagement and dashboard improvements. For example, you can remove inactive authors to reduce license cost, as well as analyze dashboard popularity to understand user acceptance and stickiness.
This post demonstrates how to build an administrative console dashboard and serverless data pipeline. We combine QuickSight APIs with AWS CloudTrail logs to create the datasets to collect comprehensive information of user behavior and QuickSight asset usage patterns.
This post provides a detailed workflow that covers the data pipeline, sample Python code, and a sample dashboard of this administrative console. With the guidance of this post, you can configure this administrative console in your own environment.
Let’s look at Forwood Safety, an innovative, values-driven company with a laser focus on fatality prevention. An early adopter of QuickSight, they have collaborated with AWS to deploy this solution to collect BI application usage insights.
“Our engineers love this admin console solution,” says Faye Crompton, Leader of Analytics and Benchmarking at Forwood. “It helps us to understand how users analyze critical control learnings by helping us to quickly identify the most frequently visited dashboards in Forwood’s self-service analytics and reporting tool, FAST.”
Solution overview
The following diagram illustrates the workflow of the solution.
The workflow involves the following steps:
- The AWS Lambda function Data_Prepare is scheduled to run hourly. This function calls QuickSight APIs to get QuickSight namespace, group, user, and assets access permissions information and saves the results to an Amazon Simple Storage Service (Amazon S3) bucket.
- CloudTrail logs are stored in S3 bucket.
- Based on the files in Amazon S3 that contain the user-group information and the QuickSight assets access permissions information, as well as the view-dashboard and user-login events in the CloudTrail logs, three Amazon Athena tables and several views are created. Optionally, the BI engineer can combine these two tables with employee information tables to display human resources information about the users.
- Two QuickSight datasets fetch the data in the Athena tables created in Step 3 through SPICE mode. Then, based on these datasets, a QuickSight dashboard is created.
Prerequisites
For this walkthrough, you should have the following prerequisites:
- An AWS account
- Access to the following AWS services:
- Amazon QuickSight
- Amazon Athena
- AWS Lambda
- Amazon S3
- Basic knowledge of Python
- Optionally, Security Assertion Markup Language 2.0 (SAML 2.0) or OpenID Connect (OIDC) single sign-on (SSO) configured for QuickSight access
Creating resources
Create your resources by launching the following AWS CloudFormation stack:
After the stack creation is successful, you have one Amazon CloudWatch Events rule, one Lambda function, one S3 bucket, and the corresponding AWS Identity and Access Management (IAM) policies.
To create the resources in a Region other than
us-east-1, download the Lambda function.
Creating Athena tables
The
Data_Prepare Lambda function is scheduled to run hourly with the CloudWatch Events rule
admin-console-every-hour. This function calls the QuickSight APIs
list_namespaces,
list_users,
list_user_groups,
list_dashboards,
list_datasets,
list_datasources,
list_analyses,
list_themes,
describe_data_set_permissions,
describe_dashboard_permissions,
describe_data_source_permissions,
describe_analysis_permissions, and
describe_theme_permissions to get QuickSight users and assets access permissions information. Finally, this function creates two files,
group_membership.csv and
object_access.csv, and saves these files to an S3 bucket.
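The exact function code is provided by the CloudFormation stack; as a rough sketch of the kind of calls involved (the account ID, namespace, bucket, and object key below are placeholder assumptions, and pagination is omitted), the group-membership part might look like this:

import csv
import io
import boto3

quicksight = boto3.client('quicksight')
s3 = boto3.client('s3')

def build_group_membership(account_id, namespace, bucket):
    rows = []
    # List every QuickSight user, then the groups each user belongs to.
    users = quicksight.list_users(AwsAccountId=account_id, Namespace=namespace)['UserList']
    for user in users:
        groups = quicksight.list_user_groups(
            UserName=user['UserName'], AwsAccountId=account_id, Namespace=namespace)['GroupList']
        for group in groups:
            rows.append([namespace, group['GroupName'], user['UserName'], user.get('Email', '')])
    # Write the result as CSV to the S3 bucket that backs the Athena table.
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    s3.put_object(Bucket=bucket, Key='group_membership/group_membership.csv', Body=buf.getvalue())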
Run the following SQL query to create two Athena tables (
group_membership and
object_access):
The following screenshot is sample data of the
group_membership table.
The following screenshot is sample data of the
object_access table.
For instructions on building an Athena table with CloudTrail events, see Amazon QuickSight Now Supports Audit Logging with AWS CloudTrail. For this post, we create the table
cloudtrail_logs in the default database.
Creating views in Athena
Now we have the tables ready in Athena and can run SQL queries against them to generate some views to analyze the usage metrics of dashboards and users.
Create a view of a user’s role status with the following code:
Create a view of
GetDashboard events that happened in the last 3 months with the following code:
In the preceding query, the conditions defined in the where clause only fetch the records of
GetDashboard events of QuickSight.
How can we design queries to fetch records of other events? We can review the CloudTrail logs to look for the information. For example, let’s look at the sample
GetDashboard CloudTrail event:
With eventSource="quicksight.amazonaws.com" and eventName="GetDashboard", we can get all the view QuickSight dashboard events.
Similarly, we can define the condition as
eventname = ‘
AssumeRoleWithSAML‘ to fetch the user login events. (This solution assumes that the users log in to their QuickSight account with identity federation through SAML.) For more information about querying CloudTrail logs to monitor other interesting user behaviors, see Using administrative dashboards for a centralized view of Amazon QuickSight objects.
Furthermore, we can join with employee information tables to get a QuickSight user’s human resources information.
Finally, we can generate a view called
admin_console with QuickSight group and user information, assets information, CloudTrail logs, and, optionally, employee information. The following screenshot shows an example preview.
Creating datasets
With the Athena views ready, we can build some QuickSight datasets. We can load the view called
admin_console to build a SPICE dataset called
admin_console and schedule this dataset to be refreshed hourly. Optionally, you can create a similar dataset called
admin_console_login_events with the Athena table based on
eventname = ‘
AssumeRoleWithSAML‘ to analyze QuickSight users log in events. According to the usage metrics requirement in your organization, you can create other datasets to serve the different requests.
Building dashboards
Now we can build a QuickSight dashboard as the administrative console to analyze usage metrics. The following steps are based on the dataset
admin_console. The schema of the optional dataset
admin_console_login_events is the same as
admin_console. You can apply the same logic to create the calculated fields to analyze user login activities.
- Create parameters.
For example, we can create a parameter called
InActivityMonths, as in the following screenshot.
Similarly, we can create other parameters such as
InActivityDays,
Start Date, and
End Date.
- Create controls based on the parameters.
- Create calculated fields.
For instance, we can create a calculated field to detect the active or inactive status of QuickSight authors. If the time span between the latest view dashboard activity and now is larger or equal to the number defined in the
Inactivity Months control, the author status is
Inactive. The following screenshot shows the relevant code.
According to end user’s requirement, we can define several calculated fields to perform the analysis.
- Create visuals.
For example, we create an insight to display the top three dashboards viewed by readers and a visual to display the authors of these dashboards.
- We can add URL action to define some extra features to email inactive authors or check details of users.
The following sample code defines the action to email inactive authors:
The following screenshots show an example dashboard that you can make using our data.
The following is the administrative console landing page. We provide the overview, terminology explanation and thumbnails of the other two tabs in this page.
The following screenshots show the User Analysis tab.
The following screenshots show the Dashboards Analysis tab.
You can interactively play with the sample dashboard in the following Interactive Dashboard Demo.
You can reference the public template of the preceding dashboard in the create-template, create-analysis, and create-dashboard API calls to create this dashboard and analysis in your account. The public template of this dashboard has the template ARN 'TemplateArn': 'arn:aws:quicksight:us-east-1:889399602426:template/admin-console'.
Additional usage metrics
Additionally, we can perform some more complicated analysis to collect advanced usage metrics. For example, Forwood Safety raised a unique request to analyze the readers who log in but do not view any dashboards (see the following code). This helps their clients identify and prevent any waste of reader session fees. Leadership teams value the ability to minimize uneconomical user activity.
Cleaning up
To avoid incurring future charges, delete the resources you created with the CloudFormation template.
Conclusion
This post discussed how BI administrators can use QuickSight, CloudTrail, and other AWS services to create a centralized view to analyze QuickSight usage metrics. We also presented a serverless data pipeline to support the administrative console dashboard.
You can request a demo of this administrative console to try for yourself.
About the Authors
Ying Wang is a Data Visualization Engineer with the Data & Analytics Global Specialty Practice in AWS Professional Services.
Jill Florant manages Customer Success for the Amazon QuickSight Service team
|
https://awsfeed.com/whats-new/big-data/building-an-administrative-console-in-amazon-quicksight-to-analyze-usage-metrics
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Answering Questions with Transformers
In a previous post, we showed how we could do text summarization with transformers. Here, we will provide you an example, of how we can use transformers for question answering. We will work with the huggingface library. We will work with Google Colab, so the example is reproducible. First, we need to install the libraries:
!pip install transformers !pip install torch
Now, we are ready to work with the transformers.
Example of Answering Questions
For an AI model to answer questions about an input text, it must be able to "understand" the text, which is why we call this Natural Language Understanding (NLU), and it must be able to respond by generating text, which is why we call that Natural Language Generation (NLG). The process that we will follow for question answering is described in the Hugging Face documentation:
- Start the BERT model
- Provide the input text and the required questions
- Iterate over the questions and build a sequence from the text and the current question, with the correct model-specific separators, token type IDs, and attention masks
- Pass this sequence through the model. This outputs a range of scores across the entire sequence tokens (question and text), for both the start and end positions.
- Compute the softmax of the result to get probabilities over the tokens
- Fetch the tokens from the identified start and stop values, convert those tokens to a string.
- Print the results
Note that I took the code from the Hugging Face documentation, and I made a change because there was a bug. I changed the:
answer_start_scores, answer_end_scores = model(**inputs)
with:
answer_start_scores, answer_end_scores = model(**inputs)[0], model(**inputs)[1]
In our example, I provide the following input text:
My name is George Pipis and I work as a Data Scientist at Persado. My personal blog is called Predictive Hacks which provides tutorials mainly in R and Python
And I will make the following questions:
- What is my name?
- What is George Pipis job?
- What is the name of the blog?
- What is the blog about?
- Where does George work?
Let’s see how well can transformers answer those questions above.
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")

text = r"""
My name is George Pipis and I work as a Data Scientist at Persado. My personal blog is called Predictive Hacks which provides tutorials mainly in R and Python.
"""

questions = [
    "What is my name?",
    "What is George Pipis job?",
    "What is the name of the blog?",
    "What is the blog about?",
    "Where does George work?"
]

for question in questions:
    inputs = tokenizer.encode_plus(question, text, add_special_tokens=True, return_tensors="pt")
    input_ids = inputs["input_ids"].tolist()[0]
    text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
    answer_start_scores, answer_end_scores = model(**inputs)[0], model(**inputs)[1]
    # Most likely beginning and end of the answer span
    answer_start = torch.argmax(answer_start_scores)
    answer_end = torch.argmax(answer_end_scores) + 1
    answer = tokenizer.convert_tokens_to_string(
        tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
    print(f"Question: {question}")
    print(f"Answer: {answer}\n")
Output:
Question: What is my name? Answer: george pipis Question: What is George Pipis job? Answer: data scientist Question: What is the name of the blog? Answer: predictive hacks Question: What is the blog about? Answer: tutorials Question: Where does George work? Answer: persado
Not bad at all! The model was able to provide good answers
|
https://python-bloggers.com/2021/01/answering-questions-with-transformers/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
One.
Go ahead and open the project from where we left off, or download the completed project from last week here. I have also prepared an online repository here for those of you familiar with version control.
Draw Cards Action
All reactions are actually just actions that are played as a triggered event instead of being played manually by a user interaction. This means that I will create it as a subclass of GameAction just like we did before with the ChangeTurnAction. The purpose of the action is to serve as a context for the system that will apply it. What information is necessary? In this case, we will need to know which player is performing the action and how many cards should be drawn. After the action is performed, it would also be nice if the action could indicate which cards were drawn, so that a view somewhere else could display the drawn cards correctly.
public class DrawCardsAction : GameAction { public int amount; public List<Card> cards; public DrawCardsAction(Player player, int amount) { this.player = player; this.amount = amount; } }
In the code above, I used a simple “amount” field to hold the target amount of cards that a player is “supposed” to draw. I then have a List of “cards” to hold the result – whatever was successfully drawn. Note that the count of drawn “cards” may not match the target “amount” desired, for example if you tried to draw a card, but your deck was already empty.
Player System
Next we will create a system to handle the application of the Game Action onto the Game model itself. You could choose to organize this in a variety of ways. I decided to put it in a system for the Player, because when I describe the action itself, I would describe it as a “Player draws a card”, and so if the Player is the one performing the action, then the Player system makes the most sense as the location it occurs. Should any one system become too long (such as more than a few 100 lines of code) I would of course reconsider the location of each bit of code and potentially create additional systems with smaller focus.
public class PlayerSystem : Aspect, IObserve { // ... Add Code Here }
The Player System class inherits from Aspect. You should know by now that this superclass allows the system to be attached to the same container object as all of our other systems so that they can work together if needed. We also implement the IObserve interface, which will allow it to register and unregister for notifications at appropriate times.
public void Awake () { this.AddObserver (OnPerformChangeTurn, Global.PerformNotification<ChangeTurnAction> (), container); this.AddObserver (OnPerformDrawCards, Global.PerformNotification<DrawCardsAction> (), container); } public void Destroy () { this.RemoveObserver (OnPerformChangeTurn, Global.PerformNotification<ChangeTurnAction> (), container); this.RemoveObserver (OnPerformDrawCards, Global.PerformNotification<DrawCardsAction> (), container); }
In this case, the notifications will be used initially to listen for the ChangeTurnAction which it will use as the trigger to initiate its own DrawCardsAction. It will also listen for the performance of its own action in order to actually apply the logic at the correct time.
void OnPerformChangeTurn (object sender, object args) { var action = args as ChangeTurnAction; var match = container.GetAspect<DataSystem> ().match; var player = match.players [action.targetPlayerIndex]; DrawCards (player, 1); }
In the notification handler for the performance of changing turns, we figure out which player is becoming the active player and pass it along to another method that handles creating the actual action context, using the correct player, and a fixed number of cards to draw based on changing turns.
void DrawCards (Player player, int amount) { var action = new DrawCardsAction (player, amount); container.AddReaction (action); }
The “DrawCards” method was separated on its own because there are likely to be many “triggers” for actually drawing a card(s) and this will allow me to keep my code a little more DRY. The parameters we will need to create the DrawCardsAction are passed directly to the method. Once the action is created, it is also automatically added as a reaction to the action system via the extension method on the container.
void OnPerformDrawCards (object sender, object args) { var action = args as DrawCardsAction; action.cards = action.player [Zones.Deck].Draw (action.amount); action.player [Zones.Hand].AddRange (action.cards); }
Finally we have the notification handler for actually applying the logic of drawing a card(s). I determine the number of cards to draw based on the action’s context. Then I use another extension method on a List to handle randomly taking elements from a collection (I will show the code for this next). Note that I assign them to the action itself so that views and/or reactions can know “what” cards were taken. Next, we add the drawn cards to the player’s “hand”.
There are at least two additional points that will need to be considered here in the future. First, what happens when you successfully draw a card but your hand is full? It could be that the card is "destroyed" – moved to the discard pile. The other issue is to consider what happens when you try to draw a card(s) and did not have enough left in your deck to draw? It could be that the player takes some sort of penalty, such as fatigue damage. In both cases, these should be considered additional reactions to the intended action of drawing a card.
List Extensions
In the Player System, you may have wondered how I was using a “Draw” method on the List class. I did this by adding a new extension in my pre-existing “Common/Extensions/ListExtensions” class. The methods follow:
public static T Draw<T> (this List<T> list) { if (list.Count == 0) return default(T); int index = UnityEngine.Random.Range (0, list.Count); var result = list [index]; list.RemoveAt (index); return result; } public static List<T> Draw<T> (this List<T> list, int count) { int resultCount = Mathf.Min (count, list.Count); List<T> result = new List<T> (resultCount); for (int i = 0; i < resultCount; ++i) { T item = list.Draw (); result.Add (item); } return result; }
There are two overloaded implementations of the Draw method. The first does not accept a “count” parameter, assuming that you only want to draw one card. It can be convenient because the result does not need to be wrapped by another object (a List). The second version does take a “count” of cards to try to take. The final results are returned in a List – which could be empty if there were no cards to draw. This allows the call to be safe in that you don’t need to worry about out of bounds errors on the collection you are drawing from.
One interesting note about these methods: because the items are drawn at random from the entire collection, the collection itself never needs to be shuffled. This is an on-going truth. For example, you might have imagined needing to shuffle a deck both at the beginning of a game as well as if a game action caused cards to be added to the deck during gameplay. In either case, by using the “Draw” method, each card has the same chance of being picked. Later on for demonstration purposes, I will name the cards, in order, so that it is more evident that random cards can be drawn while no shuffling is necessary.
Game Factory
Don’t forget – because we added a new system, we need to add it to the factory in order for it to be included as part of the container.
game.AddAspect<PlayerSystem> ();
Board View
In the scene hierarchy, I have added a component marking where the concept of a board would appear. In this case I decided to add a reference at this level to one of my reusable scripts called a SetPooler. A pooler is something I created to aid in the reuse of expensive objects (GameObjects) rather than needing to constantly destroy and re-create them. Without using a pooler, a battle could easily instantiate many cards, but if you use a pooler, you can limit that number because cards that have been discarded can be reused to display newer cards in the future. All I need to add to this script is a new field:
public SetPooler cardPooler;
I then created a new child GameObject in the scene (in edit mode) called the “Card Pooler” that had a SetPooler component attached. I used the “Card View” prefab as the reference assigned to this pooler. All of the other settings can be left at default, although you may wish to pre-populate it with a few instances – I set mine at 10. Finally I manually connect the BoardView’s “cardPooler” reference to the component instance just created.
Deck View
Because I have a visual reference to the concept of a player’s deck, I also want to be able to visually approximate how many cards remain in the deck. In other words, as a player draws cards, the width of the deck should slowly shrink until no cards remain. To handle this, I created a method called “ShowDeckSize” that expects a normalized value (0-1) indicating how much of the deck should be visible:
public void ShowDeckSize (float size) { squisher.localScale = Mathf.Approximately (size, 0) ? Vector3.zero : new Vector3 (1, size, 1); }
Card View
I also added a bunch of functionality to the view for displaying the cards themselves. For example, I added a reference to the Card model that needs to be displayed. When drawing cards, I want to support the ability to see both the back and front of the card. While a card is on the deck, it should be face-down, and I should only see it as face-up, if it is a card I am drawing. When my opponent draws a card, I should not be able to see it until he plays it.
public bool isFaceUp { get; private set; } public Card card; private GameObject[] faceUpElements; private GameObject[] faceDownElements; void Awake () { faceUpElements = new GameObject[] { cardFront.gameObject, healthText.gameObject, attackText.gameObject, manaText.gameObject, titleText.gameObject, cardText.gameObject }; faceDownElements = new GameObject[] { cardBack.gameObject }; Flip (isFaceUp); } public void Flip (bool shouldShow) { isFaceUp = shouldShow; var show = shouldShow ? faceUpElements : faceDownElements; var hide = shouldShow ? faceDownElements : faceUpElements; Toggle (show, true); Toggle (hide, false); Refresh (); } void Toggle (GameObject[] elements, bool isActive) { for (int i = 0; i < elements.Length; ++i) { elements [i].SetActive (isActive); } } void Refresh () { if (isFaceUp == false) return; manaText.text = card.cost.ToString (); titleText.text = card.name; cardText.text = card.text; var minion = card as Minion; if (minion != null) { attackText.text = minion.attack.ToString (); healthText.text = minion.maxHitPoints.ToString (); } else { attackText.text = string.Empty; healthText.text = string.Empty; } }
Draw Cards View
Next, we need to add the code that observes the DrawCardsAction notification and presents the results to our users. Note that this could have been placed just about anywhere, such as in the BoardView, or PlayerView component scripts. Adding it to the PlayerView might be the most intuitive since it is the PlayerSystem that performs the logic. However, since there are two PlayerView instances in a scene then we would need multiple listeners and would need to add extra code to “ignore” the action where the player didn’t match. The BoardView might have been another good choice, because it could listen to the notification one time for all players, and then just trigger the matching player to take over. I sort of liked that idea as well, but imagined that the BoardView may end up responsible for far too many tasks. In the end I decided to simply add a new component specific to this action. I created a new script in the Components folder called “DrawCardsView”. I also attached this new component to the same GameObject that the BoardView is attached to, so that I could easily get a reference to the board and its children player views.
public class DrawCardsView : MonoBehaviour { void OnEnable () { this.AddObserver (OnPrepareDrawCards, Global.PrepareNotification<DrawCardsAction> ()); } void OnDisable () { this.RemoveObserver (OnPrepareDrawCards, Global.PrepareNotification<DrawCardsAction> ()); } void OnPrepareDrawCards (object sender, object args) { var action = args as DrawCardsAction; action.perform.viewer = DrawCardsViewer; } IEnumerator DrawCardsViewer (IContainer game, GameAction action) { yield return true; // perform the action logic so that we know what cards have been drawn var drawAction = action as DrawCardsAction; var boardView = GetComponent<BoardView> (); var playerView = boardView.playerViews [drawAction.player.index]; for (int i = 0; i < drawAction.cards.Count; ++i) { int deckSize = action.player[Zones.Deck].Count + drawAction.cards.Count - (i + 1); playerView.deck.ShowDeckSize ((float)deckSize / (float)Player.maxDeck); var cardView = boardView.cardPooler.Dequeue ().GetComponent<CardView> (); cardView.card = drawAction.cards [i]; cardView.transform.ResetParent (playerView.hand.transform); cardView.transform.position = playerView.deck.topCard.position; cardView.transform.rotation = playerView.deck.topCard.rotation; cardView.gameObject.SetActive (true); var showPreview = action.player.mode == ControlModes.Local; var addCard = playerView.hand.AddCard (cardView.transform, showPreview); while (addCard.MoveNext ()) yield return null; } } }
Because this is a MonoBehaviour, I can simply use the “OnEnable” and “OnDisable” methods to add and remove notification listeners. In this case I am observing the “prepare” phase of the DrawCardsAction as an opportunity to attach a “viewer” to the “perform” phase of the same action.
In the viewer method itself I make the very first statement a return statement with a value of “true” – which causes the “perform” key frame to trigger. This means that the Player System would apply the logic, and the DrawCardsAction should have its “cards” field updated so we know which cards have successfully been drawn.
Next, I can cache some references such as getting the correct PlayerView which matches the player who is actually drawing cards. I then loop over the number of cards that need to be drawn. Within each loop I determine how many cards are left in the deck and scale the deck view appropriately. Then I use my SetPooler to “Dequeue” a new card view instance (automatically creating new objects if necessary). I parent the view instance to the GameObject in the scene that represents the location for the player’s hand, but I set its world position and rotation to match the top of the player’s deck. You can think of this as the first keyframe in a tween so that we can animate the card from the deck to our hand. However, the animation needs to be different depending on whether or not it is the local player or the opponent that is drawing the card. Furthermore, there are several actions that can cause a card to need to be put in a players hand besides drawing a card from a deck. Therefore, I implemented the rest of this viewer’s animation in another script called the HandView.
Hand View
In the Hand View, I created a public method “AddCard” which accepts a transform reference of a GameObject (that should also have a CardView component attached). It also takes a parameter called “Show Preview” that indicates whether or not the card should animate straight into the player’s hand, or if it should take a small detour so that the player can see what was drawn.
public IEnumerator AddCard (Transform card, bool showPreview) { if (showPreview) { var preview = ShowPreview (card); while (preview.MoveNext ()) yield return null; } cards.Add (card); var layout = LayoutCards (); while (layout.MoveNext ()) yield return null; }
I created a “ShowPreview” method to handle the display of a drawn card to a user before sliding a card into place among the other cards in the hand. It Tweens the card view from “wherever” it currently is (in our case it will be located at the deck), and animates it moving to the same position as another GameObject that appears in the scene hierarchy – we have cached a reference to it called the “activeHandle”. While a card is between the deck and the display location, we will be checking its rotation. Whenever we determine that the card is physically face-up based on the rotation angle, we tell the CardView component to update itself appropriatly. After reaching the activeHandle position, the card remains still at that location for a second to give a user plenty of time to see it.
IEnumerator ShowPreview (Transform card) { Tweener tweener = null; card.RotateTo (activeHandle.rotation); tweener = card.MoveTo (activeHandle.position, Tweener.DefaultDuration, EasingEquations.EaseOutBack); var cardView = card.GetComponent<CardView> (); while (tweener != null) { if (!cardView.isFaceUp) { var toCard = (Camera.main.transform.position - card.position).normalized; if (Vector3.Dot (card.up, toCard) > 0) cardView.Flip (true); } yield return null; } tweener = card.Wait (1); while (tweener != null) yield return null; }
Finally, I have added a “Layoutcards” method which adjusts the position of all of the cards in the hand so that they can make room for the newly drawn card.
IEnumerator LayoutCards (bool animated = true) { var overlap = 0.2f; var width = cards.Count * overlap; var xPos = -(width / 2f); var duration = animated ? 0.25f : 0; Tweener tweener = null; for (int i = 0; i < cards.Count; ++i) { var canvas = cards [i].GetComponentInChildren<Canvas> (); canvas.sortingOrder = i; var position = inactiveHandle.position + new Vector3 (xPos, 0, 0); cards [i].RotateTo (inactiveHandle.rotation, duration); tweener = cards [i].MoveTo (position, duration); xPos += overlap; } while (tweener != null) yield return null; }
Game View System
There is one final step to do before trying everything out. At the moment, we haven’t actually created a deck of cards for either player. We need some sort of placeholder data for now, and I want it to have random values for all of the card fields like mana cost, attack and health, etc so that we can see whether or not the views update properly. In addition, I want to name each card in order so that we can see that cards are drawn as if the deck was shuffled, even though we didn’t need to shuffle it. Open up the GameViewSystem script and update the Temp_SetupSinglePlayer method as follows:
void Temp_SetupSinglePlayer() { var match = container.GetMatch (); match.players [0].mode = ControlModes.Local; match.players [1].mode = ControlModes.Computer; foreach (Player p in match.players) { for (int i = 0; i < Player.maxDeck; ++i) { var card = new Minion (); card.name = "Card " + i.ToString(); card.cost = Random.Range (1, 10); card.maxHitPoints = card.hitPoints = Random.Range (1, card.cost); card.attack = card.cost - card.hitPoints; p [Zones.Deck].Add (card); } } }
Demo
At this point we have successfully added everything necessary to implement drawing a card, both for the model behind the scenes and for the view to display it to the user. Save your scene and project and then press “Play” for the “Game” scene. When you press the “End Turn” button you should see the opponent draw a card before it is your turn again. When your turn begins you will also draw a card, but the animation should be different – you should see a preview of your card before it lands face up in your own hand. You wont be able to change turns again until the entire action sequence has played out at which point the game state will be idle again. You can keep drawing cards as long as you like, and should notice the size of the deck shrink over time. Eventually the deck will be depleted and the view for the deck will disappear from the screen. You can continue changing turns, but no additional cards will be drawn.
Summary
In this lesson, we created our first action-sequence, by causing a new action to occur as a result of another action. Now, whenever a turn is changed, a player will draw a card. We implemented everything necessary on the models, systems and views to make a complete and playable demo.
Hi John, amazing tutorial so far. I'm struggling to understand the loop of this architecture, do you have a link or maybe another post on the blog that explains the workflow for the Game Systems, Actions and notifications? I kinda understand the notification system, but after rereading the code it's getting harder to follow up with so many systems working simultaneously. Cheers from Brazil!
Much of the game systems in this tutorial are unique to this game, due to the complexity of its design. If there is something particular you are struggling with, feel free to ask here or on my forums and I will try to elaborate. I have talked about actions and notifications in other blog posts such as this mini 3 part series where I originally created the notification center:
I have also used similar architecture patterns in other tutorial project series. If something doesn’t click in one, maybe it would help to try another such as the Tactics RPG. Good luck!
|
https://theliquidfire.com/2017/10/02/make-a-ccg-drawing-cards/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
In the previous post we built a basic Pyramid application: a foundation for a RESTful API. For simplicity, I left out many details. Today, we will transform that application into something much closer to a real API. At the end of this article, we will have developed a Pyramid API that handles a single resource, persisted in a MongoDB database and routed with Pyramid's traversal mechanism.
Views
Let’s start with the view layer. A RESTful API should provide up to four operations to interact with each resource: Create, Retrieve, Update and Delete (CRUD). According to REST guidelines, URLs identify resources while HTTP verbs are used to specify actions on these resources. As a result we end up with a unified and consistent naming approach.
A RESTful API needs to provide only two URLs per resource: one for a collection and one for a specific resource. Collections will be referred by plural names while resources by identifiers (
ObjectId in case of MongoDB).
/cities # HTTP verbs allowed: GET, POST /cities/:id # HTTP verbs allowed: GET, PUT, DELETE
Operations update, retrieve (single result) and delete are bound to a single resource, i.e. City (specified as context).
@view_config(request_method='PATCH', context=City, renderer='json') def update_city(context, request): r = context.update(request.json_body, True) return Response( status='202 Accepted', content_type='application/json; charset=UTF-8') @view_config(request_method='GET', context=City, renderer='json') def get_city(context, request): r = context.retrieve() if r is None: raise HTTPNotFound() else: return r @view_config(request_method='DELETE', context=City, renderer='json') def delete_city(context, request): context.delete() return Response( status='202 Accepted', content_type='application/json; charset=UTF-8')
For operations create and retrieve (listing) we define context as a resource collection, i.e. Cities.
@view_config(request_method='PUT', context=Cities, renderer='json') def create_city(context, request): r = context.create(request.json_body) return Response( status='201 Created', content_type='application/json; charset=UTF-8') @view_config(request_method='GET', context=Cities, renderer='json') def list_cities(context, request): return context.retrieve()
Error handling is still very basic. Pyramid allows us to redefine most common errors in the view with a custom handler. We will use that feature to redefine
notfound handler so it returns data in JSON format.
@notfound_view_config() def notfound(request): return Response( body=json.dumps({'message': 'Custom `Not Found` message'}), status='404 Not Found', content_type='application/json')
Finally, we can optionally add a view for the root of our API; it will be called
@view_config(renderer='json', context=Root) def home(context, request): return {'info': 'City API'}
Persistance Layer
We will be using PyMongo to talk to our database; it’s a lightweight Python driver supported by 10gen. You can install it into current virtual environment using
pip.
pip install pymongo
We need to also add
pymongo to the list of
requires inside
setup.py.
requires = [ … 'pymongo' ]
Within
development.ini, under
[app:main], we specify our database connection string.
mongo_uri = mongodb://127.0.0.1:27017/cityz
Now, we are ready to write the code that connects to the database. It will be stored in
db.py.
def includeme(config): settings = config.registry.settings # Store DB connection in registry db_url = urlparse(settings['mongo_uri']) conn = pymongo.Connection(host=db_url.hostname, port=db_url.port) settings['db_conn'] = conn # Make DB connection accessible as a request property def _get_db(request): settings = request.registry.settings db = settings['db_conn'][db_url.path[1:]] if db_url.username and db_url.password: db.authenticate(db_url.username, db_url.password) return db config.set_request_property(_get_db, 'db', reify=True)
This code parses the configuration file and adds MongoDB connection as a request property. Lastly, we include that file in
__init__.py using
config.include('.db').
Traversal
The second major change will be transforming our routing mechanism: traditional URL dispatch approach will be replaced by traversal, Pyramid’s unique feature.
In a nutshell, the idea behind traversal is to build a tree structure out of possible paths which the application can respond to; e.g.
/country/us/cities/sf,
/country/france/cities/paris,
/country/france/cities/nancy form a tree with three nodes. When a request reaches the application, its path is being compared with each branch from that tree (in other words, the tree is being traversed to find a matching branch). If a branch matches the requested path, the associated logic is applied.
Such approach goes well with document-oriented approach of MongoDB database; it allows to map application’s routes hierarchy directly into a hierarchy of the underlaying data store. For the previous example,
/country segment can be mapped to
countries collection in a MongoDB database while
/city segment could be associated with embedded
cities collection. Obviously, this only makes sense if it’s the natural way to present the data, i.e. a city cannot belong to the same country at the same time.
When using traversal, we don't need the code that dispatches requests (let’s remove that from
__init__.py). The dispatch process is being handled by Pyramid's resource layer - each Pyramid resource represents a node in a virtual tree that maps to the structure of a route.
Let’s start with the resource abstraction that is built around
dict. It knows what is its parent and how to bind another resource node to it via
add_child method.
class Resource(dict): def __init__(self, ref, parent): self.__name__ = ref self.__parent__ = parent def __repr__(self): # use standard object representation (not dict's) return object.__repr__(self) def add_child(self, ref, klass): resource = klass(ref=ref, parent=self) self[ref] = resource
In the next step, we are going to focus on a persistence abstraction and separate MongoDB collection from MongoDB document representation. A collection should only know how to fetch its all elements (its
retrieve method) and how to add a new element to that collection (its
create method).
class MongoCollection(Resource): @property def collection(self): root = find_root(self) request = root.request return request.db[self.collection_name] def retrieve(self): return [elem for elem in self.collection.find()] def create(self, document): object_id = self.collection.insert(document) return self.resource_name(ref=str(object_id), parent=self)
A single document abstraction operates on « itself » and should be able (1) to return an element for a particular identifier, (2) to update an element with a particular identifier or (3) to delete an element with a particular identifier; this identifier is stored inside
ref for a given resource.
class MongoDocument(Resource): def __init__(self, ref, parent): Resource.__init__(self, ref, parent) self.collection = parent.collection self.spec = {'_id': ObjectId(ref)} def retrieve(self): return self.collection.find_one(self.spec) def update(self, data, patch=False): if patch: data = {'$set': data} self.collection.update(self.spec, data) def delete(self): self.collection.remove(self.spec)
The update method is able to distinguish between a partial update (when patch is True) and a full update (where the resource is fully replaced).
With those persistence abstraction, we can now construct resources that will correspond to our City resource, i.e. its collection (named
Cities) and its single document (named
City).
class City(MongoDocument): def __init__(self, ref, parent): MongoDocument.__init__(self, ref, parent) class Cities(MongoCollection): collection_name = 'cities' resource_name = City def __getitem__(self, ref): return City(ref, self)
Cities collection delegates the task to
City document when an identifier is provided.
Now, we are ready to assemble it all together using
Root resource.
class Root(Resource): def __init__(self, request): Resource.__init__(self, ref='', parent=None) self.request = request self.add_child('cities', Cities)
Test it
With everything put in place, we are ready to run our application and see how it behaves. First, let’s create several cities
curl -XPOST -d '{ "name": "Poznan", "population": "550,742" }' localhost:6543/cities
curl -XPOST -d '{ "name": "Paris", "population": "2,234,105" }' localhost:6543/cities
curl -XPOST -d '{ "name": "San Francisco", "population": "825,865" }' localhost:6543/cities
Let’s verify that they were persisted and are available from the API.
curl localhost:6543/cities [{"_id": {"$oid": "5292826fa0022dde6b80ebf0"}, "name": "Paris", "population": "2,234,105"}, {"_id": {"$oid": "52928283a0022dde6b80ebf1"}, "name": "San Francisco", "population": "825,865"}, {"_id": {"$oid": "5292abce643a0851b949db22"}, "name": "Poznan", "population": "550,742"}]%
There is a small mistake in the name of Poznań city. The last letter is a special character. Let’s amend it by updating that particular city.
curl -XPUT -d '{"name": "Poznań"}' localhost:6543/cities/5292abce643a0851b949db22
Let’s see if that changed has been taken into account by retrieving that particular city.
curl localhost:6543/cities/5292abce643a0851b949db22 {"_id": {"$oid": "5292abce643a0851b949db22"}, "name": "Pozna\u0144", "population": "550,742"}%
Lastly, we should be able to remove any city.
curl -XDELETE localhost:6543/cities/5292826fa0022dde6b80ebf0 curl localhost:6543/cities [{"_id": {"$oid": "52928283a0022dde6b80ebf1"}, "name": "San Francisco", "population": "825,865"}, {"_id": {"$oid": "5292abce643a0851b949db22"}, "name": "Poznan", "population": "550,742"}]%
Summary
Our application looks now much more like a real RESTful API. We have implemented all CRUD operations backed with a MongoDB database. We have also switched the routing mechanism into traversal. The application, however, is not very generic: there are many code repetitions, error handling is rudimentary and we haven’t written a single test. In the next article, I will show you how to solve these problems.
The code of this tutorial is available on Github under
persist-part2 tag.
|
https://zaiste.net/posts/building-restful-api-pyramid-resource-traversal/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
from thinkbayes2 import Pmf, Cdf, Suite import thinkplot
Different species of flea beetle can be distinguished by the width and angle of the aedeagus. The data below includes measurements and known species classifications for 74 specimens.
Suppose you discover a new specimen under conditions where it is equally likely to be any of the three species. You measure the aedeagus and find a width of 140 microns and an angle of 15 (in multiples of 7.5 degrees). What is the probability that it belongs to each species?
This problem is based on this data story on DASL
Datafile Name: Flea Beetles
Datafile Subjects: Biology
Story Names: Flea Beetles
Reference: Lubischew, A.A. (1962) On the use of discriminant functions in taxonomy. Biometrics, 18, 455-477. Also found in: Hand, D.J., et al. (1994) A Handbook of Small Data Sets, London: Chapman & Hall, 254-255.
Authorization: Contact Authors
Description: Data were collected on the genus of flea beetle Chaetocnema, which contains three species: concinna (Con), heikertingeri (Hei), and heptapotamica (Hep). Measurements were made on the width and angle of the aedeagus of each beetle. The goal of the original study was to form a classification rule to distinguish the three species.
Number of cases: 74
Variable Names:
Width: The maximal width of aedeagus in the forpart (in microns)
Angle: The front angle of the aedeagus (1 unit = 7.5 degrees)
Species: Species of flea beetle from the genus Chaetocnema
We can read the data from this file:
import pandas as pd df = pd.read_csv('../data/flea_beetles.csv', delimiter='\t') df.head()
Here's what the distributions of width look like.
def plot_cdfs(df, col): for name, group in df.groupby('Species'): cdf = Cdf(group[col], label=name) thinkplot.Cdf(cdf) thinkplot.decorate(xlabel=col, ylabel='CDF', loc='lower right')
plot_cdfs(df, 'Width')
And the distributions of angle.
plot_cdfs(df, 'Angle')
I'll group the data by species and compute summary statistics.
grouped = df.groupby('Species')
<pandas.core.groupby.groupby.DataFrameGroupBy object at 0x7f33a1dd59b0>
Here are the means.
means = grouped.mean()
And the standard deviations.
stddevs = grouped.std()
And the correlations.
for name, group in grouped: corr = group.Width.corr(group.Angle) print(name, corr)
Con -0.193701151757636 Hei -0.06420611481268008 Hep -0.12478515405529574
Those correlations are small enough that we can get an acceptable approximation by ignoring them, but we might want to come back later and write a complete solution that takes them into account.
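As an aside, a sketch of what that more complete solution could look like, modelling each species with a joint normal distribution over width and angle via scipy's multivariate_normal (this is not used in the rest of the notebook):

from scipy.stats import multivariate_normal

# Joint (Width, Angle) distribution per species, including the covariance.
dist_joint = {}
for name, group in grouped:
    data = group[['Width', 'Angle']]
    dist_joint[name] = multivariate_normal(data.mean(), data.cov())

class BeetleJoint(Suite):
    def Likelihood(self, data, hypo):
        # data: (width, angle) pair; hypo: species name
        return dist_joint[hypo].pdf(data)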
To support the likelihood function, I'll make a dictionary for each attribute that contains a
norm object for each species.
from scipy.stats import norm dist_width = {} dist_angle = {} for name, group in grouped: dist_width[name] = norm(group.Width.mean(), group.Width.std()) dist_angle[name] = norm(group.Angle.mean(), group.Angle.std())
Now we can write the likelihood function concisely.
class Beetle(Suite): def Likelihood(self, data, hypo): """ data: sequence of width, height hypo: name of species """ width, angle = data name = hypo like = dist_width[name].pdf(width) like *= dist_angle[name].pdf(angle) return like
The hypotheses are the species names:
hypos = grouped.groups.keys()
dict_keys(['Con', 'Hei', 'Hep'])
We'll start with equal priors
suite = Beetle(hypos) suite.Print()
Con 0.3333333333333333 Hei 0.3333333333333333 Hep 0.3333333333333333
Now we can update with the data and print the posterior.
suite.Update((140, 15)) suite.Print()
Con 0.9902199258865487 Hei 0.009770186966082915 Hep 9.887147368342703e-06
Based on these measurements, the specimen is very likely to be an example of Chaetocnema concinna.
|
https://nbviewer.jupyter.org/github/AllenDowney/ThinkBayes2/blob/master/examples/flea_beetle_soln.ipynb
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Chapter 41. Consumer Interface
Abstract
This chapter describes how to implement the Consumer interface, which is an essential step in the implementation of a Apache Camel component.
41.1. The Consumer Interface
Overview
An instance of org.apache.camel.Consumer type represents a source endpoint in a route. There are several different ways of implementing a consumer (see Section 38.1.3, “Consumer Patterns and Threading”), and this degree of flexibility is reflected in the inheritance hierarchy ( see Figure 41.1, “Consumer Inheritance Hierarchy”), which includes several different base classes for implementing a consumer.
Figure 41.1. Consumer Inheritance Hierarchy
Consumer parameter injection
For consumers that follow the scheduled poll pattern (see the section called “Scheduled poll pattern”), Apache Camel provides support for injecting parameters into consumer instances. For example, consider the following endpoint URI for a component identified by the
custom prefix:
custom:destination?consumer.myConsumerParam
Apache Camel provides support for automatically injecting query options of the form
consumer.*. For the
consumer.myConsumerParam parameter, you need to define corresponding setter and getter methods on the Consumer implementation class as follows:
public class CustomConsumer extends ScheduledPollConsumer { ... String getMyConsumerParam() { ... } void setMyConsumerParam(String s) { ... } ... }
Where the getter and setter methods follow the usual Java bean conventions (including capitalizing the first letter of the property name).
In addition to defining the bean methods in your Consumer implementation, you must also remember to call the
configureConsumer() method in the implementation of
Endpoint.createConsumer(). See the section called “Scheduled poll endpoint implementation”). Example 41.1, “FileEndpoint createConsumer() Implementation” shows an example of a
createConsumer() method implementation, taken from the
FileEndpoint class in the file component:
Example 41.1. FileEndpoint createConsumer() Implementation
... public class FileEndpoint extends ScheduledPollEndpoint { ... public Consumer createConsumer(Processor processor) throws Exception { Consumer result = new FileConsumer(this, processor); configureConsumer(result); return result; } ... }
At run time, consumer parameter injection works as follows:
- When the endpoint is created, the default implementation of
DefaultComponent.createEndpoint(String uri)parses the URI to extract the consumer parameters, and stores them in the endpoint instance by calling
ScheduledPollEndpoint.configureProperties().
- When
createConsumer()is called, the method implementation calls
configureConsumer()to inject the consumer parameters (see Example 41.1, “FileEndpoint createConsumer() Implementation”).
- The
configureConsumer()method uses Java reflection to call the setter methods whose names match the relevant options after the
consumer.prefix has been stripped off.
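Putting these steps together, here is a minimal sketch (not taken from the guide; the class and option names are illustrative, and it reuses the CustomConsumer class with the setter shown above) of an endpoint for the custom prefix whose createConsumer() applies a URI such as custom:destination?consumer.myConsumerParam=someValue to the consumer's bean properties:

import org.apache.camel.Consumer;
import org.apache.camel.Processor;
import org.apache.camel.Producer;
import org.apache.camel.impl.ScheduledPollEndpoint;

public class CustomEndpoint extends ScheduledPollEndpoint {

    public Consumer createConsumer(Processor processor) throws Exception {
        CustomConsumer consumer = new CustomConsumer(this, processor);
        // Injects consumer.* options (for example consumer.myConsumerParam)
        // into the matching setters on CustomConsumer by reflection.
        configureConsumer(consumer);
        return consumer;
    }

    public Producer createProducer() throws Exception {
        throw new UnsupportedOperationException("This endpoint only supports consumers");
    }

    public boolean isSingleton() {
        return true;
    }
}

The call to configureConsumer() is what strips the consumer. prefix from the query options and invokes the corresponding setter, as described in the steps above.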
Scheduled poll parameters
A consumer that follows the scheduled poll pattern automatically supports the consumer parameters shown in Table 41.1, “Scheduled Poll Parameters” (which can appear as query options in the endpoint URI).
Table 41.1. Scheduled Poll Parameters
Converting between event-driven and polling consumers
Apache Camel provides two special consumer implementations which can be used to convert back and forth between an event-driven consumer and a polling consumer. The following conversion classes are provided:
org.apache.camel.impl.EventDrivenPollingConsumer— Converts an event-driven consumer into a polling consumer instance.
org.apache.camel.impl.DefaultScheduledPollConsumer— Converts a polling consumer into an event-driven consumer instance.
In practice, these classes are used to simplify the task of implementing an Endpoint type. The Endpoint interface defines the following two methods for creating a consumer instance:
package org.apache.camel; public interface Endpoint { ... Consumer createConsumer(Processor processor) throws Exception; PollingConsumer createPollingConsumer() throws Exception; }
createConsumer() returns an event-driven consumer and
createPollingConsumer() returns a polling consumer. You would normally implement only one of these methods. For example, if you are following the event-driven pattern for your consumer, you would implement the
createConsumer() method and provide a method implementation for
createPollingConsumer() that simply raises an exception. With the help of the conversion classes, however, Apache Camel is able to provide a more useful default implementation.
For example, if you want to implement your consumer according to the event-driven pattern, you implement the endpoint by extending
DefaultEndpoint and implementing the
createConsumer() method. The implementation of
createPollingConsumer() is inherited from
DefaultEndpoint, where it is defined as follows:
public PollingConsumer<E> createPollingConsumer() throws Exception { return new EventDrivenPollingConsumer<E>(this); }
The
EventDrivenPollingConsumer constructor takes a reference to the event-driven consumer,
this, effectively wrapping it and converting it into a polling consumer. To implement the conversion, the
EventDrivenPollingConsumer instance buffers incoming events and makes them available on demand through the
receive(), the
receive(long timeout), and the
receiveNoWait() methods.
Analogously, if you are implementing your consumer according to the polling pattern, you implement the endpoint by extending
DefaultPollingEndpoint and implementing the
createPollingConsumer() method. In this case, the implementation of the
createConsumer() method is inherited from
DefaultPollingEndpoint, and the default implementation returns a
DefaultScheduledPollConsumer instance (which converts the polling consumer into an event-driven consumer).
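As an illustration of what the wrapper classes make possible, the following sketch (not part of the guide; the custom:destination URI and the timeout value are placeholders) shows a client receiving exchanges through the PollingConsumer API from an endpoint that only implements createConsumer(), relying on the default EventDrivenPollingConsumer wrapping described above:

import org.apache.camel.CamelContext;
import org.apache.camel.Endpoint;
import org.apache.camel.Exchange;
import org.apache.camel.PollingConsumer;
import org.apache.camel.impl.DefaultCamelContext;

public class PollingClient {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.start();

        Endpoint endpoint = context.getEndpoint("custom:destination");
        // If the endpoint only implements createConsumer(), the inherited
        // createPollingConsumer() wraps it in an EventDrivenPollingConsumer.
        PollingConsumer consumer = endpoint.createPollingConsumer();
        consumer.start();

        // Wait up to one second for a buffered exchange; null if none arrived.
        Exchange exchange = consumer.receive(1000);
        if (exchange != null) {
            System.out.println(exchange.getIn().getBody(String.class));
        }

        consumer.stop();
        context.stop();
    }
}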
ShutdownPrepared interface
Consumer classes can optionally implement the
org.apache.camel.spi.ShutdownPrepared interface, which enables your custom consumer endpoint to receive shutdown notifications.
Example 41.2, “ShutdownPrepared Interface” shows the definition of the
ShutdownPrepared interface.
Example 41.2. ShutdownPrepared Interface
package org.apache.camel.spi; public interface ShutdownPrepared { void prepareShutdown(boolean forced); }
The
ShutdownPrepared interface defines the following methods:
prepareShutdown
Receives notifications to shut down the consumer endpoint in one or two phases, as follows:
- Graceful shutdown — where the
forcedargument has the value
false. Attempt to clean up resources gracefully. For example, by stopping threads gracefully.
- Forced shutdown — where the
forcedargument has the value
true. This means that the shutdown has timed out, so you must clean up resources more aggressively. This is the last chance to clean up resources before the process exits.
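A minimal sketch of how a consumer might implement this interface, following the definition shown in Example 41.2 (the worker pool is an illustrative resource, not something the interface requires):

import java.util.concurrent.ExecutorService;

import org.apache.camel.spi.ShutdownPrepared;

public class CustomConsumer implements ShutdownPrepared {

    // Illustrative resource owned by this consumer; substitute whatever your
    // consumer actually holds (threads, connections, listeners, and so on).
    private final ExecutorService workerPool;

    public CustomConsumer(ExecutorService workerPool) {
        this.workerPool = workerPool;
    }

    public void prepareShutdown(boolean forced) {
        if (!forced) {
            // Graceful phase: stop accepting new work and let running tasks finish.
            workerPool.shutdown();
        } else {
            // Forced phase: the shutdown timed out, so release resources aggressively.
            workerPool.shutdownNow();
        }
    }
}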
ShutdownAware interface
Consumer classes can optionally implement the
org.apache.camel.spi.ShutdownAware interface, which interacts with the graceful shutdown mechanism, enabling a consumer to ask for extra time to shut down. This is typically needed for components such as SEDA, which can have pending exchanges stored in an internal queue. Normally, you would want to process all of the exchanges in the queue before shutting down the SEDA consumer.
Example 41.3, “ShutdownAware Interface” shows the definition of the
ShutdownAware interface.
Example 41.3. ShutdownAware Interface
// Java
package org.apache.camel.spi;

import org.apache.camel.ShutdownRunningTask;

public interface ShutdownAware extends ShutdownPrepared {

    boolean deferShutdown(ShutdownRunningTask shutdownRunningTask);

    int getPendingExchangesSize();
}
The
ShutdownAware interface defines the following methods:
deferShutdown
Return
truefrom this method, if you want to delay shutdown of the consumer. The
shutdownRunningTaskargument is an
enumwhich can take either of the following values:
ShutdownRunningTask.CompleteCurrentTaskOnly— finish processing the exchanges that are currently being processed by the consumer’s thread pool, but do not attempt to process any more exchanges than that.
ShutdownRunningTask.CompleteAllTasks— process all of the pending exchanges. For example, in the case of the SEDA component, the consumer would process all of the exchanges from its incoming queue.
getPendingExchangesSize
- Indicates how many exchanges remain to be processed by the consumer. A zero value indicates that processing is finished and the consumer can be shut down.
For an example of how to define the
ShutdownAware methods, see Example 41.7, “Custom Threading Implementation”.
41.2. Implementing the Consumer Interface
Alternative ways of implementing a consumer
You can implement a consumer in one of the following ways: as an event-driven consumer, as a scheduled poll consumer, as a polling consumer, or with a custom threading implementation. Each of these approaches is described below.
Event-driven consumer implementation
In an event-driven consumer, processing is driven explicitly by external events. The events are received through an event-listener interface, where the listener interface is specific to the particular event source.
Example 41.4, “JMXConsumer Implementation” shows the implementation of the
JMXConsumer class, which is taken from the Apache Camel JMX component implementation. The
JMXConsumer class is an example of an event-driven consumer, which is implemented by inheriting from the
org.apache.camel.impl.DefaultConsumer class. In the case of the
JMXConsumer example, events are represented by calls on the
NotificationListener.handleNotification() method, which is a standard way of receiving JMX events. In order to receive these JMX events, it is necessary to implement the NotificationListener interface and override the
handleNotification() method, as shown in Example 41.4, “JMXConsumer Implementation”.
Example 41.4. JMXConsumer Implementation
package org.apache.camel.component.jmx; import javax.management.Notification; import javax.management.NotificationListener; import org.apache.camel.Processor; import org.apache.camel.impl.DefaultConsumer; public class JMXConsumer extends DefaultConsumer implements NotificationListener { 1 JMXEndpoint jmxEndpoint; public JMXConsumer(JMXEndpoint endpoint, Processor processor) { 2 super(endpoint, processor); this.jmxEndpoint = endpoint; } public void handleNotification(Notification notification, Object handback) { 3 try { getProcessor().process(jmxEndpoint.createExchange(notification)); 4 } catch (Throwable e) { handleException(e); 5 } } }
- 1
- The
JMXConsumerpattern follows the usual pattern for event-driven consumers by extending the
DefaultConsumerclass. Additionally, because this consumer is designed to receive events from JMX (which are represented by JMX notifications), it is necessary to implement the
NotificationListenerinterface.
- 2
- You must implement at least one constructor that takes a reference to the parent endpoint,
endpoint, and a reference to the next processor in the chain,
processor, as arguments.
- 3
- The
handleNotification()method (which is defined in
NotificationListener) is automatically invoked by JMX whenever a JMX notification arrives. The body of this method should contain the code that performs the consumer’s event processing. Because the
handleNotification()call originates from the JMX layer, the consumer’s threading model is implicitly controlled by the JMX layer, not by the
JMXConsumerclass.
- 4
- This line of code combines two steps. First, the JMX notification object is converted into an exchange object, which is the generic representation of an event in Apache Camel. Then the newly created exchange object is passed to the next processor in the route (invoked synchronously).
- 5
- The
handleException()method is implemented by the
DefaultConsumerbase class. By default, it handles exceptions using the
org.apache.camel.impl.LoggingExceptionHandlerclass.
The
handleNotification() method is specific to the JMX example. When implementing your own event-driven consumer, you must identify an analogous event listener method to implement in your custom consumer.
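For instance, a custom consumer for a hypothetical client library that delivers messages through its own listener callback could follow the same shape as Example 41.4. In this sketch (not from the guide), the MessageListener interface and the onMessage() callback stand in for whatever listener API your library actually exposes:

import org.apache.camel.Endpoint;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.impl.DefaultConsumer;

// Placeholder for the client library's own callback interface.
interface MessageListener {
    void onMessage(String message);
}

public class CustomConsumer extends DefaultConsumer implements MessageListener {

    public CustomConsumer(Endpoint endpoint, Processor processor) {
        super(endpoint, processor);
    }

    // The analogue of handleNotification(): invoked by the client library's
    // thread whenever a message arrives.
    public void onMessage(String message) {
        try {
            // Wrap the library-specific payload in an exchange and pass it on.
            Exchange exchange = getEndpoint().createExchange();
            exchange.getIn().setBody(message);
            getProcessor().process(exchange);
        } catch (Throwable e) {
            handleException(e);
        }
    }
}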
Scheduled poll consumer implementation
In a scheduled poll consumer, polling events are automatically generated by a timer class,
java.util.concurrent.ScheduledExecutorService. To receive the generated polling events, you must implement the
ScheduledPollConsumer.poll() method (see Section 38.1.3, “Consumer Patterns and Threading”).
Example 41.5, “ScheduledPollConsumer Implementation” shows how to implement a consumer that follows the scheduled poll pattern, which is implemented by extending the
ScheduledPollConsumer class.
Example 41.5. ScheduledPollConsumer Implementation
import java.util.concurrent.ScheduledExecutorService;

import org.apache.camel.Consumer;
import org.apache.camel.Endpoint;
import org.apache.camel.Exchange;
import org.apache.camel.Message;
import org.apache.camel.PollingConsumer;
import org.apache.camel.Processor;
import org.apache.camel.impl.ScheduledPollConsumer;

public class CustomConsumer extends ScheduledPollConsumer { 1

    private final CustomEndpoint endpoint;

    public CustomConsumer(CustomEndpoint endpoint, Processor processor) { 2
        super(endpoint, processor);
        this.endpoint = endpoint;
    }

    protected void poll() throws Exception { 3
        Exchange exchange = /* Receive exchange object ... */;

        // Example of a synchronous processor.
        getProcessor().process(exchange); 4
    }

    @Override
    protected void doStart() throws Exception { 5
        // Pre-Start:
        // Place code here to execute just before start of processing.
        super.doStart();
        // Post-Start:
        // Place code here to execute just after start of processing.
    }

    @Override
    protected void doStop() throws Exception { 6
        // Pre-Stop:
        // Place code here to execute just before processing stops.
        super.doStop();
        // Post-Stop:
        // Place code here to execute just after processing stops.
    }
}
- 1
- Implement a scheduled poll consumer class, CustomConsumer, by extending the
org.apache.camel.impl.ScheduledPollConsumerclass.
- 2
- You must implement at least one constructor that takes a reference to the parent endpoint,
endpoint, and a reference to the next processor in the chain,
processor, as arguments.
- 3
- Override the
poll()method to receive the scheduled polling events. This is where you should put the code that retrieves and processes incoming events (represented by exchange objects).
- 4
- In this example, the event is processed synchronously. If you want to process events asynchronously, you should use a reference to an asynchronous processor instead, by calling
getAsyncProcessor(). For details of how to process events asynchronously, see Section 38.1.4, “Asynchronous Processing”.
- 5
- (Optional) If you want some lines of code to execute as the consumer is starting up, override the
doStart()method as shown.
- 6
- (Optional) If you want some lines of code to execute as the consumer is stopping, override the
doStop()method as shown.
Polling consumer implementation
Example 41.6, “PollingConsumerSupport Implementation” outlines how to implement a consumer that follows the polling pattern, which is implemented by extending the
PollingConsumerSupport class.
Example 41.6. PollingConsumerSupport Implementation
import org.apache.camel.Exchange;
import org.apache.camel.RuntimeCamelException;
import org.apache.camel.impl.PollingConsumerSupport;

public class CustomConsumer extends PollingConsumerSupport { 1

    private final CustomEndpoint endpoint;

    public CustomConsumer(CustomEndpoint endpoint) { 2
        super(endpoint);
        this.endpoint = endpoint;
    }

    public Exchange receiveNoWait() { 3
        Exchange exchange = /* Obtain an exchange object. */;
        // Further processing ...
        return exchange;
    }

    public Exchange receive() { 4
        // Blocking poll ...
    }

    public Exchange receive(long timeout) { 5
        // Poll with timeout ...
    }

    protected void doStart() throws Exception { 6
        // Code to execute whilst starting up.
    }

    protected void doStop() throws Exception {
        // Code to execute whilst shutting down.
    }
}
- 1
- Implement your polling consumer class, CustomConsumer, by extending the
org.apache.camel.impl.PollingConsumerSupportclass.
- 2
- You must implement at least one constructor that takes a reference to the parent endpoint,
endpoint, as an argument. A polling consumer does not need a reference to a processor instance.
- 3
- The
receiveNoWait()method should implement a non-blocking algorithm for retrieving an event (exchange object). If no event is available, it should return
null.
- 4
- The
receive()method should implement a blocking algorithm for retrieving an event. This method can block indefinitely, if events remain unavailable.
- 5
- The
receive(long timeout)method implements an algorithm that can block for as long as the specified timeout (typically specified in units of milliseconds).
- 6
- If you want to insert code that executes while a consumer is starting up or shutting down, implement the
doStart()method and the
doStop()method, respectively.
Custom threading implementation
If the standard consumer patterns are not suitable for your consumer implementation, you can implement the
Consumer interface directly and write the threading code yourself. When writing the threading code, however, it is important that you comply with the standard Apache Camel threading model, as described in Section 2.8, “Threading Model”.
For example, the SEDA component from
camel-core implements its own consumer threading, which is consistent with the Apache Camel threading model. Example 41.7, “Custom Threading Implementation” shows an outline of how the
SedaConsumer class implements its threading.
Example 41.7. Custom Threading Implementation
package org.apache.camel.component.seda;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.camel.Consumer;
import org.apache.camel.Endpoint;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.ShutdownRunningTask;
import org.apache.camel.impl.LoggingExceptionHandler;
import org.apache.camel.impl.ServiceSupport;
import org.apache.camel.util.ServiceHelper;
...
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

/**
 * A Consumer for the SEDA component.
 *
 * @version $Revision: 922485 $
 */
public class SedaConsumer extends ServiceSupport implements Consumer, Runnable, ShutdownAware { 1

    private static final transient Log LOG = LogFactory.getLog(SedaConsumer.class);
    private SedaEndpoint endpoint;
    private Processor processor;
    private ExecutorService executor;
    ...

    public SedaConsumer(SedaEndpoint endpoint, Processor processor) {
        this.endpoint = endpoint;
        this.processor = processor;
    }
    ...

    public void run() { 2
        BlockingQueue<Exchange> queue = endpoint.getQueue();
        // Poll the queue and process exchanges ...
    }
    ...

    protected void doStart() throws Exception { 3
        int poolSize = endpoint.getConcurrentConsumers();
        executor = endpoint.getCamelContext().getExecutorServiceStrategy()
            .newFixedThreadPool(this, endpoint.getEndpointUri(), poolSize); 4
        for (int i = 0; i < poolSize; i++) { 5
            executor.execute(this);
        }
        endpoint.onStarted(this);
    }

    protected void doStop() throws Exception { 6
        endpoint.onStopped(this);
        // must shutdown executor on stop to avoid overhead of having them running
        endpoint.getCamelContext().getExecutorServiceStrategy().shutdownNow(executor); 7
        if (multicast != null) {
            ServiceHelper.stopServices(multicast);
        }
    }
    ...

    //----------
    // Implementation of ShutdownAware interface

    public boolean deferShutdown(ShutdownRunningTask shutdownRunningTask) {
        // deny stopping on shutdown as we want seda consumers to run in case some other queues
        // depend on this consumer to run, so it can complete its exchanges
        return true;
    }

    public int getPendingExchangesSize() {
        // number of pending messages on the queue
        return endpoint.getQueue().size();
    }
}
- 1
- The
SedaConsumerclass is implemented by extending the
org.apache.camel.impl.ServiceSupportclass and implementing the
Consumer,
Runnable, and
ShutdownAwareinterfaces.
- 2
- Implement the
Runnable.run()method to define what the consumer does while it is running in a thread. In this case, the consumer runs in a loop, polling the queue for new exchanges and then processing the exchanges in the latter part of the queue.
- 3
- The
doStart()method is inherited from
ServiceSupport. You override this method in order to define what the consumer does when it starts up.
- 4
- Instead of creating threads directly, you should create a thread pool using the
ExecutorServiceStrategyobject that is registered with the
CamelContext. This is important, because it enables Apache Camel to implement centralized management of threads and support such features as graceful shutdown. For details, see Section 2.8, “Threading Model”.
- 5
- Kick off the threads by calling the
ExecutorService.execute()method
poolSizetimes.
- 6
- The
doStop()method is inherited from
ServiceSupport. You override this method in order to define what the consumer does when it shuts down.
- 7
- Shut down the thread pool, which is represented by the
executorinstance.
|
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.3/html/apache_camel_development_guide/consumerintf
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Today we will look into a Java zip file example. We will also compress a folder and create a zip file using a Java program.
Table of Contents
Java ZIP
java.util.zip.ZipOutputStream can be used to compress a file into ZIP format. Since a zip file can contain multiple entries, ZipOutputStream uses
java.util.zip.ZipEntry to represent a zip file entry.
Java ZIP File
Creating a zip archive for a single file is very easy: we need to create a ZipOutputStream object from the FileOutputStream object of the destination file. Then we add a new ZipEntry to the ZipOutputStream and use FileInputStream to read the source file into the ZipOutputStream object. Once we are done writing, we need to close the ZipEntry and release all the resources.
Java Zip Folder
Zipping a directory is a little tricky: first we need to get the list of files as absolute paths, then process each one separately. We need to add a ZipEntry for each file and use FileInputStream to read the content of the source file into the ZipEntry corresponding to that file.
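As a side note (not part of the original example), on Java 8 or later the file list can also be collected with the java.nio streams API instead of the recursive populateFilesList() helper used in the full program below; a minimal sketch, assuming the same /Users/pankaj/tmp directory:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FileLister {
    // Walks the directory tree and returns the absolute path of every regular file.
    public static List<String> listFiles(String dir) throws IOException {
        try (Stream<Path> walk = Files.walk(Paths.get(dir))) {
            return walk.filter(Files::isRegularFile)
                       .map(Path::toString)
                       .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        listFiles("/Users/pankaj/tmp").forEach(System.out::println);
    }
}

Either approach yields the same list of absolute paths to feed into the zipping loop.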
Java Zip Example
Here is the java program showing how to zip a single file or zip a folder in java.
package com.journaldev.files;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipFiles {

    List<String> filesListInDir = new ArrayList<String>();

    public static void main(String[] args) {
        File file = new File("/Users/pankaj/sitemap.xml");
        String zipFileName = "/Users/pankaj/sitemap.zip";

        File dir = new File("/Users/pankaj/tmp");
        String zipDirName = "/Users/pankaj/tmp.zip";

        zipSingleFile(file, zipFileName);

        ZipFiles zipFiles = new ZipFiles();
        zipFiles.zipDirectory(dir, zipDirName);
    }

    /**
     * This method zips the directory
     * @param dir
     * @param zipDirName
     */
    private void zipDirectory(File dir, String zipDirName) {
        try {
            populateFilesList(dir);
            //now zip files one by one
            //create ZipOutputStream to write to the zip file
            FileOutputStream fos = new FileOutputStream(zipDirName);
            ZipOutputStream zos = new ZipOutputStream(fos);
            for(String filePath : filesListInDir){
                System.out.println("Zipping "+filePath);
                //for ZipEntry we need to keep only relative file path, so we used substring on absolute path
                ZipEntry ze = new ZipEntry(filePath.substring(dir.getAbsolutePath().length()+1, filePath.length()));
                zos.putNextEntry(ze);
                //read the file and write to ZipOutputStream
                FileInputStream fis = new FileInputStream(filePath);
                byte[] buffer = new byte[1024];
                int len;
                while ((len = fis.read(buffer)) > 0) {
                    zos.write(buffer, 0, len);
                }
                zos.closeEntry();
                fis.close();
            }
            zos.close();
            fos.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * This method populates all the files in a directory to a List
     * @param dir
     * @throws IOException
     */
    private void populateFilesList(File dir) throws IOException {
        File[] files = dir.listFiles();
        for(File file : files){
            if(file.isFile()) filesListInDir.add(file.getAbsolutePath());
            else populateFilesList(file);
        }
    }

    /**
     * This method compresses the single file to zip format
     * @param file
     * @param zipFileName
     */
    private static void zipSingleFile(File file, String zipFileName) {
        try {
            //create ZipOutputStream to write to the zip file
            FileOutputStream fos = new FileOutputStream(zipFileName);
            ZipOutputStream zos = new ZipOutputStream(fos);
            //add a new Zip Entry to the ZipOutputStream
            ZipEntry ze = new ZipEntry(file.getName());
            zos.putNextEntry(ze);
            //read the file and write to ZipOutputStream
            FileInputStream fis = new FileInputStream(file);
            byte[] buffer = new byte[1024];
            int len;
            while ((len = fis.read(buffer)) > 0) {
                zos.write(buffer, 0, len);
            }
            //Close the zip entry to write to zip file
            zos.closeEntry();
            //Close resources
            zos.close();
            fis.close();
            fos.close();
            System.out.println(file.getCanonicalPath()+" is zipped to "+zipFileName);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Output of the above java zip example program is:
/Users/pankaj/sitemap.xml is zipped to /Users/pankaj/sitemap.zip
Zipping /Users/pankaj/tmp/.DS_Store
Zipping /Users/pankaj/tmp/data/data.dat
Zipping /Users/pankaj/tmp/data/data.xml
Zipping /Users/pankaj/tmp/data/xmls/project.xml
Zipping /Users/pankaj/tmp/data/xmls/web.xml
Zipping /Users/pankaj/tmp/data.Xml
Zipping /Users/pankaj/tmp/DB.xml
Zipping /Users/pankaj/tmp/item.XML
Zipping /Users/pankaj/tmp/item.xsd
Zipping /Users/pankaj/tmp/ms/data.txt
Zipping /Users/pankaj/tmp/ms/project.doc
Notice that while logging files to zip in directory, I am printing absolute path. But while adding zip entry, I am using relative path from the directory so that when we unzip it, it will create the same directory structure. That’s all for Java zip example.
I would like to know, how to zip individual files that all end in the extension of .txt and then have them preserve their name.
Hi,
Are there anything which can allow to compress a file in different modes like fast compression or full compression.
Thanks,
Nitish Kashyap
how to preserve file permissions?
Thank You …..
Hiii… can you give me a source code of unzip a file with its hirarchy. suppose i have a zip file “DICOM.zip” and it contains a directory called “root”. now again root contains “subroot”. now “subroot” contains three folder “A”,”B”,”C”. and “A”contains some files like a1.img,a2.img,a3.img etc. and “B” and “C” also contains same type of files. So my quation is that, i want such a java program to unzip my “DICOM.zip” so that it can extract the all file with same hirarchy as source hirarchy of the directory. Thanks in advance.
ZipFiles zipFiles = new ZipFiles();
how i can handle this error
Hi, how can i save many folders into the zip file?
I discovered your post on ZipOutputStream after I had finished writing my own code to create an epub (which is a zipfile by another name) from its constituent files. My code is more or less equivalent to yours, and it seems to run clean, but when I look at the epub zipfile after running the code, the file is there but it is length=0. Since there are no exceptions, I have been searching the web for posts like yours. I have run out of ideas. What might I have missed?
Can you try to run in debug mode and check if file is getting written or not.
Hello pankaj can you please help me.? i mantioned my problem ubove. Thanks in advanced .
Hi! I’ve been following your weblog for some time now and finally got the bravery to go ahead and give you a shout out from Lubbock Tx! Just wanted to say keep up the great job!
|
https://www.journaldev.com/957/java-zip-file-folder-example
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
This post is written by Kinnar Sen, Senior Solutions Architect, EC2 Spot
Apache Spark is an open-source, distributed processing system used for big data workloads. It provides API operations to perform multiple tasks such as streaming, extract transform load (ETL), query, machine learning (ML), and graph processing. Spark supports four different types of cluster managers (Spark standalone, Apache Mesos, Hadoop YARN, and Kubernetes), which are responsible for scheduling and allocation of resources in the cluster. Spark can run with native Kubernetes support since 2018 (Spark 2.3). AWS customers that have already chosen Kubernetes as their container orchestration tool can also choose to run Spark applications in Kubernetes, increasing the effectiveness of their operations and compute resources.
In this post, I illustrate the deployment of scalable, resilient, and cost optimized Spark application using Kubernetes via Amazon Elastic Kubernetes Service (Amazon EKS) and Amazon EC2 Spot Instances. Learn how to save money on big data workloads by implementing this solution.
Overview
Amazon EC2 Spot Instances
Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS Cloud. Spot Instances are available at up to a 90% discount compared to On-Demand Instance prices. Capacity pools are a group of EC2 instances that belong to particular instance family, size, and Availability Zone (AZ). If EC2 needs capacity back for On-Demand Instance usage, Spot Instances can be interrupted by EC2 with a two-minute notification. There are many graceful ways to handle the interruption to ensure that the application is well architected for resilience and fault tolerance. This can be automated via the application and/or infrastructure deployments. Spot Instances are ideal for stateless, fault tolerant, loosely coupled and flexible workloads that can handle interruptions.
Amazon Elastic Kubernetes Service
Amazon EKS is a fully managed Kubernetes service that makes it easy for you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane. It provides a highly available and scalable managed control plane. It also provides managed worker nodes, which let you create, update, or terminate worker nodes for your cluster with a single command. It is a great choice for deploying flexible and fault tolerant containerized applications. Amazon EKS supports creating and managing Amazon EC2 Spot Instances using Amazon EKS-managed node groups following Spot best practices. This enables you to take advantage of the steep savings and scale that Spot Instances provide for interruptible workloads running in your Kubernetes cluster. Using EKS-managed node groups with Spot Instances requires less operational effort compared to using self-managed nodes. In addition to launching Spot Instances in managed node groups, it is possible to specify multiple instance types in EKS managed node groups. You can find more in this blog.
Apache Spark and Kubernetes
When a spark application is submitted to the Kubernetes cluster the following happens:
- A Spark driver is created.
- The driver runs within a pod.
- The Spark driver then requests executors, which are scheduled to run within pods. The executors are managed by the driver.
- The application is launched and once it completes, the executor pods are cleaned up. The driver pod persists the logs and remains in a completed state until the pod is cleared by garbage collection or manually removed. The driver in a completed stage does not consume any memory or compute resources.
When a spark application runs on clusters managed by Kubernetes, the native Kubernetes scheduler is used. It is possible to schedule the driver/executor pods on a subset of available nodes. The applications can be launched either by a vanilla ‘spark submit’, a workflow orchestrator like Apache Airflow or the spark operator. I use vanilla ‘spark submit’ in this blog. Amazon EMR is also able to schedule Spark applications on EKS clusters as described in this launch blog, but Amazon EMR on EKS is out of scope for this post.
Cost optimization
For any organization running big data workloads there are three key requirements: scalability, performance, and low cost. As the size of data increases, there is demand for more compute capacity and the total cost of ownership increases. It is critical to optimize the cost of big data applications. Big Data frameworks (in this case, Spark) are distributed to manage and process high volumes of data. These frameworks are designed for failure, can run on machines with different configurations, and are inherently resilient and flexible.
If Spark deploys on Kubernetes, the executor pods can be scheduled on EC2 Spot Instances and driver pods on On-Demand Instances. This reduces the overall cost of deployment – Spot Instances can save up to 90% over On-Demand Instance prices. This also enables faster results by scaling out executors running on Spot Instances. Spot Instances, by design, can be interrupted when EC2 needs the capacity back. If a driver pod is running on a Spot Instance, which is interrupted then the application fails and the application must be re-submitted. To avoid this situation, the driver pod can be scheduled on On-Demand Instances only. This adds a layer of resiliency to the Spark application running on Kubernetes. To cost optimize the deployment, all the executor pods are scheduled on Spot Instances as that’s where the bulk of compute happens. Spark’s inherent resiliency has the driver launch new executors to replace the ones that fail due to Spot interruptions.
There are a couple of key points to note here.
- The idea is to start with a minimum number of nodes for both On-Demand and Spot Instances (one each) and then auto-scale using Cluster Autoscaler and EC2 Auto Scaling. Cluster Autoscaler for AWS provides integration with Auto Scaling groups. If there are not sufficient resources, the driver and executor pods go into a pending state. The Cluster Autoscaler detects pods in the pending state and scales worker nodes within the identified Auto Scaling group in the cluster using EC2 Auto Scaling.
- The scaling for On-Demand and Spot nodes is exclusive of one another. So, if multiple applications are launched the driver and executor pods can be scheduled in different node groups independently per the resource requirements. This helps reduce job failures due to lack of resources for the driver, thus adding to the overall resiliency of the system.
- Using EKS Managed node groups
- This requires significantly less operational effort compared to using self-managed node groups and enables:
- Auto enforcement of Spot best practices like Capacity Optimized allocation strategy, Capacity Rebalancing and use multiple instances types.
- Proactive replacement of Spot nodes using rebalance notifications.
- Managed draining of Spot nodes via re-balance recommendations.
- The nodes are auto-labeled so that the pods can be scheduled with NodeAffinity.
- eks.amazonaws.com/capacityType: SPOT
- eks.amazonaws.com/capacityType: ON_DEMAND
Now that you understand the products and best practices of used in this tutorial, let’s get started.
Tutorial: running Spark in EKS managed node groups with Spot Instances
In this tutorial, I review steps, which help you launch cost optimized and resilient Spark jobs inside Kubernetes clusters running on EKS. I launch a word-count application counting the words from an Amazon Customer Review dataset and write the output to an Amazon S3 folder. To run the Spark workload on Kubernetes, make sure you have eksctl and kubectl installed on your computer or on an AWS Cloud9 environment. You can run this by using an AWS IAM user or role that has the AdministratorAccess policy attached to it, or check the minimum required permissions for using eksctl. The spot node groups in the Amazon EKS cluster can be launched both in a managed or a self-managed way, in this post I use the former. The config files for this tutorial can be found here. The job is finally launched in cluster mode.
Create Amazon S3 Access Policy
First, I must create an Amazon S3 access policy to allow the Spark application to read/write from Amazon S3. Amazon S3 Access is provisioned by attaching the policy by ARN to the node groups. This associates Amazon S3 access to the NodeInstanceRole and, hence, the node groups then have access to Amazon S3. Download the Amazon S3 policy file from here and modify the <<output folder>> to an Amazon S3 bucket you created. Run the following to create the policy. Note the ARN.
aws iam create-policy --policy-name spark-s3-policy --policy-document
Cluster and node groups deployment
Create an EKS cluster using the following command:
eksctl create cluster --name=sparkonk8 --node-private-networking --without-nodegroup --asg-access --region=<<AWS Region>>
The cluster takes approximately 15 minutes to launch.
Create the nodegroup using the nodeGroup config file. Replace the <<Policy ARN>> string using the ARN string from the previous step.
eksctl create nodegroup -f managedNodeGroups.yml
Scheduling driver/executor pods
The driver and executor pods can be assigned to nodes using affinity. PodTemplates can be used to configure the detail, which is not supported by Spark launch configuration by default. This feature is available from Spark 3.0.0, requiredDuringScheduling node affinity is used to schedule the driver and executor jobs. Sample podTemplates have been uploaded here.
Launching a Spark application
Create a service account. The spark driver pod uses the service account to create and watch executor pods using Kubernetes API server.
kubectl create serviceaccount spark
kubectl create clusterrolebinding spark-role --clusterrole='edit' --serviceaccount=default:spark --namespace=default
Download the Cluster Autoscaler and edit it to add the cluster-name.
curl -LO
Install the Cluster AutoScaler using the following command:
kubectl apply -f cluster-autoscaler-autodiscover.yaml
Get the details of Kubernetes master to get the head URL.
kubectl cluster-info
Use the following instructions to build the docker image.
Download the application file (script.py) from here and upload into the Amazon S3 bucket created.
Download the pod template files from here. Submit the application.
bin/spark-submit \
--master k8s://<<MASTER URL>> \
--deploy-mode cluster \
--name 'Job Name' \
--conf spark.eventLog.dir=s3a:// <<S3 BUCKET>>/logs \
--conf spark.eventLog.enabled=true \
--conf spark.history.fs.inProgressOptimization.enabled=true \
--conf spark.history.fs.update.interval=5s \
--conf spark.kubernetes.container.image=<<ECR Spark Docker Image>> \
--conf spark.kubernetes.container.image.pullPolicy=IfNotPresent \
--conf spark.kubernetes.driver.podTemplateFile='../driver_pod_template.yml' \
--conf spark.kubernetes.executor.podTemplateFile='../executor_pod_template.yml' \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.dynamicAllocation.enabled=true \
--conf spark.dynamicAllocation.shuffleTracking.enabled=true \
--conf spark.dynamicAllocation.maxExecutors=100 \
--conf spark.dynamicAllocation.executorAllocationRatio=0.33 \
--conf spark.dynamicAllocation.sustainedSchedulerBacklogTimeout=30 \
--conf spark.dynamicAllocation.executorIdleTimeout=60s \
--conf spark.driver.memory=8g \
--conf spark.kubernetes.driver.request.cores=2 \
--conf spark.kubernetes.driver.limit.cores=4 \
--conf spark.executor.memory=8g \
--conf spark.kubernetes.executor.request.cores=2 \
--conf spark.kubernetes.executor.limit.cores=4 \
--conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
--conf spark.hadoop.fs.s3a.connection.ssl.enabled=false \
--conf spark.hadoop.fs.s3a.fast.upload=true \
s3a://<<S3 BUCKET>>/script.py \
s3a://<<S3 BUCKET>>/output
A couple of key points to note here
- podTemplateFile is used here, which enables scheduling of the driver pods to On-Demand Instances and executor pods to Spot Instances.
- Spark provides a mechanism to allocate resources dynamically based on workloads. In the latest release of Spark (3.0.0), dynamicAllocation can be used with the Kubernetes cluster manager. Executors that do not store active shuffle files can be removed to free up resources. DynamicAllocation works well in tandem with Cluster Autoscaler for resource allocation and optimizes resources for jobs. We are using dynamicAllocation here to enable optimized resource sharing.
- The application file and output are both in Amazon S3.
- Spark Event logs are redirected to Amazon S3. Spark on Kubernetes creates local temporary files for logs and removes them once the application completes. The logs are redirected to Amazon S3 and Spark History Server can be used to analyze the logs. Note, you can create more instrumentation using tools like Prometheus and Grafana to monitor and manage the cluster.
Observations
EC2 Spot Interruptions
The following diagram and log screenshot details from Spark History server showcases the behavior of a Spark application in case of an EC2 Spot interruption.
Four Spark applications were launched in parallel in a cluster and one of the Spot nodes was interrupted. A couple of executor pods were terminated in three of the four applications, but due to the resilient nature of Spark, new executors were launched and the applications finished at almost the same time.
The Spark Driver identified the shut down executors, which handled the shuffle files and relaunched the tasks running on those executors.
Dynamic Allocation
Dynamic Allocation works with the caveat that it is an experimental feature.
Cost Optimization
Cost Optimization is achieved in several different ways from this tutorial.
- Use of 100% Spot Instances for the Spark executors
- Use of dynamicAllocation along with Cluster Autoscaler makes optimized use of resources and hence saves cost
- Starting with one driver node and one executor node and then scaling up on demand reduces the waste of a continuously running cluster
Cluster Autoscaling
Cluster Autoscaling is triggered, as designed, when there are pending (Spark executor) pods.
The Cluster Autoscaler logs can be fetched by:
kubectl logs -f deployment/cluster-autoscaler -n kube-system --tail=10
Cleanup
If you are trying out the tutorial, run the following steps to make sure that you don’t encounter unwanted costs.
Delete the EKS cluster and the nodegroups with the following command:
eksctl delete cluster --name sparkonk8
Delete the Amazon S3 Access Policy with the following command:
aws iam delete-policy --policy-arn <<POLICY ARN>>
Delete the Amazon S3 Output Bucket with the following command:
aws s3 rb --force s3://<<S3_BUCKET>>
Conclusion
In this blog, I demonstrated how you can run Spark workloads on a Kubernetes Cluster using Spot Instances, achieving scalability, resilience, and cost optimization. To cost optimize your Spark based big data workloads, consider running spark application using Kubernetes and EC2 Spot Instances.
|
https://awsfeed.com/whats-new/compute/running-cost-optimized-spark-workloads-on-kubernetes-using-ec2-spot-instances
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Smoother Data Science through Ergonomic Aesthetics
Jupyter Notebook Tweaks & Shortcuts
Anyone who says aesthetics don’t impact work is deceiving themselves.
Look at Apple: They built a dominant tech empire by applying artistic minimalism to every aspect of their products, from physical design to user experience.
What word do we associate with iProducts? “Smooth”.
Curved edges: I can’t find a single sharp-cornered shape anywhere on the MacBook I’m writing this on.
Clouded texture: Not blazing shiny like polished lacquer. Faded yet clean.
The result of this is an experience I’d call “sleek”. The software defaults to flowing transitions rather than jarring jumps. Much effort is placed into ‘unifying’ cloud systems for storage and account-based sharing of content across devices.
Plenty of tech people use Mac environments instead of Windows, although there’s no shortage of arguments for why a Windows build is more powerful, customizable, price-efficient, etc. Why?
Ergonomics. If your tools make work feel better, you’ll do more work.
Jupyter
This is the reason I do most of my data science work in Jupyter. Notebooks allow you to write and run code in separate cells within the notebook environment.
This allows you to explore every aspect of your program in detail, and saves an unimaginable amount of time debugging (there’s even a nice debugger for JupyterLab). Cell output supports complex visual displays, as well as interactive graphs & input.
Notebooks are incredibly popular for statistical analysis & machine learning alike, for a simple reason: modular execution lets you learn faster.
If you want to build something you’ve never done, you won’t fully understand all of the code you find in tutorials or books. So write a few lines, run the cell and see what happens. Change a few more things, run again. Keep running.
That’s the experimental process, more or less. If you feel like you’re in over your head, exploring strange libraries and uncertain of what the next code block will do — you’re learning fast.
But how can we make the experience even smoother? That’s something I’ve occasionally asked myself since I started coding a year ago.
I’ve tested various tricks to improve my work experience, and I’ve compiled my favorite addons, keyboard shortcuts & tips here — because I really care that much about ergonomics.
Color Coding
The most important addon I use is Kyle Dunovan’s Jupyter Themes. Change that bright white background to something easier on the eyes (and download a red light filtering app too, since we spend so much time staring at screens late at night).
Find a code color scheme that really speaks to you. What do you find more readable? What (no, really) do you find more relaxing to look at for hours?
Here’s part of a cell from a bitcoin AI backtesting bot I’m working on. You’ll notice that I’m one of those people who uses dark mode in every application that has it enabled:
I care so much about color coding. Your primate ancestors spent millions of years evolving complex retinas, diverse rods and cones because seeing colourful fruit or noticing striped orange tigers was quite important.
Color matters. Seeing red enhances simple detail-oriented performance, while being surrounded by blue made participants perform better at more complex and creative problems.
Compared to someone who programmed COBOL on a black & white screen, I’m quite spoiled — I know this, but I seek harmonious coding environments all the same.
Table of Contents
When you start to fill up a notebook, it can be a hassle to navigate from section to section. NotebookExtensions, specifically the table of contents, has saved me literal hours of scrolling through notes trying to see where I left the few lines I’m looking for.
Simply prefix some markdown text with ‘#’ to write a header (or ‘##’ for sub-headers) and the ToC cell at the top does the rest. It automatically hyperlinks to the relevant section, so you can simply click the header you want to get back to. It even comes with an adjustable side ToC display, so you don’t have to scroll back to the top to realign.
Keyboard Shortcuts
The best way to get your workflow to flow is more mastery over your tools. The small things add up — if you learn 5 tricks and write 10% faster, you’ll have more time to google error codes & read documentation.
A full set of shortcuts can be found here, but I’ll list my favorites in no specific order.
Save time copying & pasting: Shift+M to merge the selected cell with the cell below. Ctrl+Shift+Minus to split a cell at your current position.
Tab-complete is fantastic for exploring new objects. Instantiate a new instance, then type
object_name. and hit Tab to bring up a list of methods & properties built into that object.
Text navigation and highlighting:
|from htm.bindings.algorithms import SpatialPooler
Let’s say your position is at the
| right before ‘from’. Hold down ALT & hit the right arrow key to jump your position to the end of the next word.
from| htm.bindings.algorithms import SpatialPooler
Command (⌘) + Right will take you to the end of your current line.
from htm.bindings.algorithms import SpatialPooler|
And here’s the best trick: hold down Shift while doing any of the above, and you’ll highlight whatever section you move through.
This is subtle but quite useful when you want to highlight a section of what you’ve just wrote to add parentheses (since Jupyter encloses the highlighted section in parentheses when you hit left/open parentheses).
You could also just hold Shift and hit the arrow keys one by one for more fine-grained highlight control, if double / triple clicking on text for word / line highlighting isn’t cutting it.
Keep it simple
The goal of all of this is to make work smoother; to reduce the amount of times your train of thought is interrupted by the necessity of fiddling with a finicky piece of your software environment.
Minimizing interruption leads to a more streamlined mind-state, in my experience. Think less about the mechanics of entering and executing your code and more about the insight behind it.
The singularity that this approaches, of course, is some kind of Brain-Computer-Interface where you control the mouse & keyboard by thinking. Whether or not we’d want to be fully plugged in is another question entirely, but until we can ask that question, you’ll see results by making work easier on yourself.
|
https://mark-s-cleverley.medium.com/smoother-data-science-through-ergonomic-aesthetics-bc7c134ddb6b?source=post_internal_links---------4----------------------------
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Why does the following code not print "Hello, World!"?
using System;
namespace Test
{
public class Program
{
public static void Main(string[] args)
{
var a = new A();
var b = new B(a);
b.Evnt += val => Console.WriteLine(val);
a.Do();
}
}
public class A
{
public void Do()
{
Evnt("Hello, World!");
}
public event Action<string> Evnt = v => {};
}
public class B
{
public B(A a)
{
a.Evnt += Evnt; // this does not work
}
public event Action<string> Evnt = v => {};
}
}
a.Evnt += Evnt;
a.Evnt += v => Evnt(v);
It fails for the same reason this code prints "2":
int x = 2; int y = x; Action a = () => Console.WriteLine(y); x = 3; a();
Here, your event handler is the value of
a.Evnt at the time of assignment -- whatever
a has for
Evnt at the time when you pass
a into
B's constructor, that's what
B gets for an event handler.
public B(A a)
{
    a.Evnt += Evnt; // this does not work
}
It actually works fine -- it does what you told it to. It's just that you thought you were telling it to do something different.
Here, you have a handler that evaluates
Evnt itself, at whatever time the handler is executed. So with this one, if
a.Evnt is changed in the mean time, you'll see the output from the new value of
a.Evnt.
public B(A a) { a.Evnt += v => Evnt(v); }
Rewrite B like this, and you'll get a clearer picture:
public class B
{
    public B(A a)
    {
        a.Evnt += Evnt; // this does not work
    }

    public event Action<string> Evnt = v => { Console.WriteLine("Original"); };
}
The title of your question isn't well phrased; you're not assigning an event, you're assigning an event handler. Also, "this does not work" is not a useful way to describe a problem. There is an infinity of ways for things not to work. My first assumption was that your code wouldn't even compile. Next time, please describe the behavior you expected to see, and what you actually saw instead. Say whether it failed to compile, or threw an exception at runtime, or just quietly did something you didn't anticipate. If there's an error message or an exception message, provide the text of the message.
|
https://codedump.io/share/20P3Jku4QZSY/1/why-c-events-cannot-be-directly-subscribed-to-other-events
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Finds proper and interior intersections in a set of SegmentStrings, and adds them as nodes. More...
#include <IntersectionFinderAdder.h>
Finds proper and interior intersections in a set of SegmentStrings, and adds them as nodes.
Creates an intersection finder which finds all proper intersections and stores them in the provided Coordinate array
Always process all intersections
Reimplemented from geos::noding::SegmentIntersector.
This method is called by clients of the SegmentIntersector class to process intersections for two segments of the SegmentStrings being intersected. Note that some clients (such as MonotoneChains) may optimize away this call for segment pairs which they have determined do not intersect (e.g., by a disjoint envelope test).
Implements geos::noding::SegmentIntersector.
|
https://geos.osgeo.org/doxygen/classgeos_1_1noding_1_1IntersectionFinderAdder.html
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
How can I use Active Directory commands in a Powershell script?
They are available in Server 2008 R2 as an installable feature. I believe Server 2008 R2 is the only platform that supports that.
- No, that's not true. I just want to import the AD module within my freaking script :) – Jonathan Rioux Aug 16 '10 at 17:15
- 1I am rather certain this is correct. You can look into the Quest add-ins that add similar features but the true get-aduser is 08R2 only right now. technet.microsoft.com/en-us/magazine/ee914610.aspx andersrask.spoint.me/2010/07/28/… – David Remy Aug 16 '10 at 18:26
- If the module is available, you can import it into your script with
import-module $modulename– SysAdmin1138 Aug 16 '10 at 20:55
- One of the complications with the Get-ADUser cmdlet is that it depends on Active Directory Web Services being available. My guess is that many domains will not have this available (I know ours does not). – Shannon Wagner Feb 17 '12 at 15:38
I would highly recommend installing Quest Software's free set of Active Directory cmdlets.
With this, you get access to commands like Get-QADUser and Get-QADObject which probably provide all the functionality you will need, without the dependency on Active Directory Web Services that Get-ADUser has.
Another option would be to use PowerShell's ability to instantiate .NET objects and use the DirectoryServices namespace of the .NET Framework. But unless you need something that is not available via the Quest tools, using .NET is probably more complicated than you need.
|
https://superuser.com/questions/176444/using-active-directory-commands-in-powershell
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Deploy OneAgent on Google Kubernetes Engine clusters
Google Kubernetes Engine (GKE) is a managed environment for operating Kubernetes clusters and running containerized workloads at scale.
For full-stack monitoring of Kubernetes clusters, you need to roll-out Dynatrace OneAgent on each cluster node using OneAgent Operator for Kubernetes 1.9 or higher.
Note:
While full-stack monitoring of Ubuntu-based GKE clusters is fully supported, monitoring GKE clusters running Container-Optimized OS is in EAP at the moment. This means the following:
- the solution is still in development.
- we plan to make it available as BETA and GA within the near-to-mid-term future, but a specific date isn't defined yet.
- deploying the solution in a production environment isn't recommended.
- it is not supported with official Dynatrace SLAs.
Please review the limitations section below.
Prepare Dynatrace tokens for OneAgent Operator
OneAgent Operator requires two different tokens for interacting with Dynatrace servers. These two tokens are made available to OneAgent Operator by means of a Kubernetes secret as explained at a later step.
- Get an API token for the Dynatrace API. This token is later referenced as
API_TOKEN.
- Get a Platform-as-a-Service token. This token is later referenced as
PAAS_TOKEN.
Install OneAgent Operator
Create a role binding to grant your GKE user a cluster-admin before you can create the role necessary for the OneAgent Operator in later steps.
$ kubectl create clusterrolebinding cluster-admin-binding \ --clusterrole=cluster-admin --user=$(gcloud config get-value account)
Create the necessary objects for OneAgent Operator. OneAgent Operator acts on its separate namespace
dynatrace. It holds the operator deployment and all dependent objects like permissions, custom resources and the corresponding DaemonSet. You can also observe the logs of OneAgent Operator.
Alternatively, you can use the snippet from the GitHub repository.
$ curl -o cr.yaml
Adapt the values of the custom resource as indicated in the following table.
Note:
If you want to participate in the Early Access Program and roll out Dynatrace OneAgent to GKE clusters running Container-Optimized OS, please be aware of the limitations and risks as explained above. You'll need to add the following entry to the
env section.
Limitations for Container Optimized OS based GKE clusters
- Disks aren't detected properly and therefore the disk metrics aren't collected properly.
- Only local Docker volume driver has been tested and is supported.
|
https://www.dynatrace.com/support/help/cloud-platforms/google-cloud-platform/google-kubernetes-engine/deploy-oneagent-on-google-kubernetes-engine-clusters/
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Getting Started with JSON in C# Using Json.NET
Course info
Section Introduction Transcripts
Serialization Fundamentals
Hello and welcome to the next module of this Json. NET course, Serialization Fundamentals. Serialization and deserialization is the main functionality of Json. NET. Let's get into the details and understand why it is so important and how it can help you. Serialization and deserialization involves taking a data structure or object and converting it back and forth between JSON text and. NET objects. A. NET object when serialized can be stored as a stream of bytes of file or in memory and later it can be used to recreate the original object. You have to be careful though. In some cases there are private implementation details that are not available when deserializing or serializing, so you need to review the recreated object to determine if there is any information missing. In the serialization and deserialization process, you map property names and copy their values using the main JSON serializer class with the support of JsonReader and JsonWriter. I will show you important considerations to take like working with dates, collections, error handling, along with a few useful tips and tricks. In this module we'll work closely with the next module where I will show you available settings and attributes that help you control the serialization process.
Settings & Attributes
Hello, and welcome to this next module of JSON and Json. NET course. In this module we will learn how to control and customize the serialization process via settings and attributes. So what is a setting? A setting is a user preference that is supplied during the conversion process. It can be specified as a property on the JsonSerializer class or using the JsonSerializer settings on JsonConvert. And what is an attribute? An attribute is a declarative tag that is supplied on classes, properties, and more, that is taking into account during the serialization and deserialization process.
Custom Serialization
Hello, and welcome to the next module of this Json. NET course, Custom Serialization. Json. NET is very powerful with some key features that include conditional serialization, serialization callbacks, debugging, and more. In this module I will teach you how to customize your serialization process with Json. NET along with a few more pieces of information that will prove invaluable and will save you time while coding. Given that we already covered serialization and deserialization, the next step involves taking it a notch further and explaining the more advanced topics around handling JSON. I will be covering conditional serialization. It may be the case that you want to control the serialization process based on specific conditions. I will show you how. Custom JsonConverter. It may be possible that you want to extend or customize the serialization and deserialization process. Let's learn how we can create our own custom JsonConverter to get exactly the results that we need. Serialization callbacks in general are used to raise events before and after the serialization and deserialization process. I will teach you what events are available and when during the process they occur. ITraceWriter. Debugging the serialization is not a common scenario, but being able to do it using ITraceWriter for logging and debugging could be important. Learn the process in this demo to be prepared.
Performance Tips
Hello, and welcome to this next module of this Json.NET course, Performance Tips. In a lot of cases, performance is one of the most critical features of an application. A badly performing application can affect sales, delivery, and in general it can be problematic. So what if your application works with large amounts of JSON or requires quick responses from code that relies on JSON serialization? Json.NET is the way to go. When you compare Json.NET with other serializers, Json.NET is faster. In this module I will demonstrate tips on how to improve performance when using Json.NET, but more than comparing with other serializers, I will teach you multiple tips to make Json.NET serialization even faster, including reading and writing JSON directly instead of serializing, working with fragments, populating objects, controlling what gets serialized, and lowering your memory utilization. We will get started with manual serialization. What's this? When you use the JsonSerializer class, serialization uses reflection which is slower than writing and reading JSON directly. We will then go to fragments where I will show you how to work only with a subset of the JSON objects to make reading much faster. With populate objects I will show you how to populate specific properties of a large JSON object instead of working with the full JSON object, which is similar to merge array handling where I will show you how to control when two JSON objects are merged. Then we learn how to improve serialization speed by using attributes to control what is serialized and finally, we will learn how to optimize memory usage when working with JSON. And most importantly, in several of the demos we will be using a stopwatch to capture timings as proof. It is very easy to use. The code is only a few lines long.
LINQ to JSON
Hello, and welcome to the next module of this Json.NET course, LINQ to JSON. LINQ to JSON is an API used to work with JSON objects. LINQ in general has been available for many years, but just in case, I will take a step back and start by talking about LINQ. So what is LINQ? LINQ stands for language integrated query. It extends powerful query capabilities to C# and VB.NET and also includes functions. It offers standard, easily learned patterns for querying and updating data in potentially any store. Among those we have LINQ to SQL, LINQ to XML, LINQ to ADO.NET, LINQ to objects, and of course, LINQ to JSON. It started around Visual Studio 2008 and I'll show you how we can use it with Json.NET. LINQ to JSON can be found under the Newtonsoft.Json.Linq namespace. You can use it with JTokenReader or JTokenWriter for fast, non-cached, forward only reading and writing of JSON in a LINQ style. It is also possible to parse JSON using JObject.Parse. The parse method is not only available in the JObject class; it is available under many of the LINQ to JSON classes. If you want to create JSON there are many ways. You can do it in an imperative, in a declarative, or in a from-object way. Also you can query JSON in a very simple notation or you can use JSON path with SelectToken. Let's learn what can be done with this namespace and how it can help you improve your code.
JSON & XML
Hello, and welcome to the next module of this Json.NET course, JSON and XML. JSON is a data interchange format which is easy for humans to read and write. It is easy for machines to parse and generate. On the other hand, XML is a markup language that defines a set of rules for encoding documents in a format which is also readable by both humans and machines. They both can serve the same purpose; however, converting between them can be a little bit tricky and there are several different considerations to take. In this module I will teach you how to convert between JSON and XML using JsonConvert with an understanding of possible scenarios. Let's talk about JSON and XML. XML is a markup language. It can be used as a data interchange format and it is both easy for humans to understand and machines to process, and JSON is a text based data interchange format. It's made up of key value pairs, arrays, and objects. By now you should be very familiar with JSON. It's also easy for humans to understand and machines to process. XML can be used in multiple different ways, but JSON is specifically used as a data interchange format which means that you don't have to guess about the structure of the data that you're receiving. It is very straightforward, so when you're converting data between XML and JSON, there is no standard way of converting it. In many cases the conversion can be simple, but in others, that round trip conversion can be hard.
|
https://www.pluralsight.com/courses/json-csharp-jsondotnet-getting-started
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
A TopologyLocation is the labelling of a GraphComponent's topological relationship to a single Geometry. More...
#include <TopologyLocation.h>
A TopologyLocation is the labelling of a GraphComponent's topological relationship to a single Geometry.
If the parent component is an area edge, each side and the edge itself have a topological location. These locations are named ON, LEFT, and RIGHT.
If the parent component is a line edge or node, there is a single topological relationship attribute, ON.
The possible values of a topological location are {Location::UNDEF, Location::EXTERIOR, Location::BOUNDARY, Location::INTERIOR}
The labelling is stored in an array location[j] where j has the values ON, LEFT, RIGHT
Constructs a TopologyLocation specifying how points on, to the left of, and to the right of some GraphComponent relate to some Geometry.
Possible values for the parameters are Location::UNDEF, Location::EXTERIOR, Location::BOUNDARY, and Location::INTERIOR.
|
https://geos.osgeo.org/doxygen/classgeos_1_1geomgraph_1_1TopologyLocation.html
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
A dynamic list of the vertices in a constructed offset curve. More...
#include <OffsetSegmentString.h>
A dynamic list of the vertices in a constructed offset curve.
Automatically removes close vertices which are closer than a given tolerance.
Check that points are a ring.
add the startpoint again if they are not
References geos::geom::CoordinateArraySequence::add(), geos::geom::CoordinateSequence::back(), geos::geom::Coordinate::equals(), and geos::geom::CoordinateSequence::front().
Referenced by getCoordinates().
Get coordinates by taking ownership of them.
After this call, the coordinates reference in this object are dropped. Calling twice will segfault...
FIXME: refactor memory management of this
References closeRing().
|
https://geos.osgeo.org/doxygen/classgeos_1_1operation_1_1buffer_1_1OffsetSegmentString.html
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Read Exif metadata from tiff and jpeg files.
Project description
Easy to use Python module to extract Exif metadata from tiff and jpeg files.
Originally written by Gene Cash & Thierry Bousch.
Installation
PyPI
The recommended process is to install the PyPI package, as it allows easily staying up to date:
$ pip install exifread
See the pip documentation for more info.
Archive
Download an archive from the project’s releases page.
Extract and enjoy.
Compatibility
EXIF.py is tested on the following Python versions:
- 2.6
- 2.7
- 3.2
- 3.3
- 3.4
Usage
Command line
Some examples:
$ EXIF.py image1.jpg
$ EXIF.py image1.jpg image2.tiff
$ find ~/Pictures -name "*.jpg" -o -name "*.tiff" | xargs EXIF.py
Show command line options:
$ EXIF.py
Python Script
import exifread

# Open image file for reading (binary mode)
f = open(path_name, 'rb')

# Return Exif tags
tags = exifread.process_file(f)
Note: To use this library in your project as a Git submodule, you should:
from <submodule_folder> import exifread
Returned tags will be a dictionary mapping names of Exif tags to their values in the file named by path_name. You can process the tags as you wish. In particular, you can iterate through all the tags with:
for tag in tags.keys():
    if tag not in ('JPEGThumbnail', 'TIFFThumbnail', 'Filename', 'EXIF MakerNote'):
        print("Key: %s, value %s" % (tag, tags[tag]))
An if statement is used to avoid printing out a few of the tags that tend to be long or boring.
The tags dictionary will include keys for all of the usual Exif tags, and will also include keys for Makernotes used by some cameras, for which we have a good specification.
Note that the dictionary keys are the IFD name followed by the tag name. For example:
'EXIF DateTimeOriginal', 'Image Orientation', 'MakerNote FocusMode'
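For instance, a specific tag can be looked up by its combined IFD and tag name once the dictionary has been built (the file name here is only a placeholder):

import exifread

# Parse the Exif data of one image (example file name)
with open('photo.jpg', 'rb') as f:
    tags = exifread.process_file(f)

# Look up individual tags by their "<IFD name> <tag name>" key
date_taken = tags.get('EXIF DateTimeOriginal')
orientation = tags.get('Image Orientation')

if date_taken is not None:
    print("Taken on: %s" % date_taken)
if orientation is not None:
    print("Orientation: %s" % orientation)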
Tag Descriptions
Tags are divided into these main categories:
- Image: information related to the main image (IFD0 of the Exif data).
- Thumbnail: information related to the thumbnail image, if present (IFD1 of the Exif data).
- EXIF: Exif information (sub-IFD).
- GPS: GPS information (sub-IFD).
- Interoperability: Interoperability information (sub-IFD).
- MakerNote: Manufacturer specific information. There are no official published references for these tags.
Processing Options
These options can be used both in command line mode and within a script.
Faster Processing
Don’t process makernote tags, don’t extract the thumbnail image (if any).
Pass the -q or --quick command line arguments, or as:
tags = exifread.process_file(f, details=False)
Stop at a Given Tag
To stop processing the file after a specified tag is retrieved.
Pass the -t TAG or --stop-tag TAG argument, or as:
tags = exifread.process_file(f, stop_tag='TAG')
where TAG is a valid tag name, ex 'DateTimeOriginal'.
The two above options are useful to speed up processing of large numbers of files.
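As a rough illustration, the two options can be combined when scanning a folder of images (the folder path and tag name are only examples):

import glob
import exifread

# Scan a folder of JPEGs, reading only up to the date tag and
# skipping makernotes/thumbnails for speed.
for path in glob.glob('/path/to/pictures/*.jpg'):  # example path
    with open(path, 'rb') as f:
        tags = exifread.process_file(f, details=False,
                                     stop_tag='DateTimeOriginal')
    date_taken = tags.get('EXIF DateTimeOriginal')
    print("%s: %s" % (path, date_taken))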
Strict Processing
Return an error on invalid tags instead of silently ignoring.
Pass the -s or --strict argument, or as:
tags = exifread.process_file(f, strict=True)
|
https://pypi.org/project/ExifRead/
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
jtrader is the root command, it will then have sub-commands for each of the brokers
$ jtrader
Usage: jtrader [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  zerodha  Command line utilities for managin Zerodha account Get started by...
Overview of utilties available with zerodha -
$ jtrader zerodha
Usage: jtrader zerodha [OPTIONS] COMMAND [ARGS]...

  Command line utilities for managin Zerodha account

  Get started by creating a session

      $ jtrader zerodha startsession

Options:
  --help  Show this message and exit.

Commands:
  configdir     Print app config directory location
  rm            Delete stored credentials or sessions config To delete...
  savecreds     Saves your creds in the APP config directory
  startsession  Saves your login session in the app config folder
This is the preferred method for logging in to your Zerodha account
$ jtrader zerodha startsession
User ID >: USERID
Password >:
Pin >:
Logged in successfully as XYZ
Saved session successfully
This will save your session in the config folder.
Once you have done this, in your code you can call
set_access_token or
load_session to use this session. Please note that, just like your browser login session, this session will expire after some time; even so, this is much safer than storing your credentials in code or in plain text.
from jugaad_trader import Zerodha

kite = Zerodha()
kite.set_access_token()  # loads the session from config folder

profile = kite.profile()
This method saves credentials in your config folder in
ini format.
⚠️ Please note that you are storing your password in plain text
$ jtrader zerodha savecreds
Saves your creds in app config folder in file named .zcred
User ID >: USERID
Password >:
Pin >:
Saved credentials successfully
Once you have done this, you can call
load_creds followed by login():
from jugaad_trader import Zerodha

kite = Zerodha()
kite.load_creds()
kite.login()

print(kite.profile())
jugaad-trader uses python click for its CLI. Click provides
get_app_dir function to get config folder. Refer to documentation on how it works.
You can simply run the command below to get the config folder location
$ jtrader zerodha configdir
In case you wish to delete configuration, you can delete using
rm command-
To delete SESSION
$ jtrader zerodha rm SESSION
To delete CREDENTIALS
$ jtrader zerodha rm CREDENTIALS
|
https://marketsetup.in/documentation/jugaad-trader/cli/
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
Scheduling irregular AWS Lambda executions through DynamoDB TTL attributes29 May 2019
Introduction
This article describes a serverless approach to schedule AWS Lambda invocations through the usage of AWS DynamoDB TTL attributes and streams. At the time of writing there was no way to schedule an irregular point of time execution of a lambda execution (e.g. “run this function once in 2 days and 10 hours and then again in 4 days”) without abusing CloudWatch crons (see Alternatives for more info).
hey @awscloud! is there a way to trigger a #lambda execution at a future point in time without abusing rate/cron from #CloudWatch or ttl from #DynamoDB? e.g. call this function in 2 hours and this function in 3 days, 7 hours and 15 minutes— Michael Bahr (@bahrdev) May 27, 2019
This approach scores with its clean architecture and maintainability. It only requires a function to insert events into a scheduling table and a function that processes events once they reach their scheduled point in time. As we build everything on serverless technology, we don't have to run software upgrades, maintain firewalls and pay for idle time. At low usage it's practically free and even with higher usage we only really start paying once we schedule hundreds of thousands of events per day. Read more in this follow up article.
While this approach allows one to schedule an execution for a certain time, it falls short on accuracy. In our tests with a scheduling table holding 100.000 entries, the events appeared in the DynamoDB stream with a delay of up to 28 minutes. According to the docs it may take up to 48 hours for especially large workloads. Therefore this approach does not fit, if you require the function to be executed at a certain hour, minute or second. Potential use cases are status updates which run every couple hours or days or non time critical reminders.
The source code for the lambda functions is available at GitHub.
Approach
Our approach is to use a lambda function to create scheduling entries. These entries contain the payload for later execution and a time to live attribute (TTL). We then configure the DynamoDB table to delete items after their TTL has expired and push those changes, including the old entry, to a stream. The stream is then connected to a lambda function which processes the changes and executes the desired logic. The functions are written with Python 3.7.
The executor function may reschedule events by performing the same logic as the scheduler function.
In this guide we will
- setup the table and stream,
- the executor which consumes the stream,
- a little function to schedule events and
- deploy it to AWS with the serverless framework (version 1.32.0).
You can change the order of the steps, but should keep in mind that the executor requires the stream’s ARN.
Table Setup
Start by creating a new DynamoDB table.
If you expect to exceed the free tier, we recommend switching to on-demand. Please note that exceeding a provisioned capacity may lead to DB operations being rejected, while on-demand is not limited.
This will open a dialog where you specify the TTL attribute and activate DynamoDB Streams. We will use the attribute name "ttl", but you may choose whatever name you like that is not reserved by DynamoDB. Click on Continue to enable the TTL.
Once DynamoDB has created the TTL and stream, you will see the stream details on the table overview. We will later need the “Latest stream ARN” to hook it up to our executor function.
Executor
Next we write an executor function in Python which consumes the stream events.
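A minimal version of such a function, with the helper name process_record being my own choice, could look like this:

def handle(event, context):
    for record in event['Records']:
        process_record(record)


def process_record(record):
    event_name = record['eventName']
    if event_name != 'REMOVE':
        # Ignore INSERT and MODIFY events; only TTL deletions matter here
        return

    # The deleted entry, still wrapped in DynamoDB's AttributeValue format
    old_image = record['dynamodb']['OldImage']
    print(old_image)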
A few things to keep in mind here:
Line 2–3: You may receive more than one record. Therefore your function must finish before the configurable lambda timeout is reached.
Line 7–10: You will receive events that are unrelated to the TTL deletion. E.g. when an event is inserted into the table, the event_name will be INSERT.
Line 13: The record’s structure differs from the entry we wrote to the DynamoDB table. You need to access Records, then dynamodb and finally OldImage to get the database entry as it was before deletion. Note that the payload follows DynamoDB’s AttributeValue schema shown below:
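Roughly, the OldImage of a scheduled entry looks like this, with every attribute wrapped in a type key (the values below are made up):

# record['dynamodb']['OldImage']
old_image = {
    'id':      {'S': '6f1c2a6e-0b1d-4b1e-9c7a-2f4d8e9a1b23'},
    'ttl':     {'N': '1559123456'},
    'payload': {'M': {'message': {'S': 'hello'}}},
}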
Scheduler
You may create new entries manually through the DynamoDB management console or through scripts. In this example we will write an AWS Lambda function in Python which creates a new entry.
import boto3
import time
from uuid import uuid4

# keep the db initialization outside of the functions
# to maintain them as long as the container lives
dynamodb = boto3.resource('dynamodb', region_name='us-east-1')
scheduling_table = dynamodb.Table('lambda-scheduling')


def delay():
    return 10


def handle(payload, context):
    print(payload)

    id = str(uuid4())
    ttl = int(time.time()) + delay()

    item = {
        'id': id,
        'ttl': ttl,
        'payload': payload
    }

    scheduling_table.put_item(Item=item)
Please check that line 8 of the scheduler has the table name you specified during the table setup.
All that this function does is create a database entry with an id, a payload and a TTL attribute. The payload may be a dict, array or plain value. The ttl is measured in seconds since the epoch. We use the delay() function to decide how far in the future the entry should expire; here it simply adds 10 seconds to the current time.
The put_item call at the end of the handler will cause an INSERT event to be pushed to the stream. As shown in the executor above, INSERT events are ignored.
Deployment
To deploy the functions, we use the serverless framework. Here is the serverless.yml, which specifies the desired resources:
service: lambda-scheduler

provider:
  name: aws
  runtime: python3.7
  iamRoleStatements:
    - Effect: Allow
      Action:
        # Please limit the Actions to read and write if you use this in production
        - dynamodb:*
      # Limit this to your lambda-scheduling table
      Resource: "arn:aws:dynamodb:us-east-1:*:*"

functions:
  schedule:
    handler: scheduler.handle
  execute:
    handler: executor.handle
    events:
      # Use your lambda-scheduling stream
      - stream: arn:aws:dynamodb:us-east-1:256608350746:table/lambda-scheduling/stream/2019-05-27T15:48:18.587
From line 3 to 12 we specify the provider (AWS), the runtime (python3.7) and grant permissions to our lambda functions. Here we only need write and read access for the scheduling table. You may extend the roles depending on what your scheduler and executor do.
Lines 15 to 16 set up the scheduler. This function is not available publicly, but only through the lambda management console. You may extend it to run regularly or be available through an AWS ApiGateway endpoint.
Lines 17 to 21 set up the executor. Through events we specify that the executor should be invoked when an event is pushed to the stream. Here you should replace the example with the name of your stream from the table setup.
Once you adjusted the serverless.yml to your needs, deploy it by running serverless deploy from the folder where the serverless.yml and functions are located. This may take a few minutes to complete.
Test
Once serverless completes the deployment, head over to the lambda management console and click on ScheduleLambdaFunction.
In the top right corner you can then configure a test event.
As we don’t evaluate the input of the scheduler function the default Hello World event is sufficient.
Create the test event and then execute it by clicking on Test.
Now head over to the DynamoDB management console and open the items of your table. When you hover over the TTL attribute you will see an overlay which tells you when the entry is supposed to be deleted (remember that the deletion may be delayed).
Now we have to wait until DynamoDB deletes the entry. As DynamoDB will probably take a few minutes we are in no rush here. Switch back to the console where you deployed the serverless application and run the command serverless logs -f executor -t to listen for new logs of the executor function. Get yourself a drink and a snack as this will probably take 10 to 15 minutes for a small table. If you created 10.000.000.000 entries, then you might have to wait longer.
That’s it! We invoked a single lambda execution without the use of rate or cron.
What’s next?
From here you can play around with the example and try a couple things:
Modify the delay, e.g. by using Python's random.randint()
If you are using this approach to schedule a lot of executions (read: more than 100 per day), you should look into optimising the resources of your functions away from the default of 1GB RAM. You can do this by glancing over the logs and looking for the RAM usage, then estimating a number that will fit for all those executions or by using the AWS Lambda Power Tuning tool to estimate the perfect fit.
Let the executor reschedule the event by adding another database entry for a future execution. My project does status updates and then reschedules the execution if the desired status did not match yet.
Extend the approach for multiple executors, by specifying another field with the ARN of the desired executor, and then writing a new delegating executor which invokes that ARN. A rough sketch of such a delegating executor follows.
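Assuming each entry carries, say, a target_arn field, a minimal delegating executor could look like this (field and function names are illustrative):

import json
import boto3

lambda_client = boto3.client('lambda')


def handle(event, context):
    for record in event['Records']:
        if record['eventName'] != 'REMOVE':
            continue

        old_image = record['dynamodb']['OldImage']
        # The ARN of the executor to call is stored on the entry itself
        target_arn = old_image['target_arn']['S']

        # Fire-and-forget invocation of the desired executor
        lambda_client.invoke(
            FunctionName=target_arn,
            InvocationType='Event',
            Payload=json.dumps({'OldImage': old_image}),
        )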
Housekeeping
Should you decide to use this approach for your project, make sure to check off a few housekeeping tasks:
Monitor the delay between the defined TTL and the actual execution. The example code prints a log entry that you can use to visualize the delays.
Optimize the granularity of the permissions in the serverless.yml file. Aim for the least amount of privileges.
Define a log retention for the generated CloudWatch log groups. You probably don’t want to rake up costs as time passes and log storage increases.
Tag the resources for cost analysis so you are later able to identify where major cost producers come from.
Set up an AWS Budget to prevent billing surprises.
Alternatives
The approach above is not the only way to schedule irregular executions, but feels the cleanest to me if you need to pass a payload to the lambda function.
CloudWatch PutRule API
If you have a limited amount of scheduled invocations, you can use the CloudWatch PutRule API to create a one time cron execution.
cron(59 23 31 12 ? 2019)
This cron will execute on the 31st of December 2019 at 23:59.
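With boto3 this looks roughly like the following (the rule name and function ARN are placeholders, and the permission that allows CloudWatch Events to invoke the function is not shown):

import boto3

events = boto3.client('events')

# One-shot rule: fires once at the scheduled time, then has to be cleaned up
events.put_rule(
    Name='run-once-2019-12-31',                     # placeholder name
    ScheduleExpression='cron(59 23 31 12 ? 2019)',
    State='ENABLED',
)

# Point the rule at the Lambda function that should be invoked
events.put_targets(
    Rule='run-once-2019-12-31',
    Targets=[{
        'Id': 'target-1',
        'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:executor',  # placeholder ARN
    }],
)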
The downside of this approach is that you can’t create more than 100 rules per account and have to clean up expired rules.
EC2 Scheduler
You can spin up an EC2 that runs a scheduler software, but this requires additional costs for the EC2 instance and does not fit the serverless goal of this article.
Summary
In this article we looked at an approach to schedule irregular lambda invocations and learned about its implementation and limitations. If you use this approach, then beware of the delays and monitor them.
What are your thoughts? Do you have a better approach or know how to achieve this on a different cloud provider? Share it!
Further Reading
Analysis of DynamoDB’s TTL delay
Cost Analysis: Serverless scheduling of irregular invocations
Yan Cui’s take on DynamoDB TTL as an ad-hoc scheduling mechanism
More suggestions on scheduling mechanism by Zac Charles
Scheduling with State Machines
Using AWS Lambda with CloudWatch Events
-
Enjoyed this article? I publish a new article every month. Connect with me on Twitter and sign up for new articles to your inbox!
|
https://bahr.dev/2019/05/29/scheduling-ddb/
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
.TH socket_recv4 3
.SH NAME
socket_recv4 \- receive a UDP datagram
.SH SYNTAX
.B #include <socket.h>
int \fBsocket_recv4\fP(int \fIs\fR, char* \fIbuf\fR, unsigned int \fIlen\fR,
char \fIip\fR[4],uint16* \fIport\fR);
.SH DESCRIPTION
socket_recv4 receives a UDP datagram on the socket \fIs\fR and writes up to \fIlen\fR bytes of it into \fIbuf\fR. The IP address and port of the sender are written to \fIip\fR and \fIport\fR (the IPv6 equivalent is socket_recv6).
.SH RETURN VALUE
socket_recv4 returns the number of bytes in the datagram if one was
received. If not, it returns -1 and sets errno appropriately.
.SH EXAMPLE
#include <socket.h>
int \fIs\fR;
char \fIip\fR[4];
uint16 \fIp\fR;
char buf[1000];
int len;
\fIs\fR = socket_udp();
socket_bind4(s,ip,p);
len = socket_recv4(s,buf,sizeof(buf),ip,&p);
.SH "SEE ALSO"
socket_recv6(3)
|
https://git.lighttpd.net/mirrors/libowfat/src/commit/6919cf8bf38669d0b609f7d188cd5b5fa3eb73d0/socket/socket_recv4.3
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
First solution in Clear category for Sum Numbers by Oleksii-Levenets
def sum_numbers(text: str) -> int:
# your code here
rez = sum([int(i) if i.isdigit() else 0 for i in text.split()])
return rez
if __name__ == '__main__':
print("Example:")
print(sum_numbers('my numbers is 2'))
#!")
Sept. 13, 2020
|
https://py.checkio.org/mission/sum-numbers/publications/Oleksii-Levenets/python-3/first/share/1d20f6344efe5db5b392096b6e74b409/
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
I am trying to port an app to the Android platform, but am new to Android development. I can find plenty of documentation on the Android build environment, but Remobjects/C# (by necessity) is slightly different, and I am burning a tremendous amount of time trying to figure out some minor details which I assume are obvious to someone experienced with Android/Java. What has me stuck right now is the following:
I am subclassing ViewGroup to create a group to do some special formatting for an editor. My class is EditPhrase:
namespace and.cross01 {
public class EditPhrase : ViewGroup {
…
}
}
I have been able to instantiate the view group programmatically, but have been unable to have it created directly from in the main layout XML:
<LinearLayout
    android:layout_width="match_parent"
    ...

    <and.cross01.EditPhrase
        android:id="@+id/mainTable"
        android:layout_width="wrap_content"
        android:layout_height="match_parent"
        android:layout_weight="1"
        android:divider="#000000">
    </and.cross01.EditPhrase>
    ...

</LinearLayout>
The above XML code gives me a warning from the compiler:
Warning The element ‘LinearLayout’ has invalid child element ‘and.cross01.EditPhrase’. List of possible elements expected: 'Theme, Window, (etc. etc.)… and.cross01 C:\Users\andrew\Documents\Visual Studio 2015\Projects\AND.Cross01\AND.Cross01\res\layout\main.layout-xml 79
I have tried adding and removing the path prefix, removing the namespace declaration, adding EditPhrase as an activity in the AndroidManifest, and every other variation of naming I can think of. I'm probably missing something blindingly obvious, but cannot seem to figure it out.
Any suggestions would be greatly appreciated.
Thanks,
Andrew
|
https://talk.remobjects.com/t/how-to-reference-c-viewgroup-subclass-from-xml/7964
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
By Dassi Orleando, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud's incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.
OrientDB is an open source Java Multi-Model NoSQL database management technology that supports Graph, Document, Key-Value, GeoSpatial and Reactive models while managing queries with the well-known SQL syntax.
In this article, we'll cover some initial setup of OrientDB while building a Spring-Boot API with it on an Alibaba Cloud Elastic Compute Service (ECS) instance.
First, we need to create an ECS. For the sake of the demo, I will be using an Ubuntu one with 1 Core and 0.5 GB of memory. Log in via ssh/console as described into this guide:
Next, we need to install the available binary package, let's download the latest stable release of OrientDB (3.0.8 at the time of writing this article) corresponding to our operating system.
The command to use will be similar to this:
curl --output orientdb-3.0.8.tar.gz
Once downloaded, the zipped file called orientdb-3.0.8.tar.gz will be in the directory where you typed the curl command.
Now, we need to unzip that file and move its content to an opportune directory under the environment variable ORIENTDB_HOME. Here are the corresponding commands according to the current version:
tar -xvzf orientdb-3.0.8.tar.gz to unzip the folder,
cp -r orientdb-3.0.8 /opt to copy the entire folder to the /opt directory.
The content of /etc/environment will have these three lines:
JAVA_HOME=/usr/lib/jvm/java-8-oracle
ORIENTDB_HOME=/opt/orientdb-3.0.8
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:$ORIENTDB_HOME/bin"
This assumes that you already have Java (1.7 or higher) installed and have added swap space to your Ubuntu instance.
Note: Don't forget to source this file after updated so that the new OrientDB executables get available in the terminal, the command is source /etc/environment
Finally, we need to edit the orientdb.sh file located in ORIENTDB_HOME/bin by filling the location (ORIENTDB_HOME) of OrientDB directory in lieu of ORIENTDB_DIR and also the system user we'd like to use instead of USER_YOU_WANT_ORIENTDB_RUN_WITH.
Now we've a fully working OrientDB installation with the following commands ready to be use:
Production environments usually require a more secure installation where not every user is allowed to start or stop the database at will. In OrientDB's bin/orientdb.sh file you can fill in the administration user in place of USER_YOU_WANT_ORIENTDB_RUN_WITH; that user will then be the only one with full rights over OrientDB's most sensitive commands.
To know more about OrientDB here's the official documentation link.
Let's test our installation by running the command orientdb.sh start and accessing the portal (OrientDB Studio) on port 2480, as shown in the following screenshot:
To connect yourself and access the dashboard, we need to define our users at the very end of the file $ORIENTDB_HOME/config/orientdb-server-config.xml as described here.
Here we can see that 47.254.88.191 is the IP address of the ECS instance used right now. You should configure your instance's security group so that port 2480 (the OrientDB Studio port) is accessible via the web (this should be locked down properly for a production environment). The configuration for our testing instance is shown here:
OrientDB's multi-model capability allows to manage many types of database with the same engine, here we can manage:
OrientDB's multi-model capability allows it to manage many types of database with the same engine; here we can manage:
From the OrientDB Studio home screen let's create a document database called alibabacloudblog as illustrated in the image bellow:
The next time we access the dashboard, we'll first need to select the database from the home screen, provide the user credentials to use, and then hit Connect.
Regardless the database type, OrientDB gives the ability to work with three kinds of Schemas which are:
As stated at the beginning of this article, the end result is to have a fully working API (only some CRUD operations) where Spring-Boot and OrientDB are both in actions on Alibaba Cloud ECS.
Visit start.spring.io to generate the basic structure of a Spring-boot project with the Web dependency as follows:
Now we've a fresh Maven project we can unzip and open with our favorite Java IDE.
OrientDB is entirely written in Java, meaning we can immediately use its Java API's without the need to add anymore drivers or adapters.
Let's add the following properties to the pom.xml of our project:
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <java.version>1.8</java.version>
    <orientdb.version>3.0.5</orientdb.version>
</properties>
In the dependencies section we specify the OrientDB librabies depending of our use case:
<!-- OrientDB -->
<dependency>
    <groupId>com.orientechnologies</groupId>
    <artifactId>orientdb-core</artifactId>
    <version>${orientdb.version}</version>
</dependency>
<dependency>
    <groupId>com.orientechnologies</groupId>
    <artifactId>orientdb-client</artifactId>
    <version>${orientdb.version}</version>
</dependency>
<dependency>
    <groupId>com.orientechnologies</groupId>
    <artifactId>orientdb-object</artifactId>
    <version>${orientdb.version}</version>
</dependency>
<dependency>
    <groupId>com.orientechnologies</groupId>
    <artifactId>orientdb-graphdb</artifactId>
    <version>${orientdb.version}</version>
    <exclusions>
        <exclusion>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
        </exclusion>
        <exclusion>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
        </exclusion>
    </exclusions>
</dependency>
Some OrientDB dependencies use cases:
There is now even a Spring Data OrientDB module, built by OrientTechnologies, which makes configuring and querying OrientDB from a Java/Spring application easy; it's based on the Spring Data model.
Let's create our custom configuration class OrientDBConfiguration, it's where we'll defined our database user credentials to use for the project while producing a database instance to use later in our queries.
The configuration class is as follows:
/**
 * Basic OrientDB configuration class
 * To configure and provide the bean to inject later for database interactions
 * @author dassiorleando
 */
@Configuration
public class OrientDBConfiguration {
    // The orientdb installation folder
    private static String orientDBFolder = System.getenv("ORIENTDB_HOME");

    /**
     * Connect and build the OrientDB Bean for Document API
     * @return
     */
    @Bean
    public ODatabaseDocumentTx orientDBfactory() {
        return new ODatabaseDocumentTx("plocal:" // or remote
                + orientDBFolder + "/databases/alibabacloudblog")
                .open("username", "userpwd");
    }
}
Notes: username and userpwd are respectively the username and password of the database user to use; these are configurable from the OrientDB configuration file we highlighted earlier. plocal specifies that we're accessing a local instance of OrientDB hosted on the same server as our API; for a remote instance, remote should be used instead, as follows:
return new ODatabaseDocumentTx("remote:server_ip/alibabacloudblog").open("admin", "admin");
The database url is written according to a specific format depending if we're accessing a local, remote or memory database as stated here.
Here the Class (entity in a relational word) are being created automatically as the default database mode is Schemaless:
/**
 * To create an article
 * @param article
 * @return article
 */
public Article save(Article article) {
    // Specify to use the same db instance for this thread
    ODatabaseRecordThreadLocal.instance().set(db);

    // The Class will be automatically created into Orient Studio
    ODocument doc = new ODocument(Article.class.getSimpleName()); // The entity name is provided as parameter
    doc.field("title", article.getTitle());
    doc.field("content", article.getContent());
    doc.field("author", article.getAuthor());
    doc.save();

    return article;
}
It's pretty straightforward and just requires you to instantiate an ODocument, providing the necessary field to save and call the save() method.
Here is a concrete example of how the SQL can be perfectly used with OrientDB regardless the type of database we opted for, let's update an article based on its title:
/**
 * To update an article
 * @param article
 * @return boolean true if it was successfully updated
 */
public boolean update(Article article) {
    // Specify to use the same db instance for this thread
    ODatabaseRecordThreadLocal.instance().set(db);

    // Data
    String title = article.getTitle().trim();
    String content = article.getContent();
    String author = article.getAuthor();

    // The sql query
    String query = "update Article set content = '" + content + "', author = '" + author + "' where title = '" + title + "'";
    int resultInt = db.command(new OCommandSQL(query)).execute();

    if(resultInt != -1) return true;

    return false;
}
The query of a single Article by its title can be done in this way with another basic SQL command:
/**
 * Find a single article by title
 * @param title
 * @return article if found null else
 */
public Article findOne(String title) {
    // SQL query to have the ones that match
    List<ODocument> results = db.query(
        new OSQLSynchQuery<ODocument>("select * from Article where title = '" + title + "'"));

    // For the sake of the test, pick the first found result
    if(!results.isEmpty()) {
        ODocument oDocument = results.get(0);
        return Article.fromODocument(oDocument);
    }

    return null;
}
Now, accessing the list of all articles could be done in two ways, either with SQL or with a special database instance function as shown below:
/**
 * Find all saved articles so far
 * @return
 */
public List<Article> findAll() {
    // List of resulting article
    List<Article> articles = new ArrayList<>();

    // Load all the articles
    for (ODocument articleDocument : db.browseClass("Article")) {
        Article article = Article.fromODocument(articleDocument);
        articles.add(article);
    }

    return articles;
}
Deleting an Article could be made with an SQL command also:
/**
 * Delete a single article by its title
 * @param title
 * @return boolean true if it was deleted successfully
 */
public boolean delete(String title) {
    title = title.trim(); // The title of the article to delete

    int resultInt = db.command(
        new OCommandSQL("delete * from Article where title = '" + title + "'")).execute();

    if(resultInt != -1) return true;

    return false;
}
Counting all the articles is a simple call as follow:
long size = db.countClass("Article");
The use of SQL within OrientDB either with a Graph or a Document database is such a great feature especially because here it's internally incorporated into the engine, without the need of adding an additional driver.
The API needs a front gate to serve the queries; here we're using Spring's RestController annotation to define a REST controller. The full source code of the project is available on GitHub:
@RestController
public class ArticleController {
    /**
     * To create an article
     * @param article
     * @return
     */
    @PostMapping("/article")
    public Article create(@RequestBody @Valid Article article) {
        log.debug("Create an article with the properties {}", article);
        return articleService.save(article);
    }

    /**
     * To update an article
     * @param article
     * @return
     */
    @PutMapping("/article")
    public boolean update(@RequestBody @Valid Article article) {
        log.debug("Update the article of title {} with the properties {}", article.getTitle(), article);
        return articleService.update(article);
    }

    /**
     * Get the list of all articles
     * @return
     */
    @GetMapping("/article")
    public List<Article> list() {
        log.debug("We just get the list of articles one more time");
        return articleService.findAll();
    }

    /**
     * We asynchronously find an article by his title
     * @param title
     * @return
     */
    @GetMapping("/article/{title}")
    public Article findByTitle(@PathVariable @NotNull String title) {
        log.debug("Load the article of title: {}", title);
        return articleService.findOne(title);
    }

    /**
     * Delete an article by its title
     * @param title
     */
    @DeleteMapping("/article/{title}")
    public boolean deleteById(@PathVariable @NotNull String title) {
        log.debug("Delete the article of title: {}", title);
        return articleService.delete(title);
    }
}
A basic script is available to run your API on your ECS or locally on any Linux/Mac computer as a bash file (startup.sh), supposing the port 8080 is the one used. Here's the file content:
echo "RUN THE PROJECT IN THE SERVER/LOCAL" echo "Compiling while skipping tests ..." ./mvnw clean install -DskipTests echo "Compilation finished" echo "Kill the process on port 8080, to undeploy the former version if existing" sudo kill $(sudo lsof -t -i:8080) echo "Let's deploy the new version silently" nohup ./mvnw spring-boot:run &
Note: startup.sh needs to be executable.
Here's an example of SQL query from the OrientDB dashboard to have all the saved Articles:
Clicking on the first column on a specific row will show us the full details of that Article plus the ability to update its content, change the fields type, add more fields and delete it from this view:
In this long article, we've seen how to build a Spring-Boot API using OrientDB as the database management system with its Java APIs and how to set it up all on an Alibaba Cloud ECS.
The full source code for this article can be found on Github.
colince December 4, 2018 at 2:12 pm
good article and thanks for sharing.
|
https://www.alibabacloud.com/blog/building-a-spring-boot-api-with-a-multi-model-database-orientdb-on-alibaba-cloud_594216
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
Extending SQLite with Python
SQLite is an embedded database, which means that instead of running as a separate server process, the actual database engine resides within the application. This makes it possible for the database to call directly into the application when it would be beneficial to add some low-level, application-specific functionality. SQLite provides numerous hooks for inserting user code and callbacks, and, through virtual tables, it is even possible to construct a completely user-defined table. By extending the SQL language with Python, it is often possible to express things more elegantly than if we were to perform calculations after the fact.
In this post I'll describe how to extend SQLite with Python, adding functions and aggregates that will be callable directly from any SQL queries you execute. We'll wrap up by looking at SQLite's virtual table mechanism and seeing how to expose a SQL interface over external data sources.
Getting started
The following examples will use peewee ORM. If you're new to peewee, it's a lightweight ORM that aims to provide an expressive, Pythonic interface for executing SQL queries. In order to get started, we will use the peewee ORM sqlite extension. This extension provides a database class that exposes the SQLite hooks for registering custom functions and aggregates.
If you'd like to follow along, you can install peewee using
pip:
$ pip install peewee
Then start an interactive shell and import the SQLite extension module. We will use an in-memory SQLite database:
from playhouse.sqlite_ext import *

db = SqliteExtDatabase(':memory:')
Functions
The simplest way we can extend the SQL language is by defining our own functions. A function operates on a single row of data and returns a single value. Many databases provide a standard set of functions for performing operations like transforming text to upper or lower-case, extracting substrings, generating random numbers, and manipulating dates.
Examples of SQL functions:
print db.execute_sql('SELECT random()').fetchone()
# OUTPUT: (-2523015414799391104,)

print db.execute_sql('SELECT strftime("%Y-%m-%d", "now")').fetchone()
# OUTPUT: ('2014-12-02',)
You can also integrate SQL functions with peewee models:
class TextData(Model):
    value = TextField()

    class Meta:
        database = db

TextData.create_table()

# Populate the table with some strings.
for value in ['Groovy', 'Java', 'Grails', 'Groovy']:
    pass  # placeholder removed below

# Populate the table with some strings.
for value in ['Foo', 'Bar']:
    TextData.create(value=value)

# Use SQLite's lower-case and upper-case functions:
query = (TextData
         .select(fn.lower(TextData.value), fn.upper(TextData.value))
         .tuples())

print [row for row in query]
# OUTPUT: [('foo', 'FOO'), ('bar', 'BAR')]
Creating our own functions
SQLite provides a hook for registering our own functions. Using the peewee ORM, let's take a look at how easy it is to add a new function to SQLite. For our example, we will create a function that generates a URL-friendly representation of a title string, a process called slugifying.
import re

@db.func()
def slugify(title):
    return re.sub('[^\w]+', '-', title.lower()).strip('-')
The
slugify function will take any non-alphanumeric characters, and replace them with a dash. Finally, any leading or trailing dashes are removed. Here are some example function calls:
slugify('Hello world')becomes
'hello-world'
slugify('My Awesome Post!')becomes
'my-awesome-post'
This function can be used to select, along with the title, the slugified version, which can be used in HTML templates to generate a URL-friendly link to the post. Similarly, we can use the function in the where clause to locate a post matching a given slug.
class Post(Model):
    title = CharField()
    content = TextField(default='')

    class Meta:
        database = db

Post.create_table()
Post.create(title='Extending SQLite with Python')
Post.create(title='SQLite: Small. Fast. Reliable.')

# Get a list of posts, along with the slugified title.
posts = Post.select(Post, fn.slugify(Post.title).alias('slug'))
for post in posts:
    print post.title, '-->', post.slug

# OUTPUT:
# Extending SQLite with Python --> extending-sqlite-with-python
# SQLite: Small. Fast. Reliable. --> sqlite-small-fast-reliable

# Find the post matching a given slug.
slug = 'sqlite-small-fast-reliable'
post = Post.select().where(fn.slugify(Post.title) == slug).get()
One other cool aspect of functions is that they can be composed with other functions. Imagine we want to display a list of posts by the first letter of their title. To get the list of all posts that start with the letter S we could write:
s_posts = (Post
           .select()
           .where(fn.lower(fn.substr(Post.title, 1, 1)) == 's'))
In the peewee source code you can find other examples of functions which perform date-manipulation.
Aggregate functions
Aggregate functions operate on one or more rows of data, performing some calculation and returning a single aggregate value. If you've ever queried for the count of objects in a table, then you've used an aggregate function. The databases I'm most familiar with offer a pretty standard set of built-in aggregates:
COUNT
SUM
MIN
MAX
AVG
GROUP_CONCAT(this joins multiple strings with a provided delimiter)
Example code:
class IntData(Model):
    value = IntegerField()

    class Meta:
        database = db

IntData.create_table()

# Populate the table with numbers 1 - 10.
(IntData
 .insert_many({'value': value} for value in range(1, 11))
 .execute())

# Perform some simple aggregate queries.
print IntData.select(fn.Min(IntData.value)).scalar()
# OUTPUT: 1

print IntData.select(fn.Max(IntData.value)).scalar()
# OUTPUT: 10

print IntData.select(fn.Sum(IntData.value)).scalar()
# OUTPUT: 55

print IntData.select(fn.Group_Concat(IntData.value, '--')).scalar()
# OUTPUT: 1--2--3--4--5--6--7--8--9--10
Creating our own aggregates
With SQLite, it is very easy to define custom aggregate functions. We only need to create a class that implements two methods, and then register the class with our peewee database instance.
Let's see how this is done by writing an aggregate function to calculate the mode for a list of values. If it's been a while since your 8th grade math class, the mode is the most common value in a list.
We'll begin by defining our class. To do the actual calculation we'll use a handy container object from the standard library,
collections.Counter.
from collections import Counter

class Mode(object):
    def __init__(self):
        self.counter = Counter()
The aggregate interface specifies two methods,
step() and
finalize(). step is called once for each row in the result set, and we will use this method to update the counter with the incoming value. finalize is called once when there are no more rows and we are ready to yield a final result. This method will find the most common value in the counter and return it.
@db.aggregate()
class Mode(object):
    def __init__(self):
        self.counter = Counter()

    def step(self, *values):
        self.counter.update(values)

    def finalize(self):
        # If there are no items in the counter, we'll return None.
        if self.counter:
            return self.counter.most_common(1)[0][0]
We can now use the new aggregate function just like any other built-in aggregate. The following code example shows how to calculate the mode for a table of values. We will add some new rows to the
IntData table we created in the previous example:
values = [1, 1, 2, 3, 3, 3, 4, 5, 5]
(IntData
 .insert_many({'value': value} for value in values)
 .execute())

# Call the mode function.
print IntData.select(fn.Mode(IntData.value)).scalar()
# OUTPUT: 3
If you prefer not to use a decorator, you can also call
db.register_aggregate(), passing in the class.
Ideas for useful aggregates
I thought I'd share some ideas for other custom aggregations you might add:
- Variance and StdDev, which exist in PostgreSQL but are not included with SQLite (a sketch of StdDev follows this list).
- MD5Sum or SHA1Sum for calculating a checksum.
- First and Last.
- Bitwise and logical operators that AND or OR together multiple values.
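A rough sketch of the standard-deviation aggregate, using the same @db.aggregate() decorator as the Mode example and Welford's online algorithm to keep memory constant, might look like this:

import math

@db.aggregate()
class StdDev(object):
    """Population standard deviation, computed incrementally."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def step(self, value):
        if value is None:
            return
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

    def finalize(self):
        if self.n > 0:
            return math.sqrt(self.m2 / self.n)

It can then be called like any other aggregate, e.g. IntData.select(fn.StdDev(IntData.value)).scalar().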
Experimenting with Virtual Tables
By far the most interesting hook provided by SQLite is the virtual table mechanism. SQLite virtual tables allow you to expose a SQL query interface over literally any tabular data source. For instance, you could write a virtual table to expose the filesystem as a database table, query a 3rd party RESTful API, or expose your Redis instance as a SQLite table. In my post last week, the transitive closure extension provides a virtual table for querying hierarchical data, but the actual data is stored behind-the-scenes in an AVL tree.
Unfortunately, the standard-library SQLite driver does not come with support for creating virtual tables. Luckily, there is a more powerful SQLite library APSW that makes this functionality available to Python programmers. Peewee comes with an APSW database driver, so you can use APSW to create virtual tables in Python, then use Peewee models to work with the virtual tables.
Exposing Redis as a Virtual Table
Virtual tables, because they provide so much functionality, are rather complex beasts to implement. For that reason, rather than providing the entire code to the redis example, I'll try to provide an overview of the important methods with an explanation of what's going on.
When implementing a virtual table, we need to write three classes which will answer the following questions:
- What does our data look like? If our data were an actual SQL table, how would we declare it?
- How will we map data into rows and columns?
- How will we iterate through the data?
- How will we apply filters from the WHERE clause, particularly if this would lead to implementation-specific optimizations?
- How do we update, insert or delete rows?
Through the
Module,
Table and
Cursor classes, we will provide the answers to these questions.
The Module Class
At the highest-level, we have the module class, which is responsible for describing the table structure. The module class contains a
Create method, which provides a table definition and returns an instance of the user-defined Table object. Here is the
Create method for the redis module:
def Create(self, db, modulename, dbname, tablename, *args):
    schema = 'CREATE TABLE %s (rowid, key, value, type, parent);'
    return schema % tablename, RedisTable(tablename, self.db)
The module is the visible portion of your virtual table API, and is registered with the database connection when one is opened.
The Table Class
The next class describes the virtual table (
RedisTable), which is instantiated by the module's
Create method. The table class receives queries from SQLite and determines which indexes (if any) to use to efficiently generate results. If your data-source supports writes, the table also handles inserts, updates and deletes.
For the
RedisTable, we are primarily interested in whether we are searching the entire list of keys, or whether we are searching for items located at a key, such as the key/value pairs of a hash, or the keys and scores of a zset. The table's
BestIndex method can be simple or complex, depending on your needs. Here is the
BestIndex method for the
RedisTable class:
def BestIndex(self, constraints, orderbys):
    """
    Example query:

        SELECT * FROM redis_tbl
        WHERE parent = 'my-hash' AND type = 'hash';

    Since parent is column 4 and type is column 3, the constraints will be:

        (4, apsw.SQLITE_INDEX_CONSTRAINT_EQ),
        (3, apsw.SQLITE_INDEX_CONSTRAINT_EQ)

    Ordering will be a list of 2-tuples consisting of the column index
    and boolean for descending.

    Return values are:

    * Constraints used, which for each constraint, must be either None,
      an integer (the argument number for the constraints passed into
      the Filter() method), or (int, bool) tuple.
    * Index number (default zero).
    * Index string (default None).
    * Boolean whether output will be in same order as the ordering specified.
    * Estimated cost in disk operations.
    """
    constraints_used = []
    columns = []
    for i, (column_idx, comparison) in enumerate(constraints):
        constraints_used.append(i)
        columns.append(self._columns[column_idx])

    return [
        constraints_used,   # Indices of constraints we are interested in.
        0,                  # The index number, not used by us.
        ','.join(columns),  # The index name, a list of filter columns.
        False,              # Whether the results are ordered.
        1000 if 'parent' in columns else 10000]
If you like, you can also leave off the
BestIndex method and let SQLite perform the filtering manually (using a table scan). For large amounts of data, or for data which may change depending on the filters (as is the case with our Redis virtual table), the
BestIndex method provides a way to control what data is passed to SQLite.
The Cursor Class
The final class is the cursor, which we will use to actually extract data from Redis and apply the appropriate filters. The cursor implements a
Filter method which receives the data specified by the return value of
BestIndex. For our Redis example, we will use the following logic:
- If no parent key is specified, our base data-set will be all keys in the database.
- If a parent key is specified, depending on the type of data stored in the key, generate appropriate tabular data.
The important thing to note is that we do not need to do all the processing in our virtual table -- SQLite will actually do most of the work for us, so long as we give it some tabular data. Here is the
Filter method, and the
get_data_for_key helper:
def Filter(self, indexnum, indexname, constraintargs):
    """
    This method is always called first to initialize an iteration to the
    first row of the table. The arguments come from the BestIndex() method
    in the table object with constraintargs being a tuple of the
    constraints you requested.
    """
    columns = indexname.split(',')
    column_to_value = dict(zip(columns, constraintargs))

    if 'parent' in column_to_value:
        initial_key = column_to_value['parent']
        data = self.get_data_for_key(initial_key)
    else:
        data = []
        for i, key in enumerate(self.db.keys()):
            key_type = self.db.type(key)
            if key_type == 'string':
                value = self.db.get(key)
            else:
                value = None
            data.append((i, key, value, None, key_type))

    self.data = data
    self.index = 0
    self.nrows = len(data)

def get_data_for_key(self, key):
    # 'rowid', 'key', 'value', 'type', 'parent'
    key_type = self.db.type(key)
    if key_type == 'list':
        return [
            (i, i, value, 'list', key)
            for i, value in enumerate(self.db.lrange(key, 0, -1))]
    elif key_type == 'set':
        return [
            (i, value, None, 'set', key)
            for i, value in enumerate(self.db.smembers(key))]
    elif key_type == 'zset':
        all_members = self.db.zrange(key, 0, -1, withscores=True)
        return [
            (i, value, score, 'zset', key)
            for i, (value, score) in enumerate(all_members)]
    elif key_type == 'hash':
        return [
            (i, k, v, 'hash', key)
            for i, (k, v) in enumerate(self.db.hgetall(key).iteritems())]
    elif key_type == 'none':
        return []
    else:
        return [(1, key, self.db.get(key), 'string', key)]
This is pretty much all of the code! The rest is just book-keeping and stubbed-out methods. Of course, the sky's the limit with virtual tables. I'm only just beginning to learn how to work with them, but they seem like a great way to create some truly ridiculous hacks.
For more information on virtual tables, here are some links you might find useful:
- APSW's virtual table documentation and example code.
- Redis virtual table
- REST API virtual table hack
- CouchDB virtual table
Thanks for reading
Thanks for taking the time to read this post, I hope you found it interesting. Feel free to leave a comment below if you have any comments or questions.
Links
Here are some links which you may find helpful:
- Peewee SQLite extension, with aggregate, function and collation helpers.
- Peewee APSW extension.
Here are some blog posts on related topics:
- Building an encrypted diary with Python and SQLite
- Using SQLite's full-text search engine with Python
- Querying tree structures with SQLite and the transitive closure extension
Commenting has been closed, but please feel free to contact me
yegle | dec 05 2014, at 01:30pm
I wonder if there's any other ORMs out there that support these SQLite advanced features?
|
https://charlesleifer.com/blog/extending-sqlite-with-python/
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
Since Groovy 1.8 we can convert an array or collection to a Set with only the unique elements of the original array or collection. We use the
toSet() method to do this.
def numbers = [1, 2, 1, 4, 1, 2] as int[]
assert numbers.toSet() == [1, 2, 4] as Set

def list = ['Groovy', 'Java', 'Grails', 'Groovy']
assert list.toSet() == ['Groovy', 'Java', 'Grails'] as Set
|
https://blog.mrhaki.com/2011/04/groovy-goodness-convert-collection-to.html
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
InputField
Offers users a simple input for a form.
Forms require input from users. When you only need basic information, use input fields to gather exactly what you need.
Input fields accept several types of data and can impose various restrictions to ensure you get what you need from users. And help and error guidance to ensure they know what to enter.
Guidelines
Use labels
Labels serve to clearly present what is expected. They are especially important for people who won’t see other visual cues. But they also help everyone know exactly what to enter.
For the label, use short descriptive phrases, ideally nouns that make the request clear. See our patterns for form labels for some examples.
Set input mode
Setting the proper input mode for the field (such as number, email address) helps make it clear what is expected. It also displays the correct keyboard on mobile devices, making it easier for users to complete the field.
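For example, using the type prop that appears in the other snippets on this page (the email type here is an assumption, chosen because it brings up the email keyboard on mobile devices):

import InputField from "@kiwicom/orbit-components/lib/InputField";

() => <InputField label="Email address" type="email" placeholder="name@example.com" />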
Use help and error messages
For more complicated fields, sometimes labels aren’t enough. You want to include any necessary information as clear as possible to help users complete the fields.
Use help messages to guide users before they enter anything and clear calm error messages when there’s a problem with what they entered.
Remember that such messages are likely to invoke negative feelings, so be positive and focused on solutions to any problems.
Help message
import InputField from "@kiwicom/orbit-components/lib/InputField";
() => ( <InputField label="Password" placeholder="paSsw0rd" type="password" help="Use at least one uppercase letter and one number" /> )
Error message
import InputField from "@kiwicom/orbit-components/lib/InputField";
() => ( <InputField label="Password" placeholder="paSsw0rd" type="password" help="Use at least one uppercase letter and one number" error="Your password must contain a number" /> )
Include placeholder examples
When you have additional information or helpful examples, include placeholder text to help users along.
Remember that placeholder text is visually less important, low in contrast, and disappears once users enter anything. So do not include anything necessary to complete the field.
import InputField from "@kiwicom/orbit-components/lib/InputField";
() => <InputField label="Given names" placeholder="Sofia Cruz" />
Look & feel
Background color and borders
It’s important for fields to stand out from the background as something different that can be filled in.
That’s why our native fields come with a Cloud / Normal background to stand out against the White background natural to the app. And they don’t have a border as that would make them too heavy and overwhelming.
For our desktop and responsive fields, which will usually be placed against a Cloud / Light background, a contrasting field background and border keep them standing out.
import InputField from "@kiwicom/orbit-components/lib/InputField"; import ButtonLink from "@kiwicom/orbit-components/lib/ButtonLink"; import Stack from "@kiwicom/orbit-components/lib/Stack";
// icon imports assumed; the exact path may differ between orbit-components versions
import Visibility from "@kiwicom/orbit-components/lib/icons/Visibility"; import VisibilityOff from "@kiwicom/orbit-components/lib/icons/VisibilityOff";
() => {
  const [showPassword, setShowPassword] = React.useState(false)
  return (
    <Stack direction="column">
      <InputField
        label="Maximum price"
        type="number"
        suffix={
          <div
            style={{
              paddingRight: "12px",
            }}
          >
            Kč
          </div>
        }
      />
      <InputField
        label="Password"
        type={showPassword ? "text" : "password"}
        suffix={
          <ButtonLink
            type="primary"
            iconLeft={
              showPassword ? (
                <VisibilityOff ariaLabel="Hide password" />
              ) : (
                <Visibility ariaLabel="Show password" />
              )
            }
            compact
            onClick={() => setShowPassword(!showPassword)}
          />
        }
      />
    </Stack>
  )
}
Related components
InputGroup
When you have multiple fields that are related, such as a day, month, and year of a date of birth, join them together in an input group to give them shared labels and validation.
TextArea
When you want longer responses from users, such as to get answers to open questions in a feedback form, use a text area to provide them space.
|
https://orbit.kiwi/components/inputfield/
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
Welcome to the 270th edition of The Java(tm) Specialists' Newsletter, sent to you from beautiful Crete. A very special welcome to my friends in Suriname, bringing the countries that read this newsletter to at least 149.
This week at jPrime: "Your newsletter is the most enjoyable thing for me to do on the toilet." I kid you not. Only complaint was that the default email format that I've been using is not responsive, so the fonts were impossible to read on his phone. The web version works fine. OK, ok, way too much information, I know. And the "most enjoyable thing"? Um, alright then. Well, here goes. I've tried to make the email more responsive and readable on cell phones. Let's see if it works.
javaspecialists.teachable.com: Please visit our new self-study course catalog to see how you can upskill your Java knowledge.
After my talk at jPrime 2019, which you can watch here, someone asked me how one could tell how effective string deduplication would be in their application. To me the best way of measuring is to look at the response rates and memory consumption. In the end, what does it matter if we have saved memory, but our performance is worse? Memory is cheap. On the other hand, less memory consumption leads to less reachable objects, and can thus give us better speed.
After wondering how I could get a peek into the deduplication
data, I remembered that there was a flag
-XX:+PrintStringDeduplicationStatistics in Java
8. This changed to
-Xlog:stringdedup*=debug
with Java 9 unified logging. Each time that deduplication is
triggered, it outputs a ream of useful information such as:
Concurrent String Deduplication (90.162s)
Concurrent String Deduplication 72.0B->24.0B(48.0B) avg 59.8% (90.162s, 90.162s) 0.020ms
  Last Exec: 0.020ms, Idle: 16740.738ms, Blocked: 0/0.000ms
    Inspected:           3
      Skipped:           0( 0.0%)
      Hashed:            0( 0.0%)
      Known:             0( 0.0%)
      New:               3(100.0%) 72.0B
    Deduplicated:        2( 66.7%) 48.0B( 66.7%)
      Young:             2(100.0%) 48.0B(100.0%)
      Old:               0( 0.0%) 0.0B( 0.0%)
  Total Exec: 3/1.014ms, Idle: 3/89992.409ms, Blocked: 0/0.000ms
    Inspected:        8457
      Skipped:           0( 0.0%)
      Hashed:          610( 7.2%)
      Known:          5640( 66.7%)
      New:            2817( 33.3%) 86.7K
    Deduplicated:     1810( 64.3%) 51.8K( 59.8%)
      Young:          1810(100.0%) 51.8K(100.0%)
      Old:               0( 0.0%) 0.0B( 0.0%)
  Table Memory Usage: 187.7K
    Size: 4096, Min: 1024, Max: 16777216
    Entries: 6644, Load: 162.2%, Cached: 0, Added: 6645, Removed: 1
    Resize Count: 2, Shrink Threshold: 2730(66.7%), Grow Threshold: 8192(200.0%)
    Rehash Count: 0, Rehash Threshold: 120, Hash Seed: 0x0
    Age Threshold: 3
  Queue Dropped: 0
Every time a young GC is triggered (also called scavenge or partial GC), the age of our survivors increases. Once they reach the tenuring threshold, they are moved from the young to the old regions. By default, strings are deduplicated once they reach age 3, thus at their third young GC. We don't want to do it earlier, as deduplication costs precious CPU cycles. If the string won't survive into old age, there is also no need to try save memory. If for some reason the string is prematurely promoted into old, then it is deduplicated early as well.
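The age threshold itself is tunable via a HotSpot flag (3 is the default; I would treat changing it as an experiment rather than a production tweak):

-XX:+UseStringDeduplication -XX:StringDeduplicationAgeThreshold=3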
To try this all out, I wrote my DeduplicationExplorer. To
get the most benefit, you'll have to run it yourself. You
connect to it via
telnet localhost 8080 and then
send it text and commands. All the text is stored in an
ArrayList.
Here are the commands that we support:
clear: clears the strings from the list
ygc: causes a young GC
fgc: causes a full GC
close: closes the connection
Here is the code for DeduplicationExplorer. I recommend running it yourself:
import java.io.*;
import java.lang.management.*;
import java.lang.reflect.*;
import java.net.*;
import java.util.*;

// Java8:
// -XX:+UseG1GC
// -XX:+UseStringDeduplication
// -XX:+PrintStringDeduplicationStatistics
// -verbose:gc
// Java11:
// -XX:+UseStringDeduplication
// -Xlog:stringdedup*=debug
// -verbose:gc
public class DeduplicationExplorer {
  public static void main(String... args) throws IOException {
    List<String> lines = new ArrayList<>();
    Socket socket = new ServerSocket(8080).accept();
    PrintStream out = new PrintStream(
        socket.getOutputStream(), true);
    out.println("Commands: clear, print, ygc, fgc, close");
    BufferedReader in = new BufferedReader(
        new InputStreamReader(
            socket.getInputStream()));
    String line;
    while ((line = in.readLine()) != null) {
      System.out.println(line);
      switch (line) {
        case "clear": lines.clear(); break;
        case "print": print(lines); break;
        case "ygc": youngGC(); break; // young GC
        case "fgc": System.gc(); break; // full GC
        case "close": return;
        default: lines.add(line);
      }
    }
  }

  private static void youngGC() {
    long collectionCount = YOUNG_GC.getCollectionCount();
    do {
      // array is too big to be eliminated with escape analysis
      byte[] bytes = new byte[1024];
    } while (YOUNG_GC.getCollectionCount() == collectionCount);
  }

  private static void print(List<String> lines) {
    System.out.println("lines:");
    lines.forEach(DeduplicationExplorer::print);
  }

  private static void print(String line) {
    try {
      System.out.printf("\t\"%s\" - %s%n", line, VALUE.get(line));
    } catch (IllegalAccessException e) {
      throw new IllegalStateException(e);
    }
  }

  private final static Field VALUE;

  static {
    try {
      VALUE = String.class.getDeclaredField("value");
      VALUE.setAccessible(true);
    } catch (NoSuchFieldException e) {
      throw new Error(e);
    }
  }

  // Lookup of the G1 Young Generation collector bean used by youngGC() above.
  // (Reconstructed here - the original block was partially lost; the exact
  // wording in the source may differ slightly.)
  private final static GarbageCollectorMXBean YOUNG_GC;

  static {
    YOUNG_GC = ManagementFactory.getGarbageCollectorMXBeans().stream()
        .filter(gc -> gc.getName().equals("G1 Young Generation"))
        .findFirst()
        .orElseThrow(() -> new IllegalStateException(
            "G1 Young Generation collector not found"));
  }
}
For example, I started the Oracle OpenJDK 12.0.1 with the following flags:
-XX:+UseStringDeduplication -Xlog:stringdedup*=debug -verbose:gc
I then connected with
telnet localhost 8080 and
sent it:
hello
hello
hello
print
On the program output, we saw this:
hello
hello
hello
print
lines:
        "hello" - [B@5d099f62
        "hello" - [B@37f8bb67
        "hello" - [B@49c2faae
As you see, the three strings each have their own byte[]. The hexadecimal number is the identity hash code, a random number that is fairly unique.
Next we send the command
fgc - Full GC. In our
DeduplicationExplorer, we see the GC event and the
deduplication statistics.
If we now send the print command, we see that all three strings now share the same byte[].
If we clear the ArrayList with command
clear,
and then issue a full GC with
fgc, then the
shared byte[] will be collected. If we send "hello" three
more times and call fgc and print:

fgc
*snip* GC output and deduplication statistics *snip*
print
lines:
        "hello" - [B@2e0fa5d3
        "hello" - [B@2e0fa5d3
        "hello" - [B@2e0fa5d3
If the string happened to already exist in the JVM as a
constant, then that will be the basis of our shared byte[].
For example, let's
clear and then send the
string "main" three times, followed by
clear main main main print lines: "main" - [B@5010be6 "main" - [B@685f4c2e "main" - [B@7daf6ecc
If we now issue a
fgc and a print:

fgc
*snip* GC output and deduplication statistics *snip*
print
lines:
        "main" - [B@1f7e245f
        "main" - [B@1f7e245f
        "main" - [B@1f7e245f
Note that the
byte[] is none of those we had
read from the BufferedReader. And if we
clear
and send "main" again and then issue a
fgc
and a print:

print
lines:
        "main" - [B@1f7e245f
Since we have a method called "main" in our JVM, all our strings are sharing its byte[].
So far we have only considered full GC. However, I also have
support for triggering a young GC with the command
ygc. I'm pretty proud of that code, as I woke
up at 5am and wrote it in my mind in my half sleep before
dozing off again. Usually code that I write whilst dreaming
does not work, but this does :-) I first find the
garbage collector MXBean for the G1 Young Generation.
I then loop whilst the collection count is the same and
allocate 1k byte arrays. Escape analysis won't remove the
heap allocation since the arrays are larger than 64. Here
is my dreamy code again:
private static void youngGC() {
  long collectionCount = YOUNG_GC.getCollectionCount();
  do {
    // array is too big to be eliminated with escape analysis
    byte[] bytes = new byte[1024];
  } while (YOUNG_GC.getCollectionCount() == collectionCount);
}
We now issue the following commands:
hello
hello
hello
ygc
hello
ygc
hello
ygc
print
Each time that we send the
ygc we see a
"Pause Young (Normal)" appear on the console. After the
third
ygc, we also see the deduplication
statistics appear. The print command then shows:

print
lines:
        "hello" - [B@5d099f62
        "hello" - [B@5d099f62
        "hello" - [B@5d099f62
        "hello" - [B@37f8bb67
        "hello" - [B@49c2faae
Thus the first three "hello" strings have been deduplicated,
but not the other two. If we issue another ygc and print:

print
lines:
        "hello" - [B@5d099f62
        "hello" - [B@5d099f62
        "hello" - [B@5d099f62
        "hello" - [B@5d099f62
        "hello" - [B@49c2faae
And with the next
ygc, the last "hello" is deduplicated as well:

print
lines:
        "hello" - [B@5d099f62
        "hello" - [B@5d099f62
        "hello" - [B@5d099f62
        "hello" - [B@5d099f62
        "hello" - [B@5d099f62
It all works how it is supposed to.
-XX:+AlwaysTenure
A rather weird flag in OpenJDK is
-XX:+AlwaysTenure.
It effectively does away with the survivor spaces and promotes
all survivors from young to old. Whilst you would not want
to use this in production, it is fun to turn it on for our
experiment. If we now send the following:
hello
hello
hello
ygc
print
We will see similar output to what we saw with the call to System.gc():
hello
hello
hello
ygc
[154.162s][info][gc] GC(0) Pause Young (Normal)
*snip* GC output and deduplication statistics *snip*
print
lines:
        "hello" - [B@5d099f62
        "hello" - [B@5d099f62
        "hello" - [B@5d099f62
Thank you for your support and for reading this...
|
https://www.javaspecialists.eu/archive/Issue270-Excursions-into-Deduplication.html
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
SYNOPSIS
#include <Inventor/nodes/SoBlinker.h>
Inherits SoSwitch.
Public Member Functions
virtual SoType getTypeId (void) const
SoBlinker (void)
virtual void getBoundingBox (SoGetBoundingBoxAction *action)
virtual void write (SoWriteAction *action)
Static Public Member Functions
static SoType getClassTypeId (void)
static void initClass (void)
Public Attributes
SoSFFloat speed
SoSFBool on
Protected Member Functions
virtual const SoFieldData * getFieldData (void) const
virtual ~SoBlinker ()
virtual void notify (SoNotList *nl)
Static Protected Member Functions
static const SoFieldData ** getFieldDataPtr (void)
Detailed Description
The SoBlinker class is a cycling switch node.
This switch node cycles its children SoBlinker::speed number of times per second. If the node has only one child, it will be cycled on and off. Cycling can be turned off using the SoBlinker::on field, and the node then behaves like a normal SoSwitch node.
FILE FORMAT/DEFAULTS:
Blinker { whichChild -1 speed 1 on TRUE }
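A brief usage sketch (not part of the original reference page; the node choices and the two-cycles-per-second rate are arbitrary):

#include <Inventor/nodes/SoBlinker.h>
#include <Inventor/nodes/SoCone.h>
#include <Inventor/nodes/SoCube.h>
#include <Inventor/nodes/SoSeparator.h>

// Assumes SoDB::init() (or SoQt/SoWin initialization) has already been done.
// The cone and the cube are shown alternately, twice per second.
SoSeparator *
makeBlinkingScene(void)
{
  SoSeparator * root = new SoSeparator;
  root->ref();

  SoBlinker * blinker = new SoBlinker;
  blinker->speed = 2.0f;  // cycles per second
  blinker->on = TRUE;     // FALSE makes it behave like a plain SoSwitch
  blinker->addChild(new SoCone);
  blinker->addChild(new SoCube);

  root->addChild(blinker);
  return root;
}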
Constructor & Destructor Documentation
SoBlinker::SoBlinker (void)Constructor.
SoBlinker::~SoBlinker () [protected], [virtual]Destructor.
Member Function Documentation
SoType SoBlinker::getClassTypeId (void) [static]This static method returns the SoType object associated with objects of this class.
Reimplemented from SoSwitch.
SoType SoBlinker::getTypeId (void) const [virtual]Returns the type identification of an object derived from a class inheriting SoBase.
Reimplemented from SoSwitch.
const SoFieldData ** SoBlinker::getFieldDataPtr (void) [static], [protected]This API member is considered internal to the library, as it is not likely to be of interest to the application programmer.
Reimplemented from SoSwitch.
const SoFieldData * SoBlinker::getFieldData (void) const [protected], [virtual]Returns a pointer to the class-wide field data storage object for this instance. If no fields are present, returns NULL.
Reimplemented from SoSwitch.
void SoBlinker::initClass (void) [static]Sets up initialization for data common to all instances of this class, like submitting necessary information to the Coin type system.
Reimplemented from SoSwitch.
void SoBlinker::getBoundingBox (SoGetBoundingBoxAction *action) [virtual]Action method for the SoGetBoundingBoxAction.
Reimplemented from SoSwitch.
void SoBlinker::write (SoWriteAction *action) [virtual]Action method for SoWriteAction.
Writes out a node object, and any connected nodes, engines etc, if necessary.
Reimplemented from SoSwitch.
void SoBlinker::notify (SoNotList *l) [protected], [virtual]Notifies all auditors for this instance when changes are made.
Reimplemented from SoSwitch.
Member Data Documentation
SoSFFloat SoBlinker::speedNumber of cycles per second.
SoSFBool SoBlinker::onControls whether cycling is on or off.
Author
Generated automatically by Doxygen for Coin from the source code.
|
http://manpages.org/soblinker/3
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
We know from earlier that, in method overriding, the subclass method supersedes the super class method. When a method is overridden in inheritance, the subclass object calls its own method. In doing so, the subclass loses the functionality of the super class method (the subclass is, of course, at liberty to do this). If the subclass would also like to call the super class method, it can do so using the "super" keyword. The "super" keyword is used by the subclass to call the super class version when methods or variables are overridden. "super" can be used with instance variables and methods but not with classes.
1. super with Methods
In the following program, subclass overrides the eat() method of super class and at the same uses both with "super" keyword.
class Bird {
    public void eat() {
        System.out.println("Eats insects.");
    }
}

public class Sparrow extends Bird {
    public void eat() {
        super.eat();
        System.out.println("Eats grains.");
        super.eat(); // again you can call
    }

    public static void main(String args[]) {
        Sparrow s1 = new Sparrow();
        s1.eat();
    }
}
The eat() method exists in both Bird and Sparrow classes. We say, super class method is overridden by subclass. s1.eat() calls its own (Sparrow) method. Sparrow uses "super.eat();" to call Bird's eat() method. The next program illustrates "static" with variables.
Note:
- "super" keyword cannot be used from static methods like main().
- Static methods cannot be overridden (cannot be called with super keyword).
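A minimal sketch of the first note (Base and Child are invented names for this illustration):

class Base {
    int cost = 50;
}

public class Child extends Base {
    public static void main(String args[]) {
        // System.out.println(super.cost);
        // compile-time error: non-static variable super cannot be referenced
        // from a static context
        System.out.println(new Child().cost); // works: go through an instance
    }
}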
2. super with Variables
You have seen earlier "super" with methods. Let us go for with variables.
class Packing {
    int cost = 50;
}

public class TotalCost extends Packing {
    int cost = 100;

    public void estimate() {
        System.out.println("Cost of articles Rs." + cost);
        System.out.println("Packing expenses Rs." + super.cost);
        System.out.println("Total to pay Rs." + (cost + super.cost));
    }

    public static void main(String args[]) {
        TotalCost tc1 = new TotalCost();
        tc1.estimate();
    }
}
In the above code, cost variable of Packing class is overridden by TotalCost. In this case, the subclass object prefers to call its own variable. super.cost calls super class cost variable. super.cost cannot be called from static methods like main() method.
Programming Tip: Do not attempt to use super keyword with classes. It is a compilation error.
"super" and "this"
In java, "super" keyword is used to call super methods and variables (when overridden only, else, not necessary to use) and "this" keyword is used to refer the current object.
8 thoughts on “super Keyword Java”
Sir if i write only cost=100; insteda of int cost=100; then it provide the result 100 if i use super on not in both case why ?
If you omit int, cost refers super class.
static methods are possible to override,taken example main() are override with distinct parameters ,its is possible but only main(String args[]) are directly called by jvm,remaining main() methods are called by programmer itself it acts like a normal methods.
By rule, static methods cannot be overridden. If overridden, they are treated local (not overridden) to the class. For this reason, static methods cannot be called with super keyword from subclass.
Q.Why can I not use “super” variable from a static context, even though “super” refers to the parent class and NOT a class instant, unlike “this”?
My Ans: Because we will not give any guarantees to compiler assigning memory to non static member(Non static members like variable, method will get memory only after creating object its class ).
=> Is my thought is correct? Please let me.
static context is not tied to a particular object. Calling super from static context, the super keyword should call which object data.
when i use super keyword to access instance variable of parent class then i receive this error.non-static variable super cannot be referenced from static context.
suggest me a way to remove it.
super keyword cannot be used from static methods. Perhaps, you are calling it from static main. Check.
|
https://way2java.com/oops-concepts/member-hiding-super-keyword/
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
public abstract class LDGson. This class addresses that issue
for applications that prefer to use Gson for everything rather than calling
JsonSerialization for individual objects.
An application that wishes to use Gson to serialize or deserialize classes from the SDK should
configure its
Gson instance as follows:
import com.launchdarkly.sdk.json.LDGson; Gson gson = new GsonBuilder() .registerTypeAdapterFactory(LDGson.typeAdapters()) // any other GsonBuilder options go here .create();
This causes G Gson's behavior for any other classes.
Note that some of the LaunchDarkly SDK distributions deliberately do not expose Gson as a
dependency, so if you are using Gson in your application you will need to make sure you have
defined your own dependency on it. Referencing
LDGson will cause a runtime
exception if Gson is not in the caller's classpath.
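As an illustrative sketch (LDValue is used here simply as an example of an SDK type that implements JsonSerializable; the class and values are otherwise arbitrary):

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.launchdarkly.sdk.LDValue;
import com.launchdarkly.sdk.json.LDGson;

public class LDGsonExample {
  public static void main(String[] args) {
    Gson gson = new GsonBuilder()
        .registerTypeAdapterFactory(LDGson.typeAdapters())
        .create();

    LDValue value = LDValue.of("hello");
    String json = gson.toJson(value);                    // serialized as a JSON string
    LDValue parsed = gson.fromJson(json, LDValue.class); // round-trips back to LDValue
    System.out.println(value.equals(parsed));            // true
  }
}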
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public static com.google.gson.TypeAdapterFactory typeAdapters()
Returns a TypeAdapterFactory that defines the correct serialization and deserialization behavior for all LaunchDarkly SDK objects that implement JsonSerializable.
import com.launchdarkly.sdk.json.LDGson; Gson gson = new GsonBuilder() .registerTypeAdapterFactory(LDGson.typeAdapters()) // any other GsonBuilder options go here .create();
Returns: a TypeAdapterFactory
|
http://launchdarkly.github.io/java-server-sdk/com/launchdarkly/sdk/json/LDGson.html
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.
On Thu, 03 Nov 2011 21:48:56 +0100, Tom Tromey wrote:
[...]
> namespace N1 {
>   int m() { return 23; }
> };
>
> namespace N2 {
>   int m() { return 23; }
> };
>
> int main()
> {
>   using namespace N1;
>   using namespace N2;
>   return 0;
> }
>
> I think this is valid (g++ accepts it).
>
> What should gdb do if we are stopped in 'main' and the user types 'break m'?
>
> Doing namespace searches is a problem if they yield an ambiguous result
> because either:
>
> 1. There is no canonical name that can be put into the breakpoint for
>    resetting, or
>
> 2. The breakpoint would have to also capture the current block for
>    re-setting, which opens a whole new set of problems.
>
> I understand that the rationale here is for gdb to work like the
> compiler does.

Compiler says:
.C:13:6: error: call of overloaded ‘m()’ is ambiguous
.C:13:6: note: candidates are:
.C:6:7: note: int N2::m()
.C:2:7: note: int N1::m()

and I think GDB should also report the same error. It is questionable what it
should do on re-set if it becomes ambiguous. One can store the available
namespaces as strings with the breakpoint (instead of storing a pointer to the
block - where the block may disappear). I understand it is not feasible to
throw an error if ambiguity happens later on a breakpoint re-set, so a
multi-location breakpoint is probably OK. Which brings a question whether the
multi-location breakpoint should not be placed there already when creating the
breakpoint (instead of the suggested error).

As GDB already ignores `static' for variables in other files and already
ignores even C++ access specifiers, it cannot work exactly like the compiler
anyway.

> I would rather just require the user to type what they mean.

It breaks the expectation that GDB should be able to parse what the source says.

Thanks,
Jan
|
https://sourceware.org/legacy-ml/gdb-patches/2011-11/msg00107.html
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
Created on 2017-09-04 11:39 by scoder, last changed 2017-10-01 08:43 by scoder. This issue is now closed.
The method lookup fast path in _PyType_Lookup() does not apply during type creation, which is highly dominated by the performance of the dict lookups along the mro chain. Pre-calculating the name hash speeds up the creation of an empty class (i.e. "class Test: pass") by around 20%.
Will send a pull request shortly.
Could you please provide benchmarks that you used?
I literally just ran timeit on "class Test: pass", but I'll see if I can provide proper numbers.
Comparing against CPython master as of 122e88a8354e3f75aeaf6211232dac88ac296d54
I rebuilt my CPython to get clean results, and that still gave me almost 15% overall speedup.
Original:
$ ./python -m timeit 'class Test: pass'
20000 loops, best of 5: 9.55 usec per loop
Patched:
$ ./python -m timeit 'class Test: pass'
50000 loops, best of 5: 8.27 usec per loop
I came across this when benchmarking the class creation in Cython, which seemed slow, and after a couple of tweaks, I ended up with 97% of the runtime inside of CPython, so I took a look over the fence.
According to callgrind, the original version spent over 125M instructions inside of _PyType_Lookup() for the timeit run, whereas the patched version only counts about 92M, that's about 25% less.
What names are looked up when you create an empty class?
I'm surprised that this change has measurable performance effect. Aren't name an interned string with precalculated cache?
It's the slot names in "slotdefs". See "update_one_slot()".
The time that is saved is mostly the overhead of calling PyDict_GetItem(). I actually tried PyDict_GetItemWithError() first, which is faster due to the lower error handling overhead, before I noticed that the real problem is the repeated lookups of the same name, which can be optimised further by pre-calculating the hash and calling the low-level lookup function.
I would prefer to use the _Py_IDENTIFIER API rather than using _PyDict_GetItem_KnownHash().
After adding PyErr_Clear() the benefit of this optimization is smaller. Only 8% on my 64-bit computer. And no effect at all on 32-bit. I wonder if it is worth complicating the code for such a small benefit. For non-empty classes the relative benefit is even smaller. Maybe there are other opportunities for optimization?
> I.
There is special internal API in dictobject.c
_PyDict_LoadGlobal(PyDictObject *globals, PyDictObject *builtins, PyObject *key)
Maybe, we can have special API like that
_PyDict_LookupMro(PyObject *mro, PyObject *key);
BTW, method cache in _PyType_Lookup is not good for initializing type.
It always mishit. And all specialized method is cached even if it isn't used anymore.
Stefan, would you benchmark these ideas?
I updated the pull request with a split version of _PyType_Lookup() that bypasses the method cache during slot updates. I also ran the benchmarks with PGO enabled now to get realistic results. The overall gain is around 15%.
Original:
$ ./python -m timeit 'class Test: pass'
20000 loops, best of 5: 7.29 usec per loop
Patched:
$ ./python -m timeit 'class Test: pass'
50000 loops, best of 5: 6.15 usec per loop
Patched with non-trivial bases:
$ ./python -m timeit 'class Test(object): pass'
50000 loops, best of 5: 6.05 usec per loop
$ ./python -m timeit 'class Test(type): pass'
50000 loops, best of 5: 6.08 usec per loop
$ ./python -m timeit 'class Test(int): pass'
50000 loops, best of 5: 9.08 usec per loop
I do not consider the optimisations a code degredation.
There is one semantic change: the new function _PyType_LookupUncached() returns on the first error and might set an exception. I considered that better behaviour than the old function, but it means that if one base class name lookup fails and the next one previously succeeded, it will no longer succeed now. I don't have a strong feeling about this and would change it back if compatibility is considered more valuable. It generally feels wrong to have errors pass silently here, however unlikely they may be in practice.
Please use perf.timeit not timeit for microbenchmarks:
Since I'm getting highly reproducible results on re-runs, I tend to trust these numbers.
BTW, it seems that Yury's dict copy optimisation would also help here. When I use a benchmark scenario with a simple non-empty method/attribute dict (from Cython this time), almost 10% of the creation time is spent copying that dict, which should essentially just be a memcopy() since it doesn't need any resizing at that point.
Confirmed:
$ ./python-patched -m perf timeit --compare-to `pwd`/python -- 'class C: pass'
python: ..................... 11.9 us +- 0.1 us
python-patched: ..................... 10.3 us +- 0.1 us
Mean +- std dev: [python] 11.9 us +- 0.1 us -> [python-patched] 10.3 us +- 0.1 us: 1.15x faster (-13%)
Any more comments on the proposed implementation? 13-15% seem worth it to me.
@Victor, or are you saying "PyId, or no change at all"?
Good work, Stefan! It's an impressive speedup of class creation.
It looks like you have not yet addressed Serhiy's comment
My comment was addressed.
As for using the _Py_IDENTIFIER API, I don't think it is related. The speedup was caused by avoiding few simple checks and function calls. The _Py_IDENTIFIER API is great, but it has a small overhead which is comparable to the small difference caused by Stefan's patch.
No, that one was addressed. I think only Victor's comment is still open, that's why I asked back.
Victor is currently travelling and recovering from jetlag. I'm sure he'll reply within a day or two.
I ran a microbenchmark on the current PR 3279 using:
./python -m perf timeit --inherit=PYTHONPATH 'class C: pass'
Result:
haypo@selma$ ./python -m perf compare_to ref.json patch.json
Mean +- std dev: [ref] 9.71 us +- 0.38 us -> [patch] 8.74 us +- 0.22 us: 1.11x faster (-10%)
I compiled Python using "./configure && make", no LTO nor PGO.
Serhiy on the PR: "This is overgeneralization. Can tp_dict be not exact dict at all? I don't think this is possible. In many places concrete dict API is used with tp_dict. If you want to allow tp_dict be not exact dict, please open a separate issue for this."
Using the following code, A.__dict__ type is dict even if the metaclass creates a different type, probably because type_new() calls PyDict_Copy(orig_dict):
---
class mydict(dict):
def __setitem__(self, name, value):
if name == "__module__":
value = "<mock module>"
super().__setitem__(name, value)
class MetaClass(type):
@classmethod
def __prepare__(mcl, name, bases):
return mydict()
class A(metaclass=MetaClass):
pass
print(A.__module__)
---
On my computer the speed up is 13% without LTO and PGO and around 20% with LTO and PGO. Building with LTO and PGO adds other 20%.
> Since _PyDict_GetItem_KnownHash() may or may not set an exception, we have to check for a live exception after calling it, and that finds the old exception of the last attribute lookup and decides that its own lookup failed.
Hmm, with PyDict_GetItem() we don't falsely detect a lookup failing with a live exception set. Is it correct to call _PyType_Lookup() with an exception set? Perhaps we should save a current exception before calling find_name_in_mro() and restore it after. Or raise SystemError if an exception set. Or just add assert(!PyErr_Occurred()) at the begin of find_name_in_mro(). I don't know what is more correct.
> Is it correct to call _PyType_Lookup() with an exception set?
The general rule of thumb is that it's not safe to call any user code with a live exception set, and lookups can call into user code.
I quickly looked through all occurrences (there aren't that many actually), and they all seem be be called from a context where a live exception would be wrong.
> Perhaps we should save a current exception before calling find_name_in_mro() and restore it after.
I thought about that, too, but it feels wrong. This just shouldn't be necessary. It's ok to save away exceptions in exception related code, but general purpose code shouldn't have to assume that it gets called with a live exception.
> Or just add assert(!PyErr_Occurred()) at the begin of find_name_in_mro().
That would catch the error at hand. The only concern is backwards compatibility. But it's easy to argue that it's the foreign code that is broken in this case (as show-cased by cElementTree) and clearly needs fixing if this occurs. An assert() seems like a reasonable way to catch this kind of bug, at least in a debug build.
One.
Still ready for merging :)
I'm going to merge this over the next 24 hours if there are no comments from other core developers.
New changeset 2102c789035ccacbac4362589402ac68baa2cd29 by Serhiy Storchaka (scoder) in branch 'master':
bpo-31336: Speed up type creation. (#3279)
Thank you for your contribution Stefan! I had doubts about your initial (much simpler) changes, but changed my mind.
Thanks for the reviews, and thank you for merging it, Serhiy.
|
https://bugs.python.org/issue31336
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
I previously wrote an article about using the open source JQuery plugin 'Full Calendar' to create a diary in .NET MVC. That article covered the basics of using the plugin, and demonstrated the usual front and backend functionality you need in an appointment diary system. This included creating an appointment/event, editing it, showing different views, etc. The content in the article remains valid and useful. This article improves on the last by showing how to use some of the new features offered to give multi-user, multi-resource calendar/diary and recurring/repeat events/appointment features to your diary/appointment/calendar app. I have attached an MVC project to the article that demonstrates the concepts discussed here - download it to see the solution in action.
Here are the images that show what we are going to build:
style="width: 640px; height: 384px" alt="Image 2" data-src="/KB/scripting/1117424/FullCal_2.png" class="lazyload" data-sizes="auto" data->
width="440px" alt="Image 3" data-src="/KB/scripting/1117424/Repeats.png" class="lazyload" data-sizes="auto" data->
When we manage our own appointments and diary entries, we normally do it only for ourselves, and that's fine. When we start to organise our lives and times around other people however, we need to consider their schedule as well as ours. This is especially important in organisations that need to manage the time of multiple individuals (think doctors, mechanics, trainers), and also those who need to manage resources and equipment (think meeting rooms, portable/shared office equipment). Full Calendar version 2 introduced a new add-on for displaying events and resources using a grouped view known as Scheduler.
This add-on operates under a multi-license, allowing users to use it free of charge under GPL, get a limited license under creative commons, and also via a commercial version. Frankly, the commercial license charge is very reasonable for the value given. I have tried and used most of the commercial Calendar systems available, and this is now my go to choice every time. It is lightweight, fast and I find it more flexible than anything else on the market at this time.
This article will build on the previous article and demonstrate the basics that you need to know to provide a very functional multi-user, multi-resource calendar/diary solution to your users.
In a personal diary, like Outlook diary or Google Calendar, by default, we see our own individual schedules. In a multi user environment, we need to see, and be able to manage, many users' schedules at once. For a personal diary, there is only one place we can be, but in a multi-user situation, users may be viewed as being grouped in different ways. In FullCalendar, these groupings are referred to as 'Resources'.
Some examples of how we might group users together:
In addition to being grouped by some overall theme, we might also find that users share a set of things. We might want to view these either individually, or as a group. Here are some examples of things users might share:
In FullCalendar, the items that appear as groupings are called 'Resources'. The image below shows how they can appear in both horizontal and vertical view.
style="width: 640px; height: 458px" alt="Image 4" data-src="/KB/scripting/1117424/FullCal_3a.png" class="lazyload" data-sizes="auto" data->
style="width: 637px; height: 503px" alt="Image 5" data-src="/KB/scripting/1117424/FullCal_4a.png" class="lazyload" data-sizes="auto" data->
If we take things a step further, we could consider that a user or item they share may be involved in some kind of relationship, like for example certain meeting rooms are constrained by being in certain offices, or users or their equipment may be available only in certain locations. Here are some examples of how these things might be grouped:
In order to achieve the flexibility described above, where we can have 'resources within resources', the approach FullCalendar takes is to allow the creation of a parent/child relationship between resource groupings. Note that we are not restricted to having the same items or relationships from one resource node to the next - if we want to have a top level resource node with no children, the next with three, the next with five, the next with eight, and each of these with multiple child nodes as well, it's all possible.
width="416px" alt="Image 6" data-src="/KB/scripting/1117424/ParentChildView.png" class="lazyload" data-sizes="auto" data->
The key to working with FullCalendar in this way is manipulation of these resources, and how they get grouped together. Let's look now at how it can be done.
Generally, I would expect to drive a diary system from a database of some sort. In order to keep this article and its demo code database agnostic, I decided to put together a simple test harness. This relies on creating a series of classes, creating/populating them with sample data, and using this to simulate a database environment.
The harness consists of a number of classes which represent different things I want to demonstrate in the article. These are lists of users, equipment, offices, what users work out of which offices, and schedule/diary events. The harness defines the classes, and then there is an initialisation section that seeds the harness with test data. When the user (that's you!) runs the demo code, the application checks if a serialised (XML) representation of the harness exists in the user/temp folder, and if it does, it loads that, if not, it initialises itself and creates the test data. I'm only going to touch on the highlights of the setup code in this article, as the main focus is on how to use the resource functionality to enhance the Diary. If you want to go through the setup and other code in more detail, please download the code!
Setting up the TestHarness...
public class TestHarness
{
public List<BranchOfficeVM> Branches { get; set; }
public List<ClientVM> Clients { get; set; }
public List<EquipmentVM> Equipment { get; set; }
public List<EmployeeVM> Employees { get; set; }
public List<ScheduleEventVM> ScheduleEvents { get; set; }
public List<ScheduleEventVM> UnassignedEvents { get; set; }
// constructor
public TestHarness()
{
Branches = new List<BranchOfficeVM>();
Equipment = new List<EquipmentVM>();
Employees = new List<EmployeeVM>();
ScheduleEvents = new List<ScheduleEventVM>();
UnassignedEvents = new List<ScheduleEventVM>();
Clients = new List<ClientVM>();
}
... <etc>
Initialising lists to take data...
// initial setup if none already exists to load
public void Setup()
{
initClients();
initUnAssignedTasks();
initBranches();
initEmployees();
linkEmployeesToBranches();
initEquipment();
initEvents();
}
In this one, we create a List of branch offices to play with.
public void initBranches()
{
var b1 = new BranchOfficeVM();
b1.BranchOfficeID = Guid.NewGuid().ToString();
b1.Name = "New York";
Branches.Add(b1);
var b2 = new BranchOfficeVM();
b2.BranchOfficeID = Guid.NewGuid().ToString();
b2.Name = "London";
Branches.Add(b2);
}
We create some test employees and clients...
public void initEmployees()
{
var v1 = new EmployeeVM();
v1.EmployeeID = Guid.NewGuid().ToString();
v1.FirstName = "Paul";
v1.LastName = "Smith";
Employees.Add(v1);
var v2 = new EmployeeVM();
v2.EmployeeID = Guid.NewGuid().ToString();
v2.FirstName = "Max";
v2.LastName = "Brophy";
Employees.Add(v2);
var v3 = new EmployeeVM();
v3.EmployeeID = Guid.NewGuid().ToString();
v3.FirstName = "Rajeet";
v3.LastName = "Kumar";
Employees.Add(v3);
... <etc>
public void initClients()
{
Clients.Add(new ClientVM("Big Company A", "New York"));
Clients.Add(new ClientVM("Small Company X", "London"));
Clients.Add(new ClientVM("Big Company B", "London"));
Clients.Add(new ClientVM("Big Company C", "Mumbai"));
Clients.Add(new ClientVM("Small Company Y", "Berlin"));
Clients.Add(new ClientVM("Small Company Z", "Dublin"));
}
Create relationships between the various bits of test data...
public void linkEmployeesToBranches()
{
var EmployeeUtil = new EmployeeVM();
Branches[0].Employees.Add(EmployeeUtil.EmployeeByName(Employees, "Paul"));
Branches[0].Employees.Add(EmployeeUtil.EmployeeByName(Employees, "Max"));
Branches[0].Employees.Add(EmployeeUtil.EmployeeByName(Employees, "Rajeet"));
Branches[1].Employees.Add(EmployeeUtil.EmployeeByName(Employees, "Philippe"));
Branches[1].Employees.Add(EmployeeUtil.EmployeeByName(Employees, "Samara"));
... <etc>
Having finished with the supporting data, we then put in some sample data for diary events themselves.
(As an aside, to reiterate again, this article focuses on the resources and repeat functionality of the diary implementation. A complete explanation of the important fields, usage functions, etc. are all discussed in my previous introductory article to Full Calendar. If you are new to creating a diary using FullCalendar, you should start with that article, and then read this one!)
public void initEvents()
{
var utilBranch = new BranchOfficeVM();
var EmployeeUtil = new EmployeeVM();
var s1 = new ScheduleEventVM();
s1.BranchOfficeID = utilBranch.GetBranchByName(Branches, "New York").BranchOfficeID;
var c1 = utils.GetClientByName(Clients, "Big Company A");
s1.clientId = c1.ClientID;
s1.clientName = c1.Name;
s1.clientAddress = c1.Address;
s1.title = "Event 2 - Big Company A";
s1.statusString = Constants.statusBooked;
var v1 = EmployeeUtil.EmployeeByName(Employees, "Paul");
s1.EmployeeId = v1.EmployeeID;
s1.EmployeeName = v1.FullName;
s1.DateTimeScheduled = new DateTime(DateTime.Now.Year, DateTime.Now.Month,
DateTime.Now.Day, 11, 15, 0);
s1.durationMinutes = 120;
s1.duration = s1.durationMinutes.ToString();
s1.DateTimeScheduledEnd = s1.DateTimeScheduled.AddMinutes(s1.durationMinutes);
ScheduleEvents.Add(s1);
... <etc>
We also create some sample 'unscheduled/unassigned diary events'. These will be used to demonstrate some functionality particular to the resource/scheduler add-on that is different to the main diary control. The main difference between a standard event and an unassigned one is that the unassigned event has not yet been given a slot (and resource) in the diary.
public void initUnAssignedTasks()
{
var uaItem1 = new ScheduleEventVM();
var cli1 = utils.GetClientByName(Clients, "Big Company A");
uaItem1.clientId = cli1.ClientID;
uaItem1.clientName = cli1.Name;
uaItem1.clientAddress = cli1.Address;
uaItem1.title = cli1.Name + " - " + cli1.Address;
uaItem1.durationMinutes = 30;
uaItem1.duration = uaItem1.durationMinutes.ToString();
uaItem1.DateTimeScheduled = DateTime.Now.AddDays(14);
uaItem1.DateTimeScheduledEnd = uaItem1.DateTimeScheduled.AddMinutes
(uaItem1.durationMinutes);
uaItem1.notes = "Test notes 1";
... <etc>
To setup the plugin, and its resources, we need to initialise it when the browser loads. A full description of the general options and properties are discussed in my other article. So I will show the code for JavaScript setup here, and discuss how it pertains to resources. Please refer back to the other article if you need details on the overall diary setup.
First, the start of the general setup...
// Main code to initialise/setup and show the calendar itself.
function ShowCalendar() {
$('#calendar').fullCalendar({
schedulerLicenseKey: 'GPL-My-Project-Is-Open-Source', // change depending on license type
theme: false,
resourceAreaWidth: 230,
groupByDateAndResource: false,
editable: true,
aspectRatio: 1.8,
scrollTime: '08:00',
timezone: 'local',
droppable: true,
drop: function
...<snip> ...
Then the important part for the resources....
// this is where the resource loading for laying out the page is triggered from
resourceLabelText: "@Model.ResourceTitle", // set server-side
resources:
{
url: '/Home/GetResources',
data: {resourceView : "@Model.DefaultView"},
type: 'POST',
...<snip> ...
In this case, I have told it to get its feed for resources from a server-side ajax controller '/Home/GetResources'.
There are multiple options for setting up the resources, you can use inline arrays, ajax/json feed or functions.
Here is an array example:
$('#calendar').fullCalendar({
resources: [
{
id: 'a',
title: 'Room A'
},
{
id: 'b',
title: 'Room B'
}
]
});
Full Calendar displays event objects. Here is a basic construction that adds a single basic event:
events: [
{
id: '1',
title: 'Meeting',
start: '2015-02-14'
}
Let's say we have the following structure of resources:
resources:
[ {
id: 'a',
title: 'Room A'
} ]
The unique ID of the resource is 'a', so to link that to our event, and have it display in the appropriate resource column/row, we simply tell the event it is using that resource ID:
$('#calendar').fullCalendar({
resources: [
{
id: 'a',
title: 'Room A'
}
],
events: [
{
id: '1',
resourceId: 'a',
title: 'Meeting',
start: '2015-02-14'
}
]
});
We can also associate an event with multiple different resources - in this case, we separate the IDs using a comma, and use the plural 'resourceIds' to define the relationship, versus the singular used for one link.
$('#calendar').fullCalendar({
resources: [
{
id: 'a',
title: 'Room A'
},
{
id: 'b',
title: 'Room B'
}
],
events: [
{
id: '1',
resourceIds: ['a', 'b'],
title: 'Meeting',
start: '2015-02-14'
}
]
});
Now, here's something to get your head around ... a resourceId is a moving target. What I mean by this is that depending on what kind of view you are using, the ID of it means something different. For example, if the current view is ‘timeline: equipment’, then the ResourceID refers to the EquipmentID. If the current view is ‘timeline: Employees’, then the ResourceID refers to the EmployeeID. The reason for this is that the diary grid for timeline view shows date/time on top (columns), and the rows are reserved for the main ‘diary event’ in question, being the resource having the focus (employee, or equipment, etc.).
Looking at code is one thing, but a picture tells a better story! ... here is a screenshot that shows the popup form the user can use to switch between the different resource views.
style="width: 584px; height: 255px" alt="Image 7" data-src="/KB/scripting/1117424/FIlterView.png" class="lazyload" data-sizes="auto" data->
In the OnClick/close modal event of the selector form, JavaScript code takes the value of the resource type the user wants to see, and posts this to the server calling 'setView'. I decided to use a post versus a get as I didn't want to make my URL ugly! The SetView controller sets the new default view in the session data, then redirects back to the index controller and the correct view is then rendered to the user.
$('#btnUpdateView').click(function () {
var selectedView = $('input[name="rdoResourceView"]:checked').val();
post('/Home/setView', { ResourceView: selectedView }, 'setView');
});
The starting point for the diary on the server-side is the home/index controller. Note at the top of the controller the TestHarness class is declared. The Index takes one parameter ‘ResourceView’. This tells the controller what view to return - in our example, Branch/Employee view, or equipment view. By default, it returns Branch/Employee view. When the index loads (as all other controllers), it loads up the TestHarness - for bringing this to production, you will connect here to your database and load data as appropriate. In this example, if the TestHarness XML data does not exist, it creates it by calling the TestHarness.Setup() method.
The Index.cshtml page contains a model ‘FullCal.ViewModels.Resource’. This can carry any information you want. This example uses it to carry the ‘Default View’ to be shown to the user when the page reloads, and the title string for the resource column. In the controller, once we decide/set the view to use, we send back the ‘ResourceView’ as a string in the view model. The last thing to note about ‘ResourceView’ is that it is stored as a ‘Session value’. See controller method ‘setView’ (below) for the implementation.
public class HomeController : Controller
{
TestHarness testHarness; // used to store temp data repository
public ActionResult Index(string ResourceView)
{
// create test harness if it does not exist
// this harness represents interaction you may replace with database calls.
if (Session["ResourceView"]!=null) // refers to whatever default view you may have,
// for example Offices/Users/Shared Equipment/etc.
// you can create any amount and combination
// of different view types you need.
ResourceView = Session["ResourceView"].ToString();
if (!System.IO.File.Exists(utils.GetTestFileLocation()))
{
testHarness = new TestHarness();
testHarness.Setup();
utils.Save(utils.GetTestFileLocation(), testHarness);
}
if (ResourceView == null)
ResourceView = "";
ResourceView = ResourceView.ToLower().Trim();
var DiaryResourceView = new Resource();
if (ResourceView == "" || ResourceView == "employees") // set the default
{
DiaryResourceView.DefaultView = "employees";
DiaryResourceView.ResourceTitle = "Branch offices";
}
else if (ResourceView == "equipment")
{
DiaryResourceView.DefaultView = ResourceView;
DiaryResourceView.ResourceTitle = "Equipment list";
}
return View(DiaryResourceView);
}
// this method, called from the index page, sets a session variable for
// the user that gets looped back to the index page to tell it what view
// to display. branch/employee or Equipment.
public ActionResult setView(string ResourceView)
{
Session["ResourceView"] = ResourceView;
return RedirectToAction("Index");
}
When we have a single resource view with no child resources, and we click on a cell to create a new event, it is clear we are clicking on a cell that represents a single resource....
style="width: 440px; height: 329px" alt="Image 8" data-src="/KB/scripting/1117424/TargetResource1.png" class="lazyload" data-sizes="auto" data->
When we have a parent/child relationship however, it's a different story, we have to keep track of where we are, and implement rules depending on where the user clicks...
width="416px" alt="Image 9" data-src="/KB/scripting/1117424/TargetResource2.png" class="lazyload" data-sizes="auto" data->
To help manage this situation, we keep an in-memory list of our resources and their associations, and then use the OnClick/Select events of FullCalendar to perform a lookup against the cell selected and decide if we need to implement a rule.
// use this function to get a local list of employees/branches/equipment etc
// and populate arrays as appropriate for checking business rules etc.
function GetLocationsAndEmployees() {
$.ajax({
url: '/home/GetSetupInfo',
cache: false,
success: function (resultData) {
ClearLists();
EmployeeList = resultData.Employees.slice(0);
BranchList = resultData.Branches.slice(0);
EquipmentList = resultData.Equipment.slice(0);
ClientList = resultData.Clients.slice(0);
}
});
In this example, we don't allow users to click on "office" rows, so we raise an alert if the user makes a mistake...
style="width: 633px; height: 291px" alt="Image 10" data-src="/KB/scripting/1117424/TargetResource3.png" class="lazyload" data-sizes="auto" data->
var employeeResource =
EmployeeList.find( // if the row clicked on is NOT in the known array of employeeID,
// then drop out (and alert...)
function (employee)
{ return employee.EmployeeID == resourceObj.id; }
)
Having the capability to be able to drag/drop events onto a diary is useful. However, we need to be able to tell the diary on the drop event, something about the event being dropped. For this, we attach information to the object being dropped in a particular manner.
To identify items to be dropped, we mark them with a class 'draggable'. To attach data to them, we call a function that iterates through everything marked with this class, and assign data as follows:
// set up for drag/drop of unassigned tasks into scheduler
// *example only - if using a large data feed from a table*
function InitDragDrop() {
$('.draggable').each(function () {
// create an Event Object
// ref: ()
// it doesn't need to have a start or end
var table = $('#UnScheduledEvents').DataTable();
var eventObject = {
id: $(table.row(this).data()[0]).selector,
clientId: $(table.row(this).data()[1]).selector,
start: $(table.row(this).data()[2]).selector,
end: $(table.row(this).data()[3]).selector,
title: $(table.row(this).data()[4]).selector,
duration: $(table.row(this).data()[5]).selector,
notes: $(table.row(this).data()[6]).selector,
color: 'tomato'
}
// gotcha: MUST be named "event", for *external dropped objects* and
// some rules:
$(this).data('event', eventObject);
// make the event draggable using jQuery UI
$(this).draggable({
activeClass: "ui-state-hover",
hoverClass: "ui-state-active",
zIndex: 999,
revert: true, // will cause the event to go back to its
revertDuration: 0 // original position after the drag
});
});
};
When the item is dropped, it is hooked by the FullCalendar 'eventReceive' method. In our example, we ask the user to confirm before proceeding:
eventReceive: function (event) {
var confirmDlg = confirm('Are you sure you wish to assign this event?');
if (confirmDlg == true) {
var eventDrag = {
title: event.title,
start: new Date(event.start),
resourceId: event.resourceId,
clientId: null,
duration: 30,
equipmentId: null,
BranchID: null,
statusString: "",
notes: "",
}
UpdateEventMove(eventDrag, null);
}
Now, here's a minor gotcha to note.... you will recall earlier in the article we discussed the 'resourceId' property of an event object being a 'moving target'... well, here it is in action. Depending on the view the user is looking at (say employee or equipment), the 'resourceId' value in the associated event will refer *either* to the ID of an employee OR a piece of equipment. Due to this, here we examine the view type before proceeding and sending the data to the server. Of course, this is my example implementation, and there are many ways to work the logic - the point is that you need to take it into consideration.
function UpdateEventMove(event, view) {
// determine the view and from this set the correct EmployeeID or ResourceID
// before sending down to server
if (ResourceView == 'employees')
event.employeeId = event.resourceId;
else {
event.employeeId = $('#newcboEmployees').val();
}
var dataRow = {
'Event': event
}
$.ajax({
type: 'POST',
url: "/Home/PushEvent",
dataType: "json",
contentType: "application/json",
data: JSON.stringify(dataRow)
});
}
The final thing I want to demonstrate is how we can put repeat or recurring events into our solution. I have done this using a combination of two very useful open source libraries, one that operates browser-side, one server-side. Before we look at the code and components we use, it would be good to understand how repeats work and how we can represent recurring events in our solution.
A CRON job (from chronological) is well known in the UNIX operating system as a command to tell the system to execute a job at a particular time. CRON has a syntax that, when formulated, both a computer and a human can read: it describes the exact date/time a job should be triggered and is very flexible. A CRON string is not restricted to describing a single date and time (example: 1st Jan 2016), it can also be used to describe recurring time patterns (example: the third of each month at 9am). There are a few different implementations of the CRON syntax, and you can extend it yourself if you need to. For example, if a basic CRON descriptor only allowed for repeat events, you might decide to add limitations of a 'between X and Y date' to your business logic (i.e. 'The third of each month at 9am, but only if that's a Tuesday and between the 1st of June 2016 and the 30th of August 2016').
Standard CRON consists of five fields, each separated by a space:
minute hour day-of-month month-of-year day-of-week
Each field can contain one or more characters that describe the contents of the field.
Here are some examples showing CRON in action:
* * * * *       Each minute
45 17 7 6 *     Every year, on June 7th at 17:45
* 0-11 * * *    Each minute before midday
0 0 * * *       Daily at midnight
0 0 * * 3       Each Wednesday at midnight
You can find out more detailed information on CRON here (where some examples came from) and here.
Whenever possible, I try not to reinvent the wheel. I came across the really useful JQuery-Cron builder from Shawn Chin a few years back and have used it in multiple projects since very successfully. Instead of forcing users to enter cryptic expressions to specify a cron expression, this JQuery plugin allows users to select recurring times from an easy to use GUI. It is designed as a series of drop-boxes that the users chooses values from. Depending on the initial selections, the interface changes to offer appropriate follow-on values. Once the user has set the repeat/recurring time they require, you can call a function that gives you back the CRON expression for the visual values the user selected.
Using the Cron builder is the usual JQuery style. We declare a div, then call the plugin against it:
div
<div id="repeatCRON"></div>
$('#repeatCRON').cron();
Here are some examples of it rendered in the browser:
style="width: 440px; height: 370px" alt="Image 11" data-src="/KB/scripting/1117424/CronExamples.png" class="lazyload" data-sizes="auto" data->
When we save the diary event, we query the CronBuilder plugin and get the CRON expression - once we have this, we can then save it as a string into a field in our database... in my case, I named this field 'Repeat'.
$('#submitButton').on('click', function (e) {
e.preventDefault();
SelectedEvent.title = $('#title').val();
SelectedEvent.duration = $('#duration').val();
SelectedEvent.equipmentId = $('#cboEquipment').val();
SelectedEvent.branchId = $('#branch').val();
SelectedEvent.clientId = $('#cboClient').val();
SelectedEvent.notes = $('#notes').val();
SelectedEvent.resourceId = $('#cboEmployees').val();
SelectedEvent.statusString = $('#cboStatus').val();
SelectedEvent.repeat = $('#repeatCRONEdit').cron("value");
UpdateEventMove(SelectedEvent, null);
doSubmit();
});
We can also do the reverse - take a CRON string, and pass it into the JQuery builder, and it will display the UI that represents the CRON expression.
if (SelectedEvent.repeat != null)
$('#repeatCRONEdit').cron("value", SelectedEvent.repeat);
else
$('#repeatCRONEdit').cron("value", "* * * * *");
Ok, so we have the building and storage of the repeat/recurring event information and the UI taken care of - now let's look at what we can do to decide WHEN/IF to display these recurring events in our diary.
In UNIX, a CronTab is a file that contains a list of CRON expressions, and a command for the system to execute once that CRON time gets triggered. From a Windows perspective, the equivalent is the Windows task scheduler service.
NCrontab is a library written in C# 6.0 that provides parsing and formatting of crontab expressions, plus an algorithm for calculating the occurrences of time described by a schedule:
This library does not provide any scheduler and is not a scheduling facility like cron from Unix platforms. What it provides is parsing, formatting and an algorithm to produce occurrences of time based on a given schedule expressed in the crontab format. (src: NCrontab)
In this example project, I am using an NCrontab library method to examine a stored CRON expression string against the date range that the user has selected in FullCalendar, and to determine if any of my stored repeat values occur within that date range.
NCrontab
Let's look at how the code works for this:
public JsonResult GetScheduleEvents(string start, string end, string resourceView)
.. <etc>
repeatEvents = testHarness.ScheduleEvents.Where(s => (s.repeat != null));
if (repeatEvents!=null)
foreach (var rptEvnt in repeatEvents)
{
var schedule = CrontabSchedule.Parse(rptEvnt.repeat);
var nextSchdule = schedule.GetNextOccurrences(Start, End);
foreach (var startDate in nextSchdule)
{
ScheduleEvent itm = new ScheduleEvent();
itm.id = rptEvnt.EventID;
if (rptEvnt.title.Trim() == "")
itm.title = rptEvnt.clientName;
else itm.title = rptEvnt.title;
itm.start = startDate.ToString("s");
itm.end = startDate.AddMinutes(30).ToString("s");
itm.duration = rptEvnt.duration.ToString();
itm.notes = rptEvnt.notes;
itm.statusId = rptEvnt.statusId;
itm.statusString = rptEvnt.statusString;
itm.allDay = false;
itm.EmployeeId = rptEvnt.EmployeeId;
itm.clientId = rptEvnt.clientId;
itm.clientName = rptEvnt.clientName;
itm.equipmentId = rptEvnt.equipmentID;
itm.EmployeeName = rptEvnt.EmployeeName;
itm.repeat = rptEvnt.repeat;
itm.color = rptEvnt.statusString;
if (resourceView == "employees")
itm.resourceId = rptEvnt.EmployeeId;
else itm.resourceId = rptEvnt.equipmentID;
EventItems.Add(itm);
}
}
The final thing to note in the above code are the last few lines - again, we go back to our 'moving target' issue. Depending on the view that the user has selected, we need to set the correct resourceId value so it will render correctly once it is returned to the browser.
That pretty much wraps up the article. If you need to implement a fully loaded diary/calendar/appointment solution that incorporates multi resources in a very powerful way, you should strongly consider the combination of FullCalendar, JQueryCron and NCronTab. I have attached a working example with the article that shows all of the functionality we have discussed working - please download and play with it. Please note, this is not a standalone project - it demonstrates specific functionality. You should use it in conjunction with my other article explaining FullCalendar (and its accompanying code) to implement your own particular working solution.
Finally, as always, please consider voting for this article if you liked it!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Congratulations !!
This project is really useful, Adding multi user and resource capabilities to Full Calendar in .Net MVC, but I would need help to replace the TESTHARNESS with my database.
For example, I have a "Holidays" table and would need it to be displayed for all "Resources" users, but I do not get it in the annual view.
You could help me replace Test Harness with my database that contains the events and show them in the annual view.
Thank you very much for your help.
Greetings.
|
https://www.codeproject.com/Articles/1117424/Multi-user-Resource-Web-Diary-in-Csharp-MVC-with-R?msg=5586105
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
3D Volume Plots in Python
How to make 3D volume plots in Python with Plotly.
A volume plot with go.Volume shows several partially transparent isosurfaces for volume rendering. The API of go.Volume is close to that of go.Isosurface. However, whereas isosurface plots show all surfaces with the same opacity, tweaking the opacityscale parameter of go.Volume results in a depth effect and better volume rendering.
Simple volume plot with go.Volume¶
In the example below, note that the default colormap is different depending on whether isomin and isomax have the same sign or not.
import plotly.graph_objects as go
import numpy as np

X, Y, Z = np.mgrid[-8:8:40j, -8:8:40j, -8:8:40j]
values = np.sin(X*Y*Z) / (X*Y*Z)

fig = go.Figure(data=go.Volume(
    x=X.flatten(),
    y=Y.flatten(),
    z=Z.flatten(),
    value=values.flatten(),
    isomin=0.1,
    isomax=0.8,
    opacity=0.1,       # needs to be small to see through all surfaces
    surface_count=17,  # needs to be a large number for good volume rendering
))
fig.show()
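As a variation, the sketch below passes an explicit opacityscale to make mid-range values nearly transparent while keeping the extremes opaque (the scale values here are illustrative, not taken from the original page):

import plotly.graph_objects as go
import numpy as np

X, Y, Z = np.mgrid[-8:8:40j, -8:8:40j, -8:8:40j]
values = np.sin(X*Y*Z) / (X*Y*Z)

fig = go.Figure(data=go.Volume(
    x=X.flatten(),
    y=Y.flatten(),
    z=Z.flatten(),
    value=values.flatten(),
    isomin=0.1,
    isomax=0.8,
    # mostly transparent in the middle of the value range, opaque at the extremes
    opacityscale=[[0, 1], [0.5, 0.05], [1, 1]],
    surface_count=17,
))
fig.show()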
|
https://plotly.com/python/3d-volume-plots/
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
Arduino RTC Tutorial: Using DS1307 RTC with Arduino
Do you want to maintain hours, minutes, and seconds, as well as day, month, and year information for your Arduino project? Well then, using an RTC (Real-Time Clock) will be for you!
Through this blog, you will learn how to use the DS1307 RTC module with your Arduino in a few steps!
Today’s blog will cover:
- What is an RTC (Real Time Clock) and Why do we need RTC for Arduino?
- What is DS1307 RTC?
- Grove RTC
- Tutorial: Using DS1307 RTC with Arduino
- RTC Arduino Project Ideas.
What is a DS1307 RTC?
- Also known as Real-Time Clock (RTC), this RTC module is based on the clock chip DS1307 that provides you with seconds, minutes, hours, day, date, month, and year information for your projects.
- However, you may be wondering, why do you need a separate module when your Arduino already has a built-in timekeeper?
- The answer is because as the RTC runs on a lithium battery, you can continue to keep track of the time even if you need to reprogram your Arduino or disconnect it from the main power.
- We choose to use the RTC based on DS1307 as it is low cost and also very energy efficient. It can run for years on a very small coin cell!
Grove – RTC ($6.90)
- This is Seeed very own RTC based on the clock chip DS1307 and supports I2C communication!
- It uses a Lithium cell battery (CR1225). The clock/calendar provides seconds, minutes, hours, day, date, month, and year information where the end of the month date is automatically adjusted for months with fewer than 31 days, including corrections for leap year valid up to 2100.
- The clock operates in either the 24-hour or 12-hour format with AM/PM indicator.
- It is 56-Byte, Battery-Backed and has Nonvolatile (NV)RAM for data storage.
- It works on 5V DC supply and able to consume less than 500nA in Battery-Backup Mode with Oscillator Running.
- It is also programmable with a Square-Wave output signal and has an automatic power-fail detect and switch circuitry.
We also offer a similar product: Grove – High Precision RTC. Compared to the Grove-RTC, this module can provide a more accurate result and provide a programmable clock output for peripheral devices as well as minute and half minute interrupt.
Without further ado let us jump right into the tutorial on how to use the DS1307 RTC with Arduino
Tutorial: Using DS1307 RTC with Arduino
Through this tutorial, you will learn how to use the DS1307 RTC with your Arduino in a few simple steps.
What do you need?
- Seeeduino V4.2 (Arduino UNO Compatible Board)
- Grove – RTC
- Base Shield V2 (For easy connection, optional)
Step by step instructions using the RTC with your Arduino
Step 1: Connecting the Hardware
- Connect Grove-RTC to port I2C of Grove-Base Shield.
- If you do not have a Grove- Base Shield, you can directly connect the Grove-RTC to the Arduino Board by following the table as shown below:
- Plug Grove – Base Shield into Seeeduino.
- Connect Seeeduino to PC via a USB cable.
- Do note that for a robust performance, it is highly recommended to put a 3-Volt CR1225 Lithium cell in the battery-holder. If you only use the primary power to run the module, the module may not work normally as the crystal may not oscillate.
- Your connection should look like this currently:
Step 2: Using Software
- Download the RTC library and install.
- Do not know how to install the library? Refer to our guide on How To Install Library!
- Create a new Arduino sketch and paste the codes below to it or open the code directly by the path: File -> Example ->RTC->SetTimeAndDisplay
#include <Wire.h>
#include "DS1307.h"

DS1307 clock; // define an object of the DS1307 class

void setup()
{
    Serial.begin(9600);
    clock.begin();
    clock.fillByYMD(2013, 1, 19); // Jan 19, 2013
    clock.fillByHMS(15, 28, 30);  // 15:28:30
    clock.fillDayOfWeek(SAT);     // Saturday
    clock.setTime();              // write time to the RTC chip
}

void loop()
{
    printTime();
}

/* Function: Display time on the serial monitor.
   NOTE: the time/date printing below reconstructs the garbled original
   listing using the library's clock fields. */
void printTime()
{
    clock.getTime();
    Serial.print(clock.hour, DEC);
    Serial.print(":");
    Serial.print(clock.minute, DEC);
    Serial.print(":");
    Serial.print(clock.second, DEC);
    Serial.print("  ");
    Serial.print(clock.month, DEC);
    Serial.print("/");
    Serial.print(clock.dayOfMonth, DEC);
    Serial.print("/");
    Serial.print(clock.year + 2000, DEC);
    Serial.print(" *");
    switch (clock.dayOfWeek) // Friendly printout of the weekday
    {
        case MON: Serial.print("MON"); break;
        case TUE: Serial.print("TUE"); break;
        case WED: Serial.print("WED"); break;
        case THU: Serial.print("THU"); break;
        case FRI: Serial.print("FRI"); break;
        case SAT: Serial.print("SAT"); break;
        case SUN: Serial.print("SUN"); break;
    }
    Serial.println(" ");
}
- Set the time. Change function arguments to current date/time. Please pay attention to arguments’ format.
clock.fillByYMD(2013, 1, 19); // Jan 19, 2013
clock.fillByHMS(15, 28, 30);  // 15:28:30
clock.fillDayOfWeek(SAT);     // Saturday
- Upload the code and open the serial monitor to receive the sensor’s data
That’s all on how to use the DS1307 with the Arduino! Want to do more with RTC with Arduino? Here are some RTC Arduino project ideas to get you started!
RTC Arduino Project Ideas
RTC Arduino Real-Time Garden Watering System
Tired of your plants dying because you forgot to water them or just lazy to water your plants? Why not try this automated plant watering project to save your plants today with the a RTC!
What do you need?
- Grove – RTC
- Arduino Nano v3 / Seeeduino Nano
- Water level switch
- 12V DC Water Pump
- Grove – Buzzer
- Grove – Relay
- 20 Litre Water Canister
- Weatherproof Electric Box
- 12V Power Pack
- Nano Terminal Adapter
- Arduino IDE Software
Interested? You can check out the full tutorial by Maximilian Dullo on Arduino Project Hub!
Compact Alarm with Card Reader and RTC
With this alarm, you can tell whenever someone enters your house which can be turned on and off with a card reader and also automatically activated with the RTC!
What do you need?
- Arduino Uno Rev3 / Seeeduino V4.2 ($6.90)
- Bread board Clear – 8.2 x 5.3cm
- LED (Generic)
- Resistor 221 ohm
- Grove – Ultrasonic Distance Sensor
- Grove – Buzzer
- Grove – RTC
- RC522 Card Reader
Interested? You can check out the full tutorial by Simonee Adobs on Arduino Project Hub here!
OLED RTC Clock
With an RTC module, you can of course also make your OLED digital clock with the Arduino which shows the date, time and day!
What do you need
- Arduino Uno Rev3 / Seeeduino V4.2 ($6.90) or
Arduino Nano v3 / Seeeduino Nano
- Grove – RTC
- Grove – OLED Display 1.12” V2
- 2 x Grove – Button
- 32.768KHz crystal oscillator
- 2 x 10K ohm resistor
- 3V coin cell battery
Interested? You can find the full tutorial on Simple Projects!
Summary
That’s all on Arduino Tutorial: Using DS1307 RTC with Arduino! With the DS1307 RTC, you can now keep time and make awesome projects that involve data-loggers or clocks! As long as your project requires consistent timekeeping, using an RTC module would be the way to go.
If you have any questions regarding this tutorial or DS1307 Grove RTC, do leave them down in the comments section down below!
|
https://www.seeedstudio.com/blog/2019/11/19/arduino-tutorial-using-ds1307-rtc-with-arduino/
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
Set the POSIX flags in a spawn attributes object
#include <spawn.h>

int posix_spawnattr_setflags( posix_spawnattr_t *attrp,
                              short flags );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
To set the extended flags, use posix_spawnattr_setxflags(). The posix_spawnattr_setflags() function doesn't affect any extended flags that you previously set in the spawn attributes object.
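As a rough usage sketch (not from the reference page; the flag choice and error handling are illustrative):

#include <spawn.h>
#include <stdio.h>

int main(void)
{
    posix_spawnattr_t attr;
    short flags = 0;
    int rc;

    posix_spawnattr_init(&attr);

    /* Ask posix_spawn() to set the process group and restore default signal actions. */
    flags |= POSIX_SPAWN_SETPGROUP | POSIX_SPAWN_SETSIGDEF;

    rc = posix_spawnattr_setflags(&attr, flags);
    if (rc != 0)
        fprintf(stderr, "posix_spawnattr_setflags failed: %d\n", rc);

    /* ... pass &attr to posix_spawn() ... */

    posix_spawnattr_destroy(&attr);
    return 0;
}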
See posix_spawn().
|
https://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/p/posix_spawnattr_setflags.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Details
- Feature Request
- Status: Resolved (View Workflow)
- Major
- Resolution: Done
- None
-
- None
- None
Description
At present, Hibernate maps text fields incorrectly by doing this:
It creates a CLOB and assigns the created oid to the text columns.
This causes several problems, described in this
And this is the root cause of why we have these scripts
which should not be needed, as the text field should store the data.
The contributed type below should fix the problem, since Hibernate itself is taking a different path to fix the mapping (they will move the mapping to oid, as they are creating a CLOB, and the spec seems to require turning @Lob fields into clob rather than text):
public class TextContributorType extends StandardBasicTypeTemplate<String> {

    private static final long serialVersionUID = 1619875355308645967L;

    public TextContributorType() {
        super(LongVarcharTypeDescriptor.INSTANCE,
              StringTypeDescriptor.INSTANCE,
              StandardBasicTypes.MATERIALIZED_CLOB.getName());
    }
}
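A minimal sketch of how such a contributed type could be wired in (assuming Hibernate 5.x; the package name, contributor class, and the hibernate.metadata_builder_contributor wiring are illustrative assumptions, not part of this issue):

// Illustrative registration of the contributed type via a MetadataBuilderContributor.
// Enable it with: hibernate.metadata_builder_contributor=com.example.TextTypeContributor
import org.hibernate.boot.MetadataBuilder;
import org.hibernate.boot.spi.MetadataBuilderContributor;
import org.hibernate.type.StandardBasicTypes;

public class TextTypeContributor implements MetadataBuilderContributor {

    @Override
    public void contribute(MetadataBuilder metadataBuilder) {
        // Re-register the materialized CLOB type so @Lob String fields
        // are stored in PostgreSQL text columns rather than oid/CLOB.
        metadataBuilder.applyBasicType(new TextContributorType(),
                StandardBasicTypes.MATERIALIZED_CLOB.getName());
    }
}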
|
https://issues.redhat.com/browse/JBPM-9939?workflowName=GIT+Pull+Request+workflow+v1.0&stepId=4
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Translation(s): none
A2DP is the "Advanced Audio Distribution Profile", a standard for how Bluetooth devices can stream high-quality audio to remote devices. This is most commonly used for linking wireless headphones and speakers to your PC. The instructions in this page should apply to any A2DP-compatible device.
Contents
- Pre-configuration
- Pairing
- Troubleshooting
- Refused to switch profile to a2dp_sink: Not connected
- Unable to control volume with volumeicon-alsa
- a2dp-sink profile connect failed [...]: Protocol not available
- AptX, LDAC, and AAC codecs are not available with PulseAudio
- See also
Pre-configuration
In short: To connect to a given device, you need Bluetooth hardware on your PC (either built-in, or in the form of a USB dongle), the Bluez daemon, and a compatible audio server (either PulseAudio or PipeWire).
Firmware
If your hardware supports Bluetooth but Debian is unable to find any Bluetooth devices, you may have a dongle based on a Broadcom BCM203x chipset, requiring extra firmware to be installed.
Add a non-free component to your apt sources and install the bluez-firmware package.
PulseAudio
PulseAudio is the default audio server in Debian. Unless you know what you're doing, you probably want to follow these instructions.
Install the pulseaudio-module-bluetooth package if it's not already installed. You probably also want pavucontrol (or pavucontrol-qt on LXQt or Plasma desktops) to configure your device after connecting it.
Once you have installed the Bluetooth module, it may be necessary to restart the bluetooth and pulseaudio services:
# service bluetooth restart
$ killall pulseaudio
After connecting your device (see the "Pairing" section), your device will appear in Pavucontrol, where you can set it as your default audio output device, change individual applications to output using it, configure its profile, etc.
PipeWire
These instructions are mutually exclusive to the PulseAudio section, for users that are using the newer PipeWire audio server instead. This is also documented on the PipeWire wiki page in brief. Note that simply having the pipewire package installed does not mean this section is relevant to you, as it needs to have also been specially configured to replace PulseAudio.
In Debian, PipeWire supports more modern codecs than PulseAudio without the need to install any external modules. In particular, PipeWire 0.3.26 supports mSBC, SBC, SBC-XQ, LDAC, AptX, and AptX-HD. It also supports the HSP_HS, HSP_AG, HFP_HF, and HFP_AG headset roles. Support for more codecs is in-progress.
At minimum, you will need to install the libspa-0.2-bluetooth package, remove the pulseaudio-module-bluetooth package (if previously installed), and then either reboot your computer or restart the PipeWire services, otherwise device connections will fail with "Protocol not available".
Note that, if you're using the GNOME desktop, the gnome-core package has a hard dependency on pulseaudio-module-bluetooth. Attempting to remove it will also prompt to remove your desktop. If you run into issues when attempting to use Bluetooth with this package installed, you may still have to use PulseAudio in order to have functioning Bluetooth audio.
PipeWire will attempt to choose the best possible codec by default. You can override this, and tweak many other related settings, in the /etc/pipewire/media-session.d/bluez-monitor.conf file. You can edit this directly, or store local per-user changes by copying the file to ~/.config/pipewire/media-session.d/bluez-monitor.conf and editing that instead. You can check the currently-used codec with pactl list sinks
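A rough sketch of such an override is shown below (the property names and codec list follow the 0.3.x media-session configuration and may differ between PipeWire versions, so treat them as assumptions):

# ~/.config/pipewire/media-session.d/bluez-monitor.conf
properties = {
    # prefer higher-quality codecs where the device supports them
    bluez5.codecs = [ ldac aptx_hd aptx sbc_xq sbc ]
    # enable mSBC / SBC-XQ support
    bluez5.msbc-support = true
    bluez5.sbc-xq-support = true
}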
ALSA only
If you want to completely avoid using a higher-level audio server like PipeWire or PulseAudio, see BlueALSA. Currently only available in Debian Unstable.
Pairing
It is also highly recommended to install a graphical pairing tool. GNOME relies on gnome-bluetooth, after which you can find a "Bluetooth" section of your settings. KDE Plasma relies on bluedevil, which is a module for your system settings, a system tray applet, and a wizard for connecting to your devices. Other desktops can use the agnostic blueman tool.
More information, and instructions on using the CLI bluetoothctl tool, can be found on the main BluetoothUser page.
Troubleshooting
Refused to switch profile to a2dp_sink: Not connected
Your Bluetooth headset is connected, but the A2DP sink profile cannot be selected. This usually happens because GDM's own PulseAudio instance has already captured the Bluetooth device. Workaround 1 is to stop that instance from spawning (for example by setting autospawn = no in /var/lib/gdm3/.config/pulse/client.conf) and then make that file accessible to the Debian-gdm user:
chown Debian-gdm:Debian-gdm /var/lib/gdm3/.config/pulse/client.conf
You may also need to disable PulseAudio startup (however in Debian 10/Buster and newer, this has already been removed in the gdm3 postinst):
rm /var/lib/gdm3/.config/systemd/user/sockets.target.wants/pulseaudio.socket
In order to auto-connect A2DP for some devices, add this to /etc/pulse/default.pa:
load-module module-switch-on-connect
Reboot.
Now your audio device should be accessible through pavucontrol and your desktop's standard audio settings.
Workaround 2: Disable PulseAudio's Bluetooth in GDM
The actual solution package maintainers are looking into next is to simply disable the Bluetooth sink in the GDM PulseAudio daemon so that it doesn't take over the device. Add this to /var/lib/gdm3/.config/pulse/default.pa:
#!/usr/bin/pulseaudio -nF
#
# load system wide configuration
.include /etc/pulse/default.pa

### unload driver modules for Bluetooth hardware
.ifexists module-bluetooth-policy.so
unload-module module-bluetooth-policy
.endif

.ifexists module-bluetooth-discover.so
unload-module module-bluetooth-discover
.endif
This was first discovered in the Arch wiki.
Solution
The actual solution is for PulseAudio to release the Bluetooth device when it is not in use. This is discussed in PulseAudio bug #845938, which has a few related upstream bugs pending as well.
Unable to control volume with volumeicon-alsa
The volumeicon tray icon may not automatically recognize a Bluetooth A2DP device when a connection is established. See issue #73, "volumeicon does not work to adjust bluetooth volume" and issue #49, "change of the default device not automatically detected" for discussion and possible workarounds / fix. You might also try simply restarting Volumeicon, or adjusting your PulseAudio configuration to switch on connect.
a2dp-sink profile connect failed [...]: Protocol not available
This error can appear when using PipeWire as your audio server and attempting to pair a device via Bluetooth, without first uninstalling the pulseaudio-module-bluetooth package.
If you're using PulseAudio, PulseAudio may not be properly connecting to the device. It might be because it was already playing. Stopping anything playing on PulseAudio, restarting PulseAudio, and reconnecting to the device may fix the problem.
In addition, you need the following settings in /etc/pulse/default.pa or /etc/pulse/default.pa.d/bluez5.pa:
load-module module-bluez5-device
load-module module-bluez5-discover
Then restart pulseaudio.
AptX, LDAC, and AAC codecs are not available with PulseAudio
While newer audio codecs such as AptX and LDAC are available in PipeWire, they're still unavailable for PulseAudio users in Debian. AAC is unavailable outright because the library is non-free. However, PulseAudio has recently gained support for all of these codecs via GStreamer. Unfortunately, GStreamer is only supporting these codecs from v1.20 onwards. This means that support for modern codecs with PulseAudio is not available in Debian 10 or Debian 11. It is expected to land in Debian 12.
A third-party project adds support for these additional codecs as well. It is deprecated and the creator recommends users either avoid it entirely, or switch to PipeWire. Nonetheless, it's still a fully functional option in Debian 10:
Additionally, a third-party script for Debian 10 is available which will automatically configure and install the additional codecs via the deprecated pulseaudio-modules-bt project:
If the PulseAudio sink adjusts automatically to SBC-sink (not A2DP-sink with aptX or LDAC), just reconnect your device.
See also
BluetoothUser - Main page for Bluetooth in Debian
BlueDevil - Bluetooth with KDE Plasma
BlueALSA - Bluetooth over ALSA alone
CategorySound CategoryWireless
|
https://wiki.debian.org/BluetoothUser/a2dp
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
The reduce(fun, seq) function applies a given function to all of the elements of the sequence passed to it. It is defined in the "functools" module. To reduce the list to a single value, reduce() applies the function fn, which takes two arguments, cumulatively to the list items from left to right. Unlike map() and filter(), reduce() is not a Python built-in function; it belongs to the functools package. To use reduce(), add the following statement to the head of the program to import it from the functools module:
reduce() in Python
from functools import reduce

reduce(myfunction, iterable, initializer)
The function argument is applied cumulatively to the items in the sequence from left to right. On each call after the first, the first parameter is the result of the previous call and the second parameter is the next item in the list. This process continues until all of the items in the list have been consumed.
Working conditions:
- The first two elements of the sequence are chosen in the first step, and the result is achieved.
- The result is then saved after applying the same function to the previously obtained result and the number preceding the second element.
- This method is repeated until there are no more elements in the container.
- The final result is returned to the console and printed.
# python code to demonstrate the working of reduce()

# importing functools for reduce()
import functools

# initializing list
new_list = [11, 13, 15, 16, 12]

# reduce is applicable in computing a list's sum
print("The list's element sum is : ", end="")
print(functools.reduce(lambda x, y: x + y, new_list))

# using reduce to compute the list's maximum element
print("The list's maximum element is : ", end="")
print(functools.reduce(lambda x, y: x if x > y else y, new_list))
Making use of Operator Functions
Reduce() can also be used in conjunction with operator functions to achieve comparable functionality to lambda functions while improving readability.
# python code to demonstrate the working of reduce()
# using operator functions

# importing functools for reduce()
import functools
# importing operator for operator functions
import operator

# initializing list
new_list = [11, 13, 15, 16, 12]

# reduce() is useful in computing the list's sum using operator functions
print("The list's sum of elements is : ", end="")
print(functools.reduce(operator.add, new_list))

# product computation using reduce through operator functions
print("The list's elements product : ", end="")
print(functools.reduce(operator.mul, new_list))

# string concatenation using reduce
print("The concatenated product is : ", end="")
print(functools.reduce(operator.add, ["codeunderscored", "for", "codeunderscored"]))
accumulate() vs reduce()
The summation of a sequence’s elements can be calculated using both reduce() and accumulate(). However, the practical components of both of these are different.
The reduce() and accumulate() functions are defined in the “functools” and “itertools” modules, respectively.
reduce() keeps track of the intermediate result and only returns the summing value at the end. accumulate(), on the other hand, returns an iterator containing the intermediate results. The summation value of the list is the last integer returned by the iterator.
reduce(fun,seq) takes a function as the first parameter and sequence as the second. On the other hand, accumulate(seq, fun) accepts sequence as the first parameter and function as the second.
# python code to demonstrate summation
# using reduce() and accumulate()

# importing itertools for accumulate()
import itertools
# importing functools for reduce()
import functools

# initializing list
new_list = [11, 13, 14, 20, 14]

# accumulate() can help in printing the running summation
print("The list's summation by using accumulate is :", end="")
print(list(itertools.accumulate(new_list, lambda x, y: x + y)))

# summation printing by use of reduce()
print("List summation using reduce is as follows :", end="")
print(functools.reduce(lambda x, y: x + y, new_list))
reduce() as a three-parameter function
In Python 3, the reduce() function works with either two or three parameters. If the third parameter (the initializer) is present, reduce() places it before the items of the sequence; as a result, if the sequence is empty, the initializer is returned as the default value.
Python program to illustrate the two number summation
def twoNumberSummation(function, iterable, initial_val=None):
    it = iter(iterable)
    if initial_val is None:
        value = next(it)
    else:
        value = initial_val
    for e in it:
        value = function(value, e)
    return value

# Note that initial_val, when not None, is used as the first value
# instead of the first item from the iterable.
tuple_contents = (7, 6, 5, 7, 7, 5, 5, 7)
print(twoNumberSummation(lambda x, y: x + y, tuple_contents, 6))
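For comparison, here is a minimal sketch that does the same thing with the built-in functools.reduce(), passing the initializer as the third argument:

from functools import reduce

tuple_contents = (7, 6, 5, 7, 7, 5, 5, 7)

# 6 is the initializer: it is placed before the items of the sequence,
# so the result is 6 + 7 + 6 + 5 + ... = 55
total = reduce(lambda x, y: x + y, tuple_contents, 6)
print(total)  # 55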
Reducing a List
You may want to reduce a list to a single value on occasion. Consider the following scenario: you have a list of numbers:
student_scores = [75, 65, 80, 95, 50]
A for loop can also come in handy in calculating the sum of all elements in the scores list:
student_scores = [75, 65, 80, 95, 50]

total = 0
for score in student_scores:
    total += score

print(total)
In this case, we’ve condensed the entire list to a single value, which is the sum of all of the list’s members. The example demonstrates how to use the reduce() function to calculate the sum of the scores list’s elements.
from functools import reduce

def sumVals(x, y):
    print(f"x={x}, y={y}, {x} + {y} = {x + y}")
    return x + y

result_vals = [75, 65, 80, 95, 50]
final_total = reduce(sumVals, result_vals)
print(final_total)
The reduce() function, as you can see from the output, gradually adds two components of the list from left to right and reduces the entire list to a single value. Instead of creating the sumVals() function, you can use a lambda expression to make the code more concise:
from functools import reduce

scores = [75, 65, 80, 95, 50]
total = reduce(lambda a, b: a + b, scores)
print(total)
reduce() example
The mult() function returns the product of two numbers in the example below. This function is used with a range of numbers between 1 and 4, which are 1,2, and 3, in the reduce() function. The result is a 3 factorial value.
import functools

def mult(x, y):
    print("x=", x, " y=", y)
    return x * y

fact = functools.reduce(mult, range(1, 4))
print('Factorial of 3: ', fact)
Conclusion
The functools module contains the reduce() method. Like the map and filter methods, the reduce() function takes two arguments: a function and an iterable. However, it does not return another iterable, but rather a single value. In other words, reduce() condenses an iterable down to a single result. The reduce() function has the following syntax: reduce(fn, iterable)
|
https://www.codeunderscored.com/reduce-in-python-with-examples/
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Bluetooth Low Energy Overview#
The Qt Bluetooth Low Energy API enables communication between Bluetooth Low Energy devices.
To start a device discovery, we connect to the deviceDiscovered() signal and start the search with start():
m_deviceDiscoveryAgent = QBluetoothDeviceDiscoveryAgent(self)
m_deviceDiscoveryAgent.setLowEnergyDiscoveryTimeout(5000)

connect(m_deviceDiscoveryAgent, QBluetoothDeviceDiscoveryAgent.deviceDiscovered,
        self, DeviceFinder.addDevice)
connect(m_deviceDiscoveryAgent, QBluetoothDeviceDiscoveryAgent.errorOccurred,
        self, DeviceFinder.scanError)
connect(m_deviceDiscoveryAgent, QBluetoothDeviceDiscoveryAgent.finished,
        self, DeviceFinder.scanFinished)
connect(m_deviceDiscoveryAgent, QBluetoothDeviceDiscoveryAgent.canceled,
        self, DeviceFinder.scanFinished)

m_deviceDiscoveryAgent.start(QBluetoothDeviceDiscoveryAgent.LowEnergyMethod)
Since we are only interested in Low Energy devices, we filter the device type within the receiving slot. The device type can be ascertained using the coreConfigurations() flag:
def addDevice(self, device):
    # If the device is a Low Energy device, add it to the list
    if device.coreConfigurations() & QBluetoothDeviceInfo.LowEnergyCoreConfiguration:
        m_devices.append(QBluetoothDeviceInfo(device))

# Once a device has been selected, a QLowEnergyController is used to connect to it.
# (The createCentral() line below is reconstructed; the original snippet was garbled.)
m_control = QLowEnergyController.createCentral(m_currentDevice.getDevice(), self)

connect(m_control, QLowEnergyController.serviceDiscovered,
        self, DeviceHandler.serviceDiscovered)
connect(m_control, QLowEnergyController.discoveryFinished,
        self, DeviceHandler.serviceScanDone)
connect(m_control, QLowEnergyController.errorOccurred,
        self, lambda error: setError("Cannot connect to remote device."))
connect(m_control, QLowEnergyController.connected,
        self, lambda: (setInfo("Controller connected. Search services..."),
                       m_control.discoverServices()))
connect(m_control, QLowEnergyController.disconnected,
        self, lambda: setError("LowEnergy controller disconnected"))

# Connect
m_control.connectToDevice()
Service Search#
The above code snippet shows how the application initiates the service discovery once the connection has been established.
The serviceDiscovered() slot below is triggered as a result of the serviceDiscovered() signal and provides an intermittent progress report. Since we are talking about the heart listener app, which monitors HeartRate devices in the vicinity, we ignore any service that is not of type HeartRate.
def serviceDiscovered(self, gatt):
    if gatt == QBluetoothUuid(QBluetoothUuid.ServiceClassUuid.HeartRate):
        setInfo("Heart Rate service discovered. Waiting for service scan to be done...")
        m_foundHeartRateService = True
Eventually, the serviceScanDone() slot is triggered once the service search has finished. If the HeartRate service was found, a service object is created for it and its details are discovered via discoverDetails():
# If heartRateService found, create new service
if m_foundHeartRateService:
    m_service = m_control.createServiceObject(
        QBluetoothUuid(QBluetoothUuid.ServiceClassUuid.HeartRate), self)

if m_service:
    connect(m_service, QLowEnergyService.stateChanged,
            self, DeviceHandler.serviceStateChanged)
    connect(m_service, QLowEnergyService.characteristicChanged,
            self, DeviceHandler.updateHeartRateValue)
    connect(m_service, QLowEnergyService.descriptorWritten,
            self, DeviceHandler.confirmedDescriptorWrite)
    m_service.discoverDetails()
else:
    setError("Heart Rate Service not found.")
During the detail search the service's state() transitions from RemoteService to RemoteServiceDiscovering and eventually ends with RemoteServiceDiscovered:
def serviceStateChanged(self, s):
    if s == QLowEnergyService.RemoteServiceDiscovering:
        setInfo(tr("Discovering services..."))
    elif s == QLowEnergyService.RemoteServiceDiscovered:
        setInfo(tr("Service discovered."))
        hrChar = m_service.characteristic(
            QBluetoothUuid(QBluetoothUuid.CharacteristicType.HeartRateMeasurement))
        if not hrChar.isValid():
            setError("HR Data not found.")
        else:
            m_notificationDesc = hrChar.descriptor(
                QBluetoothUuid.DescriptorType.ClientCharacteristicConfiguration)
            if m_notificationDesc.isValid():
                m_service.writeDescriptor(m_notificationDesc, QByteArray.fromHex("0100"))
    # nothing to do for the other states for now

    aliveChanged.emit()
The characteristic's properties() must have the Notify flag set, and a descriptor of type ClientCharacteristicConfiguration must exist to confirm the availability of an appropriate notification.
Finally, we process the value of the HeartRate characteristic, as per Bluetooth Low Energy standard:
def updateHeartRateValue(self, c, value):
    # ignore any other characteristic change -> shouldn't really happen though
    if c.uuid() != QBluetoothUuid(QBluetoothUuid.CharacteristicType.HeartRateMeasurement):
        return

    data = value.data()
    flags = data[0]

    # Heart Rate value (the byte handling below is reconstructed from the garbled original)
    hrvalue = 0
    if flags & 0x1:  # HR 16 bit? otherwise 8 bit
        hrvalue = int.from_bytes(data[1:3], "little")
    else:
        hrvalue = data[1]
advertisingData = QLowEnergyAdvertising.ServiceClassUuid.HeartRate) leController = QScopedPointer(QLowEnergyController.createPeripheral()) service = QScopedPointer.)
In general characteristic and descriptor value updates on the peripheral device use the same methods as connecting Bluetooth Low Energy devices.
Note
To use QtBluetooth (in both central and peripheral roles) on iOS, you have to provide an Info.plist file containing the usage description. According to the CoreBluetooth’s documentation:
“Important
Your app will crash if its Info.plist doesn’t include usage description keys for the types of data it needs to access. To access Core Bluetooth APIs on apps linked on or after iOS 13, include the NSBluetoothAlwaysUsageDescription key. In iOS 12 and earlier, include NSBluetoothPeripheralUsageDescription to access Bluetooth peripheral data.”
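For example, a minimal sketch of the relevant Info.plist entries (the usage strings are illustrative):

<key>NSBluetoothAlwaysUsageDescription</key>
<string>This application uses Bluetooth to communicate with heart rate sensors.</string>
<!-- iOS 12 and earlier -->
<key>NSBluetoothPeripheralUsageDescription</key>
<string>This application uses Bluetooth to communicate with heart rate sensors.</string>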
|
https://doc-snapshots.qt.io/qtforpython-dev/overviews/qtbluetooth-le-overview.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
How do you get Jetty to do work without a request?
I'm studying Java, Jetty, and servlets, and I want to study Spring later. The question is this: how do you get the server to run my code continuously, or at some interval, without a client's request?
For example, I have sensors whose readings need to be gathered in advance so that later, at a client's request, they can be returned quickly. There are also actuator devices that need to operate without human involvement (open a tap, switch on the ventilation).
The very notion of a server is that it just waits to be called and cannot act any other way. I understand that you could have a scheduler send a request to the server every five seconds, but that's a crutch; I'd like to hear the proper, textbook solution, the way it's done in practice (e.g., in enterprises).
I don't want to move the core of the system out of Jetty to some other place, or duplicate the logic in some external tool.
I still don't know what an application server is; maybe I need one. That is, maybe an app server differs from an ordinary web server and servlet container in that it can run code itself on some internal schedule or trigger?
A quick look at a Spring book showed that there are tasks and a scheduler there; maybe that's what I need?
I'm planning on using a DBMS and an Android client, and maybe there's no point in keeping the logic of my program in servlets at all; perhaps it should be moved into a separate application or into the DBMS, leaving Jetty just for the web pages? What is the right way to design the architecture?
I'll note that this all runs on an ancient Raspberry Pi, and I'm thinking about using it both as a controller and as a server. It saves electricity, and it's easy to recover from a failure by replacing the faulty device and restoring the system from a backup copy. Besides, it's a small hobby setup.
Please don't suggest replacing the Java stack; that choice has already been made (after a long look at the popularity of programming languages, etc.). The plans are big, but I'm a newcomer, so please keep the explanation simple unless there's really no other way.
You need to look at ServletContextListener and asynchronous execution. In a ServletContextListener, override contextInitialized and launch the required asynchronous task:
public class BackgroundJobManager implements ServletContextListener {

    private ScheduledExecutorService scheduler;

    @Override
    public void contextInitialized(ServletContextEvent event) {
        scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new SomeDailyJob(), 0, 1, TimeUnit.DAYS); // or something like that...
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        // kill the task when the server stops:
        scheduler.shutdownNow();
    }
}
Of course, the ServletContextListener needs to be registered in web.xml.
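A minimal sketch of that registration (the package name is illustrative):

<listener>
    <listener-class>com.example.BackgroundJobManager</listener-class>
</listener>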
A more complete example can be found here:
|
https://software-testing.com/topic/888188/how-do-you-get-jetty-not-to-work-on-request
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Level of Difficulty: Intermediate to Senior.
So you’ve reached created a database, you’ve created schemas and tables. You’ve managed to create an automation Python script that communicates with the database, however, you’ve reached a crossroad where you need to validate your SQL tables. Some of the validation might include ensuring that the tables exist and were created correctly before referencing them in the script. In the case of a failure, a message should be sent to a Microsoft Teams channel.
If that’s the case, this post might have just the code to help!
What are the steps?
The steps that the script will be following are:
- Create a webhook to your Teams channel
- If a schema was provided
- Get all the tables in the schema
- Validate that all the tables have a primary key
- If a schema was not provided but tables were
- Validate that each table exists
- Validate that each table has a primary key
Create a Webhook to a Teams Channel
To create a webhook to your teams channel, click on the three dots next to the channel name and select connectors:
Search for “webhook” and select “Configure” on the Incoming Webhook connector:
Provide your webhook with a name and select “Create”:
Be sure to copy your webhook and then hit “Done”:
Create The Python Script That Does The Validation
Import Libraries
import pandas as pd
import pyodbc
from sqlalchemy import create_engine, event
import urllib
import pymsteams
Create a Connect to SQL Method
The purpose of this method is to allow the reusable method to be accessed each time SQL needs to be accessed.
def connect_to_sql(server, user, password, database, auth):
    driver = '{SQL Server}'
    if user == None:
        params = urllib.parse.quote_plus(r'DRIVER={};SERVER={};DATABASE={};'.format(driver, server, database))
        conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
        engine = create_engine(conn_str)
        return engine
    else:
        params = urllib.parse.quote_plus(r'DRIVER={};SERVER={};DATABASE={};UID={};PWD={}'.format(driver, server, database, user, password))
        conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
        engine = create_engine(conn_str)
        return engine
Create Send Teams Message Method
The method below enables the sending of a teams message to a channel. In order to configure this to work, add the webhook configured above to the method below.
def send_teams_message(text):
    webhook = '<add webhook>'
    # You must create the connectorcard object with the Microsoft Webhook URL
    myTeamsMessage = pymsteams.connectorcard(webhook)
    # Add text to the message.
    myTeamsMessage.text(text)
    # send the message.
    myTeamsMessage.send()
Create Validate Primary Keys Method
This method ensures that primary keys are validated and returns the status as well as the text to be sent to MS Teams.
def ValidatePrimaryKeys(engine):
    # NOTE: the original query was garbled in the source page; the query below is a
    # reconstruction that lists user tables which have no primary key defined.
    query = ("select tab.[name] as table_name "
             "from sys.tables tab "
             "left outer join sys.indexes pk "
             "on tab.object_id = pk.object_id and pk.is_primary_key = 1 "
             "where pk.object_id is null "
             "order by tab.[name]")
    df = pd.read_sql(query, engine)
    keysValid = True
    text = ""
    if len(df) > 0:
        text = 'Primary Key Validation Failed.\n\n'
        for index, element in df.iterrows():
            text += '{} does not have a primary key\n\n'.format(element['table_name'])
        keysValid = False
    # send_teams_message(text)
    return keysValid, text
Create Validate Tables Method
This method validates that the tables exist within the current database and returns the status as well as the text to be sent to MS Teams.
def ValidateTables(engine, tables):
    query = "select tab.[name] as table_name from sys.tables tab"
    df = pd.read_sql(query, engine)
    tables_split = []
    tables = tables.replace(' ', '')
    if ';' in tables:
        tables_split = tables.split(';')
    elif ',' in tables:
        tables_split = tables.split(',')
    elif len(tables) != 0:
        tables_split = [tables]
    text = ""
    tablesValid = True
    for table in tables_split:
        if table not in df['table_name'].tolist() and (table != '' and table != None):
            text += 'Table not found in database: {}\n\n'.format(table)
            tablesValid = False
    if tablesValid:
        text = 'Table Validation Passed\n\n'
    else:
        text = 'Table Validation Failed\n\n' + text
    # send_teams_message(text)
    return tablesValid, text
Create Validate Schema Method
This method validates that the schema exists. Once the schema is validated, all tables in the schema are retrieved and their primary keys are validated.
def ValidateFromSchema(schemas, engine):
    text = ""
    tableText = ""
    schemas_split = []
    schemas = schemas.replace(' ', '')
    if ';' in schemas:
        schemas_split = schemas.split(';')
    elif ',' in schemas:
        schemas_split = schemas.split(',')
    elif len(schemas) != 0:
        schemas_split = [schemas]
    isValid = True
    for schema in schemas_split:
        if (isValid):
            query = "SELECT schema_name FROM information_schema.schemata WHERE schema_name = '{}'".format(schema)
            df = pd.read_sql(query, engine)
            if (len(df) > 0):
                query = "select t.name as table_name from sys.tables t where schema_name(t.schema_id) = '{}'".format(schema)
                df = pd.read_sql(query, engine)
                tables = ",".join(list(df["table_name"]))
                validateTables = ValidateTables(engine, tables)
                isValid = validateTables[0]
                tableText += "{}\n\n".format(validateTables[1])
            else:
                isValid = False
                text += "Schema Validation Failed\n\n"
                text += "Schema not found in database: {}\n\n".format(schema)
    if (isValid):
        text = "Schema Validation Passed\n\n"
        text = "{}\n\n{}".format(text, tableText)
    return isValid, text
Create Validate SQL Method (Equivalent to “main”)
This method acts as the main method and encapsulates all the methods in the correct order and executes the proceeding tasks.
def ValidateSQL(project, server, database, schemas, tables, auth, username, password):
    engine = connect_to_sql(server, username, password, database, auth)
    summaryText = None
    if (schemas != None):
        validateSchemas = ValidateFromSchema(schemas, engine)
        isValid = validateSchemas[0]
        text = validateSchemas[1]
    else:
        validateTables = ValidateTables(engine, tables)
        isValid = validateTables[0]
        text = validateTables[1]
    if isValid:
        summaryText = 'Primary Key Validation Passed\n\n'
        validatePrimaryKeys = ValidatePrimaryKeys(engine)
        isKeysValid = validatePrimaryKeys[0]
        pkText = validatePrimaryKeys[1]
        if isKeysValid:
            text += summaryText
        else:
            text += pkText
    else:
        summaryText = text
    text = "<strong><u>{}<u><strong>:\n\n{}".format(project, text)
    send_teams_message(text)
    return isValid
Calling the Validate SQL Method
The below is how you’d initialise the variables and use them to call the ValidateSQL method.
server = '<server>'
user = '<user>'
password = '<password>'
database = '<database>'
auth = 'SQL'  # or Windows
schemas = '<schemas comma separated>'
tables = '<tables comma separated>'
project = '<project>'

ValidateSQL(project, server, database, schemas, tables, auth, user, password)
And that’s a wrap Pandalorians! The Github repo containing the script is available here. Did this help? Did you get stuck anywhere? Do you have any comments or feedback? Please pop it down below or reach out – jacqui.jm77@gmail.com
|
https://thejpanda.com/2020/09/02/python-using-python-for-sql-schema-and-table-validation-and-logging-progress-to-microsoft-teams/
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
PKCS12_create —
create a PKCS#12 structure
#include <openssl/pkcs12.h>

PKCS12 *
PKCS12_create(const char *pass, const char *name, EVP_PKEY *pkey, X509 *cert,
    STACK_OF(X509) *ca, int nid_key, int nid_cert, int iter, int mac_iter,
    int keytype);

The keytype argument, if set to KEY_EX, indicates that the key can be used for signing and encryption. This option was useful for old export grade software which could use signing only keys of arbitrary size but had restrictions on the permissible sizes of keys which could be used for encryption.
If a certificate contains an alias or keyid then this will be used for the corresponding friendlyName or localKeyID in the PKCS12 structure.
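A rough usage sketch (not part of the manual; the friendly name, file handling, and use of default parameters are illustrative):

#include <openssl/pkcs12.h>
#include <stdio.h>

/* pkey and cert are assumed to have been loaded elsewhere,
 * e.g. with PEM_read_PrivateKey() and PEM_read_X509(). */
int
write_p12(const char *pass, EVP_PKEY *pkey, X509 *cert, const char *path)
{
	PKCS12	*p12;
	FILE	*fp;
	int	 ok = 0;

	/* Zero values select the default NIDs, iteration counts and key type. */
	p12 = PKCS12_create(pass, "friendly name", pkey, cert, NULL,
	    0, 0, 0, 0, 0);
	if (p12 == NULL)
		return 0;

	if ((fp = fopen(path, "wb")) != NULL) {
		ok = i2d_PKCS12_fp(fp, p12);
		fclose(fp);
	}
	PKCS12_free(p12);
	return ok;
}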
PKCS12_create() returns a valid PKCS12 structure or NULL if an error occurred.
crypto(3), d2i_PKCS12(3), PKCS12_new(3), PKCS12_newpass(3), PKCS12_parse(3), PKCS12_SAFEBAG_new(3)
|
https://man.openbsd.org/OpenBSD-6.7/PKCS12_create.3
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
How to Send Email with Nodemailer
March 16th, 2021
What You Will Learn in This Tutorial
Learn how to configure an SMTP server and send email from your app using Nodemailer. Also learn how to use EJS to create dynamic HTML templates for sending email.
Table of Contents
To get started, we need to install the nodemailer package via NPM:
npm install nodemailer
This will add Nodemailer to your app. If you're using a recent version of NPM, this should also add
nodemailer as a dependency in your app's
package.json file.
Choosing an SMTP Provider
Before we move forward, we need to make sure we have access to an SMTP provider. An SMTP provider is a service that provides access to the SMTP server we need to physically send our emails. While you can create an SMTP server on your own, it's usually more trouble than it's worth due to regulatory compliance and technical overhead.
SMTP stands for Simple Mail Transfer Protocol. It's an internet standard communication protcol that describes the protocol used for sending email over the internet.
When it comes to using SMTP in your app, the standard is to use a third-party SMTP service to handle the compliance and technical parts for you so you can just focus on your app. There are a lot of different SMTP providers out there, each with their own advantages, disadvantages, and costs.
Our recommendation? Postmark. It's a paid service, however, it has a great user interface and excellent documentation that save you a lot of time and trouble. If you're trying to avoid paying, an alternative and comparable service is Mailgun.
Before you continue, set up an account with Postmark and then follow this quick tutorial to access your SMTP credentials (we'll need these next).
Alternatively, set up an account with Mailgun and then follow this tutorial to access your SMTP credentials.
Once you have your SMTP provider and credentials ready, let's keep moving.
Configuring Your SMTP Server
Before we start sending email, the first step is to configure an SMTP transport. A transport is the term Nodemailer uses to describe the method it will use to actually send your email.
import nodemailer from 'nodemailer';

const smtp = nodemailer.createTransport({
  host: '',
  port: 587,
  secure: process.env.NODE_ENV !== "development",
  auth: {
    user: '',
    pass: '',
  },
});
First, we import
nodemailer from the
nodemailer package we installed above. Next, we define a variable
const smtp and assign it to a call to
nodemailer.createTransport(). This is the important part.
Here, we're passing an options object that tells Nodemailer what SMTP service we want to use to send our email.
Wait, aren't we sending email using our app?
Technically, yes. But sending email on the internet requires a functioning SMTP server. With Nodemailer, we're not creating a server, but instead an SMTP client. The difference is that a server acts as the actual sender (in the technical sense), while the client connects to the server to use it as a relay to perform the actual send.
In our app, then, calling
nodemailer.createTransport() establishes the client connection to our SMTP provider.
Using the credentials you obtained from your SMTP provider earlier, let's update this options object. While they may not be exact, your SMTP provider should use similar terminology to describe each of the settings we need to pass:
{
  host: 'smtp.postmarkapp.com',
  port: 587,
  secure: process.env.NODE_ENV !== "development",
  auth: {
    user: 'postmark-api-key-123',
    pass: 'postmark-api-key-123',
  },
}
Here, we want to replace
host,
port, and the
user and
pass under the nested
auth object.
host should look something like
smtp.postmarkapp.com.
port should be set to 587 (the secure port for sending email with SMTP).
Note: If you're using Postmark, you will use your API key for both the username and password here.
Double-check and make sure you have the correct settings and then we're ready to move on to sending.
Sending Email
Sending email with Nodemailer is straightforward: all we need to do is call the
sendMail method on the value returned from
nodemailer.createTransport() that we stored in the
smtp variable above, like this:
smtp.sendMail({ ... })
Next, we need to pass the appropriate message configuration for sending our email. The message configuration object is passed to
smtp.sendMail() and contains settings like
to,
from,
subject, and
html.
As a quick example, let's pass the bare minimum settings we'll need to fire off an email:
[...]

smtp.sendMail({
  to: 'somebody@gmail.com',
  from: 'support@myapp.com',
  subject: 'Testing Email Sends',
  html: '<p>Sending some HTML to test.</p>',
});
Pretty clear. Here we pass in a
to,
from,
subject, and
html setting to specify who our email is going to, where it's coming from, a subject to help the recipient identify the email, and some HTML to send in the body of the email.
That's it! Well, that's the basic version. If you take a look at the message configuration documentation for Nodemailer, you'll see that there are several options that you can pass.
To make sure this is all clear, let's look at our full example code so far:
import nodemailer from 'nodemailer';

const smtp = nodemailer.createTransport({
  host: 'smtp.someprovider.com',
  port: 587,
  secure: process.env.NODE_ENV !== "development",
  auth: {
    user: 'smtp-username',
    pass: 'smtp-password',
  },
});

smtp.sendMail({
  to: 'somebody@gmail.com',
  from: 'support@myapp.com',
  subject: 'Testing Email Sends',
  html: '<p>Sending some HTML to test.</p>',
});
Now, while this technically will work, if we copy and paste it verbatim into a plain file, when we run the code, we'll send our email immediately. That's likely a big oops.
Let's modify this code slightly:
import nodemailer from 'nodemailer';

const smtp = nodemailer.createTransport({
  host: 'smtp.someprovider.com',
  port: 587,
  secure: process.env.NODE_ENV !== "development",
  auth: {
    user: 'smtp-username',
    pass: 'smtp-password',
  },
});

export default (options = {}) => {
  return smtp.sendMail(options);
};
Wait! Where did our example options go?
It's very unlikely that we'll want to send an email as soon as our app starts up. To make it so that we can send an email manually, here, we wrap our call to
smtp.sendMail() with another function that takes an
options object as an argument.
Can you guess what that options object contains? That's right, our missing options.
The difference between this code and the above is that we can import this file elsewhere in our app, calling the exported function at the point where we want to send our email.
For example, let's assume the code above lives at the path
/lib/email/send.js in our application:
import sendEmail from '/lib/email/send.js';
import generateId from '/lib/generateId.js';

export default {
  // marked async so the await calls below are valid
  createCustomer: async (parent, args, context) => {
    const customerId = generateId();

    await Customers.insertOne({ _id: customerId, ...args.customer });

    await sendEmail({
      to: 'admin@myapp.com',
      from: 'support@myapp.com',
      subject: 'You have a new customer!',
      text: 'Hooray! A new customer has signed up for the app.',
    });

    return true;
  },
};
This should look familiar. Again, we're using the same exact message configuration object from Nodemailer here. The only difference is that now, Nodemailer won't send our email until we call the
sendEmail() function.
Awesome. So, now that we know how to actually send email, let's take this a step further and make it more usable in our application.
Creating Dynamic Templates with EJS
If you're a Pro Subscriber and have access to the repo for this tutorial, you'll notice that this functionality is built-in to the boilerplate that the repo is based on, the CheatCode Node.js Boilerplate.
The difference between that code and the examples we've looked at so far is that it includes a special feature: the ability to define custom HTML templates and have them compile automatically with dynamic data passed when we call to
sendEmail.
Note: The paths above the code blocks below map to paths available in the repo for this tutorial. If you're a CheatCode Pro subscriber and have connected your Github account on the account page, click the "View on Github" button at the top of this tutorial to view the source.
Let's take a look at the entire setup and walk through it.
/lib/email/send.js
import nodemailer from "nodemailer"; import fs from "fs"; import ejs from "ejs"; import { htmlToText } from "html-to-text"; import juice from "juice"; import settings from "../settings"; const smtp = nodemailer.createTransport({ host: settings?.smtp?.host, port: settings?.smtp?.port, secure: process.env.NODE_ENV !== "development", auth: { user: settings?.smtp?.username, pass: settings?.smtp?.password, }, }); export default ({ template: templateName, templateVars, ...restOfOptions }) => { const templatePath = `lib/email/templates/${templateName}.html`; const options = { ...restOfOptions, }; if (templateName && fs.existsSync(templatePath)) { const template = fs.readFileSync(templatePath, "utf-8"); const html = ejs.render(template, templateVars); const text = htmlToText(html); const htmlWithStylesInlined = juice(html); options.html = htmlWithStylesInlined; options.text = text; } return smtp.sendMail(options); };
There's a lot of extras here, so let's focus on the familiar stuff first.
Starting with the call to
nodemailer.createTransport(), notice that we're calling the exact same code above. The only difference is that here, instead of passing our settings directly, we're relying on the built-in settings convention in the CheatCode Node.js Boilerplate.
Code Tip
Notice that weird syntax we're using in our call to
createTransport(), like
settings?.smtp?.host? Though it may look like a mistake, this is known as optional chaining and is a recent addition to the ECMAScript language specification (the technical name for the specification that JavaScript is based on).
Optional chaining helps us to avoid runtime errors when accessing nested properties on objects. Instead of writing
settings && settings.smtp && settings.smtp.host, we can simiplify our code by writing
settings?.smtp?.host to achieve the same result.
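As a quick illustration (the settings object below is made up purely for this example):

const settings = { smtp: { host: "smtp.example.com" } };

// Optional chaining short-circuits to undefined instead of throwing
// a TypeError when an intermediate property is missing.
console.log(settings?.smtp?.host); // "smtp.example.com"
console.log(settings?.imap?.host); // undefined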
Next, we want to look at the very bottom of the file. That call to
smtp.sendMail(options) should look familiar. In fact, this is the exact same pattern we saw above when we wrapped our call in the function that took the options object.
Adding the Templating Functionality
Now for the tricky part. You'll notice that we've added quite a few imports to the top of our file. In addition to
nodemailer, we've added:
fs - No install required. This is the File System package that's built-in to the Node.js core. It gives us access to the file system for things like reading and writing files.
ejs - The library we'll use for replacing dynamic content inside of our HTML email template.
html-to-text - A library that we'll use to automatically convert our compiled HTML into text to improve the accessibility of our emails for users.
juice - A library used for automatically inlining any <style></style> tags in our HTML email template.
If you're not using the CheatCode Node.js Boilerplate, go ahead and install those last three dependencies now:
npm install ejs html-to-text juice
Now, let's look a bit closer at the function being exported at the bottom of this example. This function is technically identical to the wrapper function we looked at earlier, with one big difference: we now anticipate a possible
template and
templateVars value being passed in addition to the message configuration we've seen so far.
Instead of just taking in the
options object blindly, though, we're using JavaScript object destructuring to "pluck off" the properties we want from the options object—kind of like grapes. Once we have the
template and
templateVars properties (grapes), we collect the rest of the options in a new variable called
restOfOptions using the
...JavaScript spread operator.
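If it helps, here's a small standalone sketch of that pattern (the option values are invented for illustration only):

const options = {
  template: "reset-password",
  templateVars: { emailAddress: "user@example.com" },
  to: "user@example.com",
  subject: "Reset Your Password",
};

// Pluck off template (renamed to templateName) and templateVars, then
// gather everything else into restOfOptions.
const { template: templateName, templateVars, ...restOfOptions } = options;

console.log(templateName);  // "reset-password"
console.log(restOfOptions); // { to: "user@example.com", subject: "Reset Your Password" }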
Next, just inside the function body at the top of the function we define a variable
templatePath that points to the planned location of our HTML email templates:
/lib/email/templates/${templateName}.html.
Here, we pass the
templateName property that we destructured from the
options object passed to our new function (again, the one that's already included in the CheatCode Node.js Boilerplate). It's important to note: even though we're using the name
templateName here, that value is assigned to the options object we pass as
template.
Why the name change? Well, if we look a bit further down, we want to make sure that the variable name
template is still accessible to us. So, we take advantage of the ability to rename destructured properties in JavaScript by writing
{ template: templateName }. Here, the
: templateName part tells JavaScript that we want to assign the value of that property to a new name, in the scope of our current function.
To be clear: we're not permanently changing or mutating the options object here. We're only changing the name—giving it an alias—temporarily within the body of this function; nowhere else.
Next, once we have our template path, we get to work.
First, we set up a new
options object containing the "unpacked" version of our
restOfOptions variable using the JavaScript spread operator. We do this here because at this point, we can only know for certain the options object passed to our function contains the Nodemailer message configuration options.
In order to determine if we're sending our email using a template, we write an
if statement to say "if there's a
templateName present and
fs.existsSync(templatePath) returns true for the
templatePath we wrote above, assume we have a template to compile."
If either
templateName or the
fs.existsSync() check were to fail, we'd skip any template compilation and hand off our
options object directly to
smtp.sendMail().
If, however, we do have a template and it does exist at the path, next, we use
fs.readFileSync() to get the raw contents of the HTML template and store them in the
template variable. Next, we use the
ejs.render() method, passing the HTML template we want to replace content within, followed by the
templateVars object containing the replacements for that file.
Because we're writing our code to support any template (not a specific one), let's take a quick look at an example HTML template to ensure this isn't confusing:
/lib/email/templates/reset-password.html
<html>
  <head>
    <title>Reset Password</title>
  </head>
  <style>
    body {
      color: #000;
      font-family: "Helvetica Neue", "Helvetica", "Arial", sans-serif;
      font-size: 16px;
      line-height: 24px;
    }
  </style>
  <body>
    <p>Hello,</p>
    <p>A password reset was requested for this email address (<%= emailAddress %>). If you requested this reset, click the link below to reset your password:</p>
    <p><a href="<%= resetLink %>">Reset Your Password</a></p>
  </body>
</html>
Here, we have a plain HTML file with a
<style></style> tag containing some generic color and font styles and a short
<body></body> containing the contents of our email.
Notice that inside we have some strange, non-standard HTML tags like
<%= emailAddress %>. Those are known as EJS tags and are designed to be placeholders where EJS will "spit out" the corresponding values from our
templateVars object into the template.
In other words, if our
templateVars object looks like this:
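(A minimal example; the reset link URL below is just an illustrative placeholder that a real app would generate per request.)

{
  emailAddress: "pizza@test.com",
  resetLink: "https://app.example.com/reset-password/some-token",
}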
We'd expect to get HTML back like this from EJS:
<body>
  <p>Hello,</p>
  <p>A password reset was requested for this email address (pizza@test.com). If you requested this reset, click the link below to reset your password:</p>
  <p><a href="https://app.example.com/reset-password/some-token">Reset Your Password</a></p>
</body>
Now, back in our JavaScript code, after we've gotten back our
html string from
ejs.render(), we pass it to the
htmlToText() method we imported to get back an HTML-free, plain-text string (again, this is used for accessibility—email clients fall back to the
text version of an email in the event that there's an issue with the HTML version).
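As a rough sketch of what that conversion does (the exact output depends on the library version and options you use):

import { htmlToText } from "html-to-text";

const text = htmlToText("<p>Hello!</p><p><a href='https://example.com'>Visit us</a></p>");

// text is now something along the lines of:
// Hello!
//
// Visit us [https://example.com]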
Finally, we take the
html once more and pass it to
juice() to inline the
<style></style> tag we saw at the top. Inlining is the process of adding styles contained in a
<style></style> tag directly to an HTML element via its
style attribute. This is done to ensure styles are compatible with all email clients which, unfortunately, are all over the map.
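And a similarly rough sketch of what juice does to a tiny snippet (output shown approximately):

import juice from "juice";

const html = "<style>p { color: #444; }</style><p>Hello!</p>";

// juice moves the rule from the style tag onto the matching element.
console.log(juice(html));
// Roughly: <p style="color: #444;">Hello!</p>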
Once we have our compiled
htmlWithStylesInlined and our
text, as our final step, at the bottom of our
if statement, we assign
options.html and
options.text to our
htmlWithStylesInlined and our
text values, respectively.
Done! Now, when we call our function, we can pass in a
template name (corresponding to the name of the HTML file in the
/lib/email/templates directory) along with some
templateVars to send a dynamically rendered HTML email to our users.
Let's take a look at using this function to wrap things up:
await sendEmail({
  to: args.emailAddress,
  from: settings?.support?.email,
  subject: "Reset Your Password",
  template: "reset-password",
  templateVars: {
    emailAddress: args.emailAddress,
    resetLink,
  },
});
Nearly identical to what we saw before, but notice: this time we pass a
template name and
templateVars to signal to our function that we want to use the
reset-password.html template and to replace its EJS tags with the values in the
templateVars object.
Make sense? If not, feel free to share a comment below and we'll help you out!
|
https://cheatcode.co/tutorials/how-to-send-email-with-nodemailer
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Cookies play an important role while dealing with a user's session on a web application. In this chapter, you will learn about working with cookies in Laravel based web applications.
Cookies can be created with Laravel's global cookie helper. A cookie is an instance of Symfony\Component\HttpFoundation\Cookie and can be attached to the response using the withCookie() method. Create a response instance of the Illuminate\Http\Response class to call the withCookie() method. Cookies generated by Laravel are encrypted and signed, so they can't be modified or read by the client.
Here is a sample code with explanation.
//Create a response instance
$response = new Illuminate\Http\Response('Hello World');

//Call the withCookie() method with the response method
$response->withCookie(cookie('name', 'value', $minutes));

//return the response
return $response;
The cookie() method takes three arguments: the first is the name of the cookie, the second is its value, and the third is the duration in minutes after which the cookie is deleted automatically.
A cookie can be set forever by using the forever method, as shown in the code below.
$response->withCookie(cookie()->forever('name', 'value'));
Once we set the cookie, we can retrieve it with the cookie() method. This method takes only one argument, the name of the cookie, and can be called on an instance of Illuminate\Http\Request.
Here is a sample code.
//’name’ is the name of the cookie to retrieve the value of $value = $request->cookie('name');
Observe the following example to understand more about Cookies −
Step 1 − Execute the below command to create a controller in which we will manipulate the cookie.
php artisan make:controller CookieController --plain
Step 2 − After successful execution, you will receive the following output −
Step 3 − Copy the following code in
app/Http/Controllers/CookieController.php file.
app/Http/Controllers/CookieController.php
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Http\Response;
use App\Http\Requests;
use App\Http\Controllers\Controller;

class CookieController extends Controller {
   public function setCookie(Request $request) {
      $minutes = 1;
      $response = new Response('Hello World');
      $response->withCookie(cookie('name', 'virat', $minutes));
      return $response;
   }
   public function getCookie(Request $request) {
      $value = $request->cookie('name');
      echo $value;
   }
}
Step 4 − Add the following line in app/Http/routes.php file.
app/Http/routes.php
Route::get('/cookie/set','CookieController@setCookie');
Route::get('/cookie/get','CookieController@getCookie');
Step 5 − Visit the following URL to set the cookie: http://localhost:8000/cookie/set (assuming the app is served locally with php artisan serve).
Step 6 − The output will appear as shown below. The window appearing in the screenshot is taken from firefox but depending on your browser, cookie can also be checked from the cookie option.
Step 7 − Visit the following URL to get the cookie set by the above URL: http://localhost:8000/cookie/get.
Step 8 − The output will appear as shown in the following image.
|
https://www.tutorialspoint.com/laravel/laravel_cookie.htm
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Python Connector Libraries for Evernote Data Connectivity. Integrate Evernote with popular Python tools like Pandas, SQLAlchemy, Dash & petl. Easy-to-use Python Database API (DB-API) Modules connect Evernote data with Python and any Python-based applications.
Features
- Fully Compatible with the Evernote Thrift API
- Powerful metadata querying enables SQL-like access to non-database sources
- Push down query optimization pushes SQL operations down to the server whenever possible, increasing performance
- Client-side query execution engine, supports SQL-92 operations that are not available server-side
- Connect to live Evernote data with bi-directional access.
- Write SQL, get Evernote data. Access Evernote through standard Python Database Connectivity.
- Integration with popular Python tools like Pandas, SQLAlchemy, Dash & petl.
- Simple command-line based data exploration of Evernote Notebooks, Notes, Searches, Tags, and more!
- Full Unicode support for data, parameter, & metadata.
CData Python Connectors in Action!
Watch the video overview for a first-hand look at the powerful data integration capabilities included in the CData Python Connectors.
Python Connectivity with Evernote
Full-featured and consistent SQL access to any supported data source through Python
- Universal Python Evernote Connectivity
Easily connect to Evernote data from Python. The Evernote Connector includes a library of 50-plus functions that can manipulate column values into the desired result. Popular examples include Regex, JSON, and XML processing functions.
- Collaborative Query Processing
Our Python Connector enhances the capabilities of Evernote with additional client-side processing, when needed, to enable analytic summaries of data such as SUM, AVG, MAX, MIN, etc.
- Easily Customizable and Configurable
The data model exposed by our Evernote Connector can easily be customized and configured.
Working with Evernote Data in Python
CData Python Connectors leverage the Database API (DB-API) interface to make it easy to work with Evernote from a wide range of standard Python data tools. Connecting to and working with your data in Python follows a basic pattern, regardless of data source:
- Configure the connection properties to Evernote
- Query Evernote to retrieve or update data
- Connect your Evernote data with Python data tools.
Connecting to Evernote in Python
To connect to your data from Python, import the extension and create a connection:
import cdata.evernote as mod

conn = mod.connect("User=user@domain.com; Password=password;")

# Create cursor and iterate over results
cur = conn.cursor()
cur.execute("SELECT * FROM Notebooks")

rs = cur.fetchall()
for row in rs:
    print(row)
Once you import the extension, you can work with all of your enterprise data using the python modules and toolkits that you already know and love, quickly building apps that help you drive business.
Visualize Evernote Data with pandas
The data-centric interfaces of the Evernote Python Connector make it easy to integrate with popular tools like pandas and SQLAlchemy to visualize data in real-time.
engine = create_engine("evernote///Password=password&User=user") df = pandas.read_sql("SELECT * FROM Notebooks", engine) df.plot() plt.show()
More Than Read-Only: Full Update/CRUD Support
Evernote Connector goes beyond read-only functionality to deliver full support for Create, Read, Update, and Delete operations (CRUD). Your end-users can interact with the data presented by the Evernote Connector as easily as interacting with a database table.
|
https://www.cdata.com/drivers/evernote/python/
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Glob is a generic term that refers to matching given patterns using Unix shell rules. Glob is supported by Linux and Unix systems and shells, and the function glob() is available in system libraries.
In Python, the glob module finds files/pathnames that match a pattern. The glob pattern rules are the same as the Unix path expansion rules. Benchmarks also suggest that it matches pathnames in directories faster than other approaches. Apart from exact string search, we can combine wildcards (*, ?, [ranges]) with glob to make path retrieval more straightforward and convenient. Note that this module is included with Python and does not need to be installed separately.
Glob in Python
Programmers can use the Glob() function to recursively discover files starting with Python 3.5. The glob module in Python helps obtain files and pathnames that match the specified pattern passed as an argument.
The pattern rules of glob are based on standard Unix path expansion rules. Researchers and programmers conducted a benchmarking test, and it was discovered that the glob technique is faster than alternative methods for matching pathnames within directories. Other than string-based searching, programmers can use wildcards (*, ?, etc.) with glob to make the path retrieval technique more efficient and straightforward.
To use glob() to find files recursively, you need Python 3.5+. The glob module supports the "**" directive (which is parsed only if you pass a recursive flag), which tells Python to look recursively in the directories.
The syntax is as follows: glob() and iglob():
glob.glob(path_name, *, recursive=False)
glob.iglob(path_name, *, recursive=False)
The recursive value is set to false by default.
For example,
import glob

for filename in glob.iglob('src/**/*', recursive=True):
    print(filename)
Using an if statement, you can check the filename for whatever condition you wish. You can use os.walk to recursively walk the directory and search the files in older Python versions. The latter is covered in a later section.
"Glob patterns specify sets of filenames containing wildcard characters," according to Wikipedia. These patterns are comparable to regular expressions, but they're easier to use.
- The asterisk (*) indicates a match of zero or more characters.
- The question mark (?) corresponds to a single character.
# program for demonstrating how to use Glob with different wildcards
import glob

print('Named explicitly:')
for name in glob.glob('/home/code/Desktop/underscored/data.txt'):
    print(name)

# Using '*' pattern
print('\nNamed with wildcard *:')
for name in glob.glob('/home/code/Desktop/underscored/*'):
    print(name)

# Using '?' pattern
print('\nNamed with wildcard ?:')
for name in glob.glob('/home/code/Desktop/underscored/data?.txt'):
    print(name)

# Using [0-9] pattern
print('\nNamed with wildcard ranges:')
for name in glob.glob('/home/code/Desktop/underscored/*[0-9].*'):
    print(name)
To search files recursively, use the Glob() method.
To get paths recursively from directories/files and subdirectories/subfiles, we can utilize the glob module’s glob.glob() and glob.iglob().
The syntax is as follows:
glob.glob(pathname, *, recursive=False)
and
glob.iglob(pathname, *, recursive=False)
When recursive is set to True, any file or directory will be matched by "**" followed by a path separator ('./**/').
Example: Python program to find files
# Python program to find files recursively using Python
import glob

# Shows a list of names in list files.
print("Using glob.glob()")
files = glob.glob('/home/code/Desktop/underscored/**/*.txt', recursive=True)
for file in files:
    print(file)

# It is responsible for returning an iterator which is printed simultaneously.
print("\nUsing glob.iglob()")
for filename in glob.iglob('/home/code/Desktop/underscored/**/*.txt', recursive=True):
    print(filename)
For previous Python versions, see:
The most straightforward technique is to utilize os.walk(), which is built and optimized for recursive directory tree exploration. Alternatively, we may use os.listdir() to acquire a list of all the files in a directory and its subdirectories, which we can then filter out.
Let’s look at it through the lens of an example:
# program for finding files recursively by using Python
import os
import fnmatch  # needed for fnmatch.filter() below

# Using os.walk()
for dirpath, dirs, files in os.walk('src'):
    for filename in files:
        fname = os.path.join(dirpath, filename)
        if fname.endswith('.c'):
            print(fname)

"""
Alternatively, let us use fnmatch.filter() for filtering out results.
"""
for dirpath, dirs, files in os.walk('src'):
    for filename in fnmatch.filter(files, '*.c'):
        print(os.path.join(dirpath, filename))

# employ os.listdir()
path = "src"
dir_list = os.listdir(path)
for filename in fnmatch.filter(dir_list, '*.c'):
    # use the base path here since os.listdir() returns bare file names
    print(os.path.join(path, filename))
Example: Glob() with the Recursive parameter set to False
import glob

print('Explicitly mentioned file :')
for n in glob.glob('/home/code/Desktop/underscored/anyfile.txt'):
    print(n)

# The '*' pattern
print('\n Fetch all with wildcard * :')
for n in glob.glob('/home/code/Desktop/underscored/*'):
    print(n)

# The '?' pattern
print('\n Searching with wildcard ? :')
for n in glob.glob('/home/code/Desktop/underscored/data?.txt'):
    print(n)

# Exploring the pattern [0-9]
print('\n Using the wildcard to search for number ranges :')
for n in glob.glob('/home/code/Desktop/underscored/*[0-9].*'):
    print(n)
In the example above, we must first import the glob module. Then we must supply the path to the Glob () method, which will look for any subdirectories and print them using the print() function. Next, we’ll append different patterns to the end of the path, such as * (asterisk),? (wildcard), and [range], so that it can fetch and display all of the folders in that subdirectory.
Example: Glob() with the Recursive parameter set to True
import glob print("The application of the glob.glob() :-") fil = glob.glob('/home/code/Desktop/underscored/**/*.txt', recursive = True) for f in fil: print(f) # an iterator responsible for printing simultaneously is returned print("\n Applying the glob.iglob()") for f in glob.iglob('/home/code/Desktop/underscored/**/*.txt', recursive = True): print(f)
It is another program that demonstrates recursive traversal of directories and subdirectories. We must first import the glob module. Then we must supply the path to the Glob () method, which will look for any subdirectories and print them using the print() function.
Then we’ll utilize patterns like ** and * to represent all sub-folders and folders within that path string. The first parameter is the string, while the second parameter, recursive = True, determines whether or not to visit all sub-directories recursively. The same is true with iglob(), which stands for iterator glob and produces an iterator with the same results as Glob () but without storing them all at once.
Conclusion
Accessing files recursively in your local directory is a crucial technique that Python programmers must implement in their applications when searching for files. Glob patterns, which resemble simplified regular expressions, play a crucial role in discovering files recursively in Python programming.
Glob is a term that refers to a variety of ways of matching preset patterns according to the Unix shell's rules. Systems such as Unix and Linux, and their shells, support globbing and provide the glob() function in their system libraries.
Glob() and iglob() are two fundamental methods that, depending on the second parameter value (True/False), run over the path either straightway or recursively. Because Python has made it efficient as a method, it is more beneficial than any other manual way.
In this tutorial, you’ve learned how to use the Glob () function in Python programs to discover files recursively. We are hopeful of its informativeness, and you enjoyed it as we did.
|
https://www.codeunderscored.com/how-to-use-the-glob-function-to-find-files-recursively-in-python/
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
- Introduction
- Adding Structure to the Tower of Babel
- XML-RPC
- Simple Object Access Protocol (SOAP)
- Conclusion
- Additional Resources
Simple Object Access Protocol (SOAP)
The Simple Object Access Protocol (SOAP) grew out of XML-RPC and has eclipsed XML-RPC in terms of media coverage due to its use throughout Microsoft's .NET architecture. On the surface, SOAP and XML-RPC look very much alike, as both are XML-based RPC mechanisms that make use of HTTP as a communications transport. While XML-RPC is extremely simple, this simplicity can also be limiting. SOAP goes several steps further by allowing support for XML namespaces, interface discovery, and custom data types. All SOAP messages are contained within an envelope, which is used to describe the message being sent as well as the sender. Depending on your application, XML-RPC may very well meet all your needs. However, if you're required to send complex data types over the wire or need to control how your message is processed on the server-side, SOAP fits the bill. The following code snippet demonstrates a SOAP message and response, using our informIT.getUserAddress RPC call from earlier in this article.
Message:

<?xml version='1.0'?>
<SOAP:Envelope xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP:Body>
    <!-- The "i" namespace URI below is an illustrative placeholder; the original value was lost. -->
    <i:getUserAddress xmlns:i="urn:informIT-example">
      <userID>198172</userID>
    </i:getUserAddress>
  </SOAP:Body>
</SOAP:Envelope>

Response:

<SOAP:Envelope xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP:Body>
    <i:fResponse xmlns:i="urn:informIT-example">
      <address>123 N. Elm Street Fargo, North Dakota</address>
    </i:fResponse>
  </SOAP:Body>
</SOAP:Envelope>
|
https://www.informit.com/articles/article.aspx?p=23580&seqNum=5
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Programs Using Loops
Q20.Write a program to display all the even numbers from 25 to 60.
Q21.Write a program to display all the odd numbers from 230 to 190 (in reverse order )
Q22.Write a program to display all the multiples of 37 from 380 to 600. (use while loop )
Q23.Write a program to accept a number then display its multiplication table. If the entered number is 5 then it should display as below-
5x1=5
5x2=10
.
.
5x10=50

Q24.Write a program to accept a number then display the next 10 odd or even numbers.
Q25.Write a program to accept the range ( lower limit and upper limit) then display all the even or odd numbers within the range as per users choice
Program-20
#include <iostream.h>
void main(){
   int i;
   for(i=25; i<=60; i++){
      if(i%2==0){
         cout<< i <<" ";
      }
   }
}

In the above program, note that we are generating all the numbers from 25 to 60 and then checking whether each number is divisible by 2; if yes, we display it because the number is even. In this program the "if" statement is a time-consuming statement; if you can avoid it, your program will run faster. Though it does not matter in this small program, you should still do your best to make your program an efficient one in all respects. Therefore let us write the same program without using the "if" statement. One more thing: see that after the second pair of "<<" (output operator) a gap within quotes is used, which formats the output as 26 28 30 .... If it is not given, the output will be 262830... You can use "\n", which will display the output on separate lines.
Program-20 (without if)
#include <iostream.h>
void main(){
   int i;
   for(i=26; i<=60; i=i+2){
      cout<<i<<" ";
   }
}

In the above program, see that the loop starts with 26, not 25; since we have to display even numbers, the first even number within the range 25 to 60 is 26. Also see that the looping variable is incremented by 2 (i=i+2 can be written as i+=2), hence it always generates the next even number. We need not worry about whether the upper limit is even or odd; the looping statement will take care of it automatically.
Program-21
#include <iostream.h>
void main(){
   int i;
   for(i=229; i>=190; i=i-2){
      cout<< i <<" ";
   }
}

In the above program, see that the loop starts with 229; do not write 231 (a common mistake students make). For the upper limit, leave it as it is. The looping condition is i>=190 because it must be true for the loop to run, and the looping variable is decremented by 2.
Program-22
#include <iostream.h>
void main(){
   int i=407;
   while(i<=600){
      cout<<i<<" ";
      i=i+37;
   }
}

In the above program, see that the loop starts with 407, which is the first multiple of 37 within the range 380 to 600. The looping variable "i" is incremented by 37 to get the next multiple of 37.
Program-23
#include <iostream.h>
void main(){
   int i, a;
   cout<<"Enter the number=";
   cin>>a;
   for(i=1; i<=10; i++){
      cout<< a << "x" << i << "=" << a*i << "\n";
   }
}

The output of the above program is the multiplication table of the entered number, as shown for Q23 above.
Program-24
#include <iostream.h>
void main(){
   int i, a;
   cout<<"Enter the number=";
   cin>>a;
   for(i=1; i<=10; i++){
      cout<< a << " ";
      a=a+2;
   }
}

In the above, see that the loop is used to count how many numbers are printed, not to generate the numbers. The numbers are generated by the statement a=a+2.
Program-25
#include <iostream.h>
void main(){
   int i, L, U, choice;
   cout<< "Enter the lower limit=";
   cin>>L;
   cout<< "Enter the upper limit=";
   cin>>U;
   cout<<"Enter 1 for odd numbers or 2 for even numbers=";
   cin>>choice;
   switch(choice){
      case 1:
         if(L%2==0){
            L=L+1;
         }
         for(i=L; i<=U; i=i+2){
            cout<< i << " ";
         }
         break;
      case 2:
         if(L%2==1){
            L=L+1;
         }
         for(i=L; i<=U; i=i+2){
            cout<< i << " ";
         }
         break;
      default:
         cout<< "Sorry, you have not chosen an allowed option !!!";
   }
}

In the above program, see that a variable "choice" is taken to accept the user's choice (even or odd) by allowing the user to enter 1 or 2 as needed. If the user chooses 1, "case 1" of the switch statement executes; similarly, "case 2" executes for 2. If any other number is entered, the message in the "default" case is shown. Note that under "case 1" we have to generate odd numbers as per the option given, so we check whether the lower limit (variable L) is an even number; if yes, we start from the next number, hence L=L+1. The same applies for even numbers: we check whether L is an odd number and treat it in the same way. Take care of the proper case (upper or lower) of variable names because C++ is case sensitive.
|
https://www.mcqtoday.com/CPP/flowloop/programsUsingLoop.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Published: 14 Oct 2008
By: Faisal Khan
Review of the book Silverlight 2.0 In Action
With the invention of Silverlight, we already stepped into the future of web application. With the release of Silverlight 2, the door has been opened for us to use the power of the .NET Framework to reach beyond the desktop to deliver fascinating experiences across a variety of web browsers, platforms and devices. The implementation of this new technology has been explained wonderfully by Chad Campbell and John Stockton in their book, “Silverlight 2 In Action”, as they teach Silverlight step by step. One of the excellent features of this book is that the chapters have been ordered logically to help developers understand the in-depth concepts behind Silverlight 2. Whether you’re a professional or a novice programmer and want to get your feet wet with the world of RIA (Rich Internet Application), this is exactly the book you want. I’m expressing my personal opinion here about each chapter of this book.
In chapter 1 productivity, performance and portability are discussed nicely. The Designer/Developer role is so important in the world of RIA and that has been described by the writers of this book in an innovative way. How XAML enables a refreshing, collaborative experience between designers and developers that allows us to create valuable software solutions in shorter amounts of time has also been explained in this chapter with some great examples.
One of the highlights of this chapter is an in-depth analysis of XAML and how XAML pages support the concept of code-behind pages to enable us to separate code from design by placing the UI related code in the XAML and the executable code within a linked source file. The authors did a great job of clarifying this concept. You’ll find all the crucial issues such as namespaces, compound properties and attached properties in this chapter.
The first walkthrough demonstrates the seamless integration between the Microsoft Visual Studio and Blend tools. How expression Blend tools can be used to stylize the media has been shown in this walkthrough. After reading this chapter, I have come in contact with the power of XAML and the most important stuff, using Expression Blend and Visual Studio together. This is vital for any integration scenario where design and development needs to be combined to make a good Silverlight application.
In Chapter 2 you’ll gain a deeper understanding of the Silverlight plug-in. You’ll learn the relationship between Silverlight and the HTML DOM and how to manage HTML DOM elements from managed code. For example, how can we place Silverlight anywhere we want within a web property and how DOM allows us to embed a Silverlight plug-in within it, etc. You’ll also find the HTML DOM, Silverlight plug-in and the Silverlight Object Model described in detail here. You’ll gain knowledge about creating the Silverlight control and how this control can be created at least in two different ways.
Many core concepts are discussed in this chapter.
The last section of this chapter discusses how Silverlight allows us to create a bridge between the scripting and managed code which includes basic steps intended to expose managed code elements to the scripting world. It describes how we are free to reference the managed elements from JavaScript. After reading this chapter, I’ve strengthened my previous knowledge of HTML DOM concepts. I was a little confused about embedding Silverlight plug-in within the DOM and managing web page from code but this chapter has removed my confusion.
Chapter 3 is all about layout panels and text. The writers have shown here the use of the Canvas element to deliver rich ink content. They’ve shown us how the content within a Canvas is automatically arranged for us. How the attach property can be used to move the content at some other places, has been explained nicely. The use of the StackPanel to represent a grouping of visual elements has been illustrated with some useful examples such as how each successive visual element is positioned in a vertical or horizontal fashion within a single row or column; or how we can use the Orientation property of the StackPanel to specify whether child elements are stacked in a vertical or a horizontal manner, etc.
The power of the Grid control has been shown with its ability to easily layout content in a tabular format. Use of Silverlight Grids, Rows and Columns two distinct collections, ColumnDefinitions and RowDefinitions are explained quite intelligently. All the features of the Grid have been represented by showing some declarative and procedural examples. Text plays a vital role in Silverlight applications. You’ll see some examples of the TextBlock element to flexibly display text in various situations. The next section of this book covers the UIElement and FramworkElement. This section discusses how a significant amount of flexibility enhances our applications in innovative ways. A number of valuable and interesting methods and properties of UIElement and FrameworkElement have been described here. In this chapter I’ve learnt about the pillars of the Silverlight UI, which are crucial to structure any good Silverlight UI. Another basic concept, which I think is necessary for anyone who wants to grasp a good understanding of UI principles, is the concept of UIElement and FrameworkElement. This chapter does a nice job of simplifying this issue.
In Chapter 4, you’ll step into the world of user interaction. You’ll learn in this section how user input is a vital part of virtually every web application and how to collect this input. You’ll become familiar with the issues regarding Silverlight’s direct support for input devices like keyboard, mouse, and stylus through the use of the System.Windows.Input namespace. Here you’ll gain a strong understanding of focus in order to handle your key strokes. Two events directly related to the keyboards are explained in this part nicely.
You’ll learn how to deal with modifier keys. Modifier keys are keys that are used in combination with other keys. These are necessary because the KeyEventArgs class only exposes information about the currently pressed key. Because of this, if you select something like the SHIFT or CTRL key, and then another key, the initially selected key data will be lost. To overcome this problem, you learn how you can take advantage of the statically visible keyboard class that has been described here with a procedural code example. You’ll also learn how to trap the mouse. Mouse related events available in Silverlight are discussed greatly in this section. The mouse is slightly more complex than the keyboard. In addition to button related events, the mouse can also respond to movement. Movements can be measured to enhance experience alongside the keyboard and to take the measurements and deliver a user-friendly drag-and-drop interface, including some very useful tips that you’ll find here.
The next section of this chapter focuses on delivering the text controls. Over the course of this section, you will learn how to handle basic text entry with the TextBox. In addition, you will see how to collect hand written data through the powerful InkPresenter control. You'll learn how selected text can be programmatically retrieved through three TextBox properties. To gather and display ink with the InkPresenter you'll learn three important steps, including how to create the InkCanvas and also how to style the ink. In the next part you'll become familiar with the general implementation of a Button and how it is spread across two classes, ButtonBase and ContentControl. ButtonBase is an abstract base class used by all Buttons in Silverlight. The section also covers three members of the ButtonBase class that are directly related to user interaction.
Some important things about the ContentControl, and how it is designed to display a single piece of content, are described here. The other sections of this chapter deal with HyperlinkButton, RadioButton, CheckBox, ItemControls, ListBox, TabControl, Date Controls, Calendar, and Datepicker, which are so important from a Silverlight developer's perspective. Silverlight enables you to simulate a dialog box through a control called the Popup. You'll learn all about the Popup in this section. You'll understand the Popup's behavior and how to position it to get your feet wet. Next you'll learn how to prompt for a file using the OpenFileDialog class. Throughout this section you will learn the three steps involved in interacting with an OpenFileDialog. Next you'll find two more important elements, Border and Slider. You'll understand what the RangeBase class actually does. This chapter clarifies the main characteristics and purpose of user interaction.
In Chapter 5 you’ll find all the data related issues. Throughout this chapter you will see how to handle data once it is in memory. You’ll see the different sources of data that you can bind to. The data binding mechanism available in Silverlight has been presented here with some good examples. You’ll also learn to convert data when it comes in different formats. From there you will experience how to work with data through a DataGrid. Finally, this chapter ends with an overview of the distinguishing data querying enhancement known as LINQ. I’ve strengthened my knowledge of querying data with LINQ from this chapter.
You’ll get a clearer view of networking and communication in this chapter. How networking and communications enable us to send and receive data with variety of technologies is a major aspect of this chapter. Concepts behind working with technologies like SOAP services, XML, JSON, RSS, ATOM and Sockets are discussed here. You’ll learn the basics of connecting to different services and how to parse data once received. You’ll also find a couple of ways to enable push communications in this chapter. You’ll become aware of the very important concept of cross-domain and the clientaccesspolicy.xml file. This chapter shows why it’s necessary to put a clientaccesspolicy.xml file at the root of the domain hosting any web service that is allowed to be accessed from a different domain. Limitations of the browser connection limits and asynchronous calls have also been discussed. To me this, is one of the most important chapters as far as sending and receiving data is concerned and this really helped me to work with JSON, RSS and ATOM in Silverlight.
In chapter 7 you will learn how to use various items from within the System.Windows.Controls namespace. This chapter shows how to manage media experience through the use of playlists and interactive playback. From there you’ll learn how to access protected content and finally, you’ll see how to incorporate traditional images and utilize the existing DeepZoom features. This chapter shows how Silverlight goes far beyond standard web capabilities by providing a full screen mode. You’ll learn how Silverlight enables you to view two different screen modes, toggling between screen modes, etc. The authors have skillfully presented the issues related to using protected content, requesting protected content, retrieving the PlayReady components, unlocking protected content, using Images, basic imaging, showing an Image with the MultiIScaleImage control to get the DeepZoom experience, deploying MultiScaleImages and still more. to take you to the next crucial round of your Silverlight journey. Anyone who is very interested in DeepZoom will find this chapter extremely helpful.
In chapter 8 you’ll get a grip on Silverlight graphics which plays such an important role in attracting the user’s attention. You’ll learn here that the graphics within Silverlight are mathematically based objects and how this makes them ideal for Internet distribution. Concepts of geometries have been discussed here and after this you’ll learn how to paint your shapes and alter the way in which they are rendered. You’ll become familiar with all of the graphical elements like Shape, Rectangle, Line, Ellipse, Polyline, Polygon etc.
To paint these elements, the usage of different Brushes like SolidColorBrush, LinearGradientBrush, RadialGradientBrush, ImageBrush and VideoBrush are covered. The next section of this book shows the uses of the Transform element to alter the appearance of any UI Element within Silverlight. You’ll learn the usage of RotateTransform, ScaleTransform, SkewTransform and TranslateTransform. You will see how to create some of these graphical features using Blend. From there, you will know how to write the code that responds to a user’s actions to interact with a graphical element. I gathered useful knowledge on graphics and their transformations from this chapter.
Chapter 9 focuses on animation. Throughout this chapter you'll learn how to bring life to the various elements and other objects with the help of Silverlight's built-in animation. You'll see the practical uses of animation to dramatically improve a user's experience. How animation in Silverlight changes a single property value over a period of time has been discussed in this chapter with some great examples. The chapter covers all the important terms needed to master this area of Silverlight. Mastering the Timeline, what type of property you're animating, the three types of animations which will assist you in creating dramatic effects, where you are starting from and where you are going, how long the animation should run, throttling the animation, an in-depth analysis of the storyboard, the usage of a storyboard as a resource and trigger, key framing, interpolations like Linear Interpolation, Spline Interpolation and Discrete Interpolation, and timing are all demonstrated in this chapter.
After completing this chapter you’ll gain the complete understanding of Silverlight animation. This is one of the most amazing chapters of this book. For someone who is interested in stepping into the rich and cool world of animation, this chapter is ideal. This chapter helped me to really comprehend the tips and tricks of animation.
This is a crucial and a fun chapter and you’ll learn very important and enjoyable features of Silverlight. The cornerstone of this chapter is to give your application’s UI a different look and feel to bring richness by using Style, Template and VisualStateManager. You’ll also learn contents that are non executable pieces of data and the image, media file as well as the XAML file fall into one of three categories: declarative resources, loose resources and embedded files.
You’ll learn here why a resource is a vital part of a Silverlight application. In the declarative resources section you’ll know what ResourceDictionary actually is and where and how to define it; how to use storyboards, styles and template items as resources, how resources can be defined at design time and run time. In the loose resources section of this chapter, you’ll see how Silverlight provides the ability to access loose resources. Two different ways of accessing local resources have been discussed in a section of this chapter. In the bundling resources section you’ll come to know two types of bundled resources and their uses.
In the next section of this chapter you’ll learn how files can be added as content to a project then built to a xap file structure. You’ll also become aware of the fact that embedded files in Silverlight can also be an application or a library. The next section discusses giving styles to the elements, and how to define their look by setting the Styles property, which is available to every FrameworkElement. The “Creating templates” section will show you how to redefine the entire visual representation of elements to get the desired look. In the “Dealing with visual state” section you’ll learn how to transition from one state of a control to another. In the following section you’ll know how VisualStatesManager relies on variety of components to do its job and why these components are referred to as the parts and states model. This chapter has given me the concepts of logical and visual structure of the control and enough ideas to apply themes to the application by the use of resources in Silverlight.
Chapter 11 discusses how to enrich the user’s experience using Silverlight’s runtime features, and how these features give us the ability to store and retrieve data on the user’s machine through isolated storage. You’ll see the process of loading XAML, processing data, and downloading files without interrupting the UI. You’ll know here how to access the isolated storage area for a user and use it to manage a virtual file system. Listing the contents of the virtual file system, removing items from Isolated Storage, creating directories with Isolated Storage, checking the available space, requesting more space, how a file within the isolated storage can be created and retrieved through a file stream, administering the isolated storage are all discussed in this chapter.
You’ll also learn how to use XAML at runtime. In the following section of this chapter you’ll learn all about the Background worker. How it can be used for performing a task behind the scenes and how it enables us to asynchronously perform a task on a thread separate from the UI thread. You’ll gain an understanding of how the BackgroundWorker is useful for web service calls, complex calculations and other time-consuming operations.
You’ll get a grip on retrieving content on demand with WebClient class. How this class acts as a special utility class that enables us to asynchronously download the content has been clarified by the authors in this chapter. Requesting string and binary data, managing larger download requests, loading the string content and media, loading fonts, loading compressed packages, loading application modules, how to deal with the termination when On Demand download requests stopped in one of two ways: intentionally or unintentionally. This chapter finishes with one of the most powerful Dynamic Language Runtime issues. How Dynamic Language Runtime (DLR) enables on-the-fly compilation and execution of a variety of scripting languages such as Jscript, IronPython and IronRuby. You’ll know how these languages take advantage of features such as garbage collection, memory management, and type-safety checking. This section discusses the advantages and disadvantages of two interesting characteristics of dynamic languages. I left this chapter with a crystal clear idea of DLR, WebClient and BackgroundWorker.
Chapter 12 has shown the process of creating Silverlight user controls. Some key factors have been presented here. For example, how a user control can be used as a page in Silverlight. After completing this chapter you’ll be able to define the appearance and behavior of the user control. You’ll learn how to enhance CLR properties. You’ll know how to take advantage of the dynamic characteristics of the Dependency property.
This chapter also shows how to include user controls in your application by adding xml namespace that references the location of the user control. Silverlight does not provide an elegant way to switch between XAML files directly out-of-the- box. To use more than one XAML file and switch between them how you have to architect your application and you’ll learn all about those tricks in this chapter. The discussion here uses an imaginary master-detail implementation to demonstrate how this architecture works. You’ll learn how to create a custom splash screen in a scenario where your application is larger in size or when network will run slow. Here you splash screen’s appearance will be fixed and how splash screen is integrated is shown in this section. You’ll also know how to monitor the progress and update the splash screen when download is proceeding.
The last section of this chapter describes Silverlight streaming. Silverlight streaming is a free hosting service which give you several gigabytes of space to rapidly deploy Silverlight content. You’ll learn how this content can be hosted in the form of an audio, video or image file. You’ll get the concept of packaging your Silverlight content to be deployed over Silverlight Streaming. Initially, I was little confused about the Dependency property in Silverlight and streaming, but chapter has removed all the confusion.
This is such a wonderful book that it has thrust me into the realm of innovation. I'm strongly recommending this book to everyone, whether you're a professional developer or just a beginner who wants to learn the basics of Silverlight and experience the tips and tricks of building RIAs (Rich Internet Applications). After thoroughly reading this book, I have come to the conclusion that this book is enough to turn this world around with Silverlight. Kudos to Chad Campbell and John Stockton for their excellent writing.
David J Kelley
Silverlight MVP
|
http://dotnetslackers.com/articles/silverlight/Review-Silverlight-2-In-Action.aspx
|
crawl-002
|
en
|
refinedweb
|
Published: 22 Feb 2008
By: Alessandro Gallo
Download Sample Code
Get up to speed with ASP.NET AJAX by building a simple Virtual Earth mashup.
In the first part of this series, we started with an example of a
simple web page that displays a Virtual Earth map on screen. The purpose of the code was to highlight some bad habits in writing markup and
JavaScript code. We promised ourselves to rewrite the same page in order to take advantage of ASP.NET and the new AJAX framework. This is
what we are going to do starting from part two of this series. As a consequence, our ASP.NET AJAX page will contain a Panel control, which is responsible for rendering the div element that
hosts the Virtual Earth map:
So far, everything is straightforward. In order to host a Virtual Earth map inside the div element, we need a Server Control
that performs two tasks:
- inject a script tag that references the Virtual Earth map control script, and
- create and configure, on the client side, the JavaScript object that hosts the map.
The code in listing 1 shows this Server Control declared in the ASP.NET AJAX page.
The Server Control in the previous listing is not a typical ASP.NET control. In fact, it's a new kind of control - provided by
ASP.NET AJAX - called an Extender.
One reason for creating an Extender is that it lets you inject a script tag programmatically by instantiating an object of
type ScriptReference. However, this can be done with classic ASP.NET controls, simply by invoking one of the methods of the
ClientScriptManager object, which is accessible through the Page.ClientScript property. Actually, we are already gaining
something because we are encapsulating this logic inside a dedicated object, instead of interacting directly with the ClientScriptManager
class.
Where the Extender really shines is in its ability to instantiate and set up JavaScript objects. How can you do that with classic ASP.NET?
Well, you could build the JavaScript code as a big string and inject it in the page using the ClientScriptManager. Needless to say, this isn't
a good approach. With ASP.NET, we can render markup dynamically, and yet we're relying on strings in order to inject JavaScript code in the
page.
With an Extender, we can solve this problem in the following way: we can associate a special JavaScript object, called a client Control,
to a HTML element in the page. In terms of ASP.NET markup, this means creating an Extender and associating it with another Server Control -
the "extended" control. Take a look at listing 1. Notice how we are associating the VirtualEarthExtender to the Panel through the
TargetControlID property.
At this point, the Extender is able to generate the JavaScript code needed to create an instance of the client Control and wire it to the
HTML rendered by the extended control (the Panel in our example). Furthermore, we can control - from the server side- how the JavaScript
object is initially configured.
Based on the previous discussion, this is what you are going to do in the following paragraphs:
ASP.NET AJAX ships with a client library - written in JavaScript - called the Microsoft Ajax Library. With the help of this library, you
will create a client Control that encapsulates an instance of a Virtual Earth map.
Why create a wrapper over a VEMap object? There are multiple reasons. One is that the Microsoft Ajax Library enables you to do
component-based development on the client side. However, what interests us now is the possibility to use a client Control to manage the
lifecycle of our JavaScript object. Let's start by taking a look at the following listing, which reports the code for the
Mashup.VirtualEarth client Control.
Mashup.VirtualEarth
The first line tells the Microsoft Ajax Library to create a namespace called Mashup. Under this namespace, the
VirtualEarth control is declared as a function and then registered as a Control. This happens in the last line - the first after
the function body - through the call to the registerClass method. This method "upgrades" our JavaScript function to an ASP.NET
AJAX client class and makes it inherit from Sys.UI.Control, which is the base class for client Controls.
The base Sys.UI.Control class provides two interesting methods called initialize and dispose.
These are the best places to perform the initial setup and the final cleanup of an instance. An example is attaching event handlers in the
initialize method and then detaching the same handlers in the dispose method. As a consequence, this lets you
abandon the bad habit of hooking-up event handlers in the HTML. Furthermore, it enables you to detach handlers before the instance is
destroyed, thus avoiding potential memory leaks.
Managing the lifecycle of a JavaScript object is a good practice, and it's just one of the many features provided by client Controls.
Finally, look at how the Mashup.VirtualEarth function is declared. It accepts an argument called element and passes
the same argument to the base class by calling the initializeBase method. Every client Control is associated with one and only
one HTML element. In this example, the associated element will act as the container for the Virtual Earth map, since it's being passed to the
VEMap constructor inside the initialize method.
Now that you've got a client Control, you need to wire it to a server Control - an Extender. This will enable you to instantiate and
configure the client Control from the server side, without writing a single line of JavaScript code.
Now you are going to create a VirtualEarth Extender that lets you instantiate and configure an instance of the
Mashup.VirtualEarth client Control. To create an Extender, you have to declare a class that inherits from the base
ExtenderControl class, as shown in the following listing.
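A skeleton of the class declaration being described (the namespace is illustrative; the members are shown in the later listings):
using System.Web.UI;
using System.Web.UI.WebControls;

namespace Mashup.Web
{
    [TargetControlType(typeof(Panel))]
    public class VirtualEarthExtender : ExtenderControl
    {
        // Properties and script registration shown below.
    }
}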
Just before the class declaration, there's a TargetControlType attribute that specifies which server Controls can
be extended by our VirtualEarth Extender. In this case, you are restricting the extended Control to be a Panel, since it will render the div
element that will host the Virtual Earth map.
The next thing to do is declare some properties that allow setting the values of the counterpart properties of the client Control. The
following listing shows how it's done.
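A sketch of those properties; simple auto-implemented properties are assumed here, though the original may have stored the values differently:
public double InitialLatitude { get; set; }

public double InitialLongitude { get; set; }

public int ZoomLevel { get; set; }

// Optional override for the Virtual Earth script URL (discussed later).
public string ApiUrl { get; set; }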
The InitialLatitude and InitialLongitude properties let you specify the coordinates of the initial
location at which the map is centered. When the Mashup.VirtualEarth instance is created on the client side, these values are
mapped to the initialLatitude and initialLongitude properties. The same thing happens for the ZoomLevel
property. The purpose of these server properties is to let the developer configure a JavaScript object - specifically, a client Control -
from the server side.
Finally, we need to write the code that creates an instance of the client Control and injects the references to the script files that
contain the JavaScript code. You accomplish this task by overriding two methods of the base ExtenderControl class:
GetScriptReferences and GetScriptDescriptors. This is demonstrated in the following listing.
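A sketch of the two overrides, placed inside the VirtualEarthExtender class; the script path is illustrative and the client property names mirror the ones used above:
protected override IEnumerable<ScriptReference> GetScriptReferences()
{
    // One reference for the Virtual Earth API, one for the client Control script.
    yield return new ScriptReference(ApiUrl);
    yield return new ScriptReference(ResolveClientUrl("~/Scripts/Mashup.VirtualEarth.js"));
}

protected override IEnumerable<ScriptDescriptor> GetScriptDescriptors(Control targetControl)
{
    var descriptor = new ScriptControlDescriptor("Mashup.VirtualEarth", targetControl.ClientID);
    descriptor.AddProperty("initialLatitude", InitialLatitude);
    descriptor.AddProperty("initialLongitude", InitialLongitude);
    descriptor.AddProperty("zoomLevel", ZoomLevel);
    yield return descriptor;
}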
The GetScriptReferences method returns a collection of ScriptReference objects. Each object specifies
the path to a JavaScript file. These paths (or URLs) will be injected in a script tag rendered in the page by the ScriptManager
control.
The GetScriptDescriptors method returns a collection of ScriptDescriptor objects. Script Descriptors are
used by the ScriptManager to render the JavaScript code necessary for creating and configuring an instance of a client component. In this
case, we are using a ScriptControlDescriptor because we are going to instantiate a client Control.
Now that the Extender is ready, let's browse the ASP.NET page and take a look at the source code. The first thing to notice is that the
VirtualEarth Extender injected two script tags, just after those that reference the Microsoft Ajax Library files:
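The two injected tags look roughly like this; both URLs are illustrative:
<script src="http://dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6" type="text/javascript"></script>
<script src="/Scripts/Mashup.VirtualEarth.js" type="text/javascript"></script>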
The first script tag references the Virtual Earth API, using the default URL or the one specified in the
ApiUrl property of the Extender. The second script tag references the Mashup.VirtualEarth client Control. Both
these scripts were referenced in the GetScriptReferences method of the Extender.
Finally, let's take a look at the JavaScript code injected in the page by the Virtual Earth Extender:
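The injected code follows the pattern below; the element ID and coordinate values are illustrative:
Sys.Application.add_init(function() {
    $create(Mashup.VirtualEarth,
        { "initialLatitude": 47.64, "initialLongitude": -122.13, "zoomLevel": 10 },
        null, null,
        $get("Panel1"));
});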
Sys.Application is a global object created by the Microsoft Ajax Library at runtime. This object accomplishes
some interesting tasks, such as hosting the client components instantiated on the client side. By calling the add_init method,
you can participate in the Init stage of the client side page lifecycle. There, you have the opportunity to instantiate and configure the
Mashup.VirtualEarth client Control. You can do that by calling the $create method and passing the configuration parameters, as
shown in the previous code snippet. Not only does $create offer a convenient syntax to instantiate client Controls; it also
allows Extenders to leverage the same syntax in order to automatically create instances of client components at runtime. As you can see,
everything has been set up on the server side, without actually writing a single line of JavaScript code.
In the next part of this series, we will expand our Virtual Earth Control and explore other kinds of interactions between the server side
and the client side of an ASP.NET AJAX Web Application.
In the second part of this introduction to ASP.NET AJAX, we introduced client Controls and Extenders. Thanks to the Microsoft Ajax
Library, we can do component-based development on the client side. We are able to encapsulate JavaScript code into a component that can be
initialized, set up and disposed in a standard way.
By creating an Extender, we can wire a client Control to a server Control. Then, we can configure the client Control on the server side
through the Extender's properties. Finally, Extenders take care of injecting in the page the necessary script references and the JavaScript
code that instantiates a client component.
|
http://dotnetslackers.com/columns/ajax/ASPNETAJAXMeetsVirtualEarthPartTwo.aspx
|
crawl-002
|
en
|
refinedweb
|
- about flash cs3'components
- null object reference
- Class Clarification
- Get a text field to mirror the contents of another text field [renamed]
- How to simply call a function?
- Access properties of getChildAt
- Displaying Library Content in AS 3.0 code in a class??
- all I wanna do
- Help with a for statement in AS3/Flash9
- holly cow amf rocks the network
- i think i've found my error
- XML Search
- Write to file, in flash ??
- onLoadInit help ?
- what data type to use?
- Scroll sequence of images?
- Packages
- Is it possible to adjust width / height for an external loaded swf?
- need help urgently
- Stop sound from playing continuosly on keyisdown
- Would an interactive CD-ROM require a preloader?
- Install Flash 8 and CS 9
- item movement on mousemove....
- ExternalInterface : any luck?
- AS3 and button functions
- Use of packages
- [Question]Export MXML from FireworksCS3
- AS3 component architecture
- A good AS3 tutorial, anyone know one?
- How can I sequentially load xml files?
- Filename as variable
- Apollo compiler and mx: classes
- clickTag in AS3?
- AS3 source architecture
- as3 calllater
- Help with modal support in actionscript 3
- AS3 custom component does not appear on stage
- [as3] Flash Memory Use, THE SEQUEL!
- UpdateAfterEvent
- URLLoader and Variables HELP!!!!
- Targeting Instances within Assets through Classes
- HELP with embedding Snocap Music Store in my Flash Website
- FLASH: TypeError: Error #1010 when calling a class (Flash 9 - AS3)
- Weird problem... preloading stuff
- problem with text loading in XML file
- Access a Linkage Clip From an External Class
- Custom Class Optimization?
- Dispatch an event from root which should received by childs.
- Dynamic Class Instantiation
- Post method Proxying Data
- 1037 Error
- Best onRollOver substitute?
- Loader class to load swf
- Security Sandbox Violation
- flex2,php,mysql
- unload() and NetStreams
- how to make a preloader in CS3?
- Bitwise gems
- tweening and color
- Table variable
- XML parsing speed...
- Class glue
- hitTestObject area
- Downloadable .fla preloader
- Flex/AS3 casting question
- alternative for eval [] in AS3 ?
- Tracking graphics loading proccess
- AS3 conclusion: dynamic things are evil
- How to attach button behaviour to displayObject?
- attachSound gone - So how can we load random sounds from library?
- dropTarget
- Using packages obtained from BOOKS...
- Help with XML: Traversing
- Error doesn't make sense help please
- Getting duration from netstream or video
- horizontal slide gallery
- class App extends Box
- MP3 header navigation
- [AS3] Waiting on a frame for a random duration?
- custom events AHHHHHHRGH
- Little help with Dynamic Text, please
- Set Cursor's Location?
- Is this a correct way to capture keys?
- coded Tweens mysteriously abort
- Google Adwords Flash banner requirements
- volume control
- Dynamic Text on a curve, Flash.
- textInput.restrict -- a better way???
- Whats wrong HERE?!
- Help with creating bubbles with Actionscript
- Accessing clips I attached?
- Actionscript 3.0 tutorial and a textfield creation problem
- AS Puzzle
- How do I use timers to repeat a function, and what else r they for?
- [as3] Motion Guides and Motion Code
- for loop + addChild. How to get unique instance names?
- Associative arrays: Array or Object?
- cs3 component creation
- What do I refer the mouse as? (instead of _xmouse in AS2)
- addChild then Tween them !-(Rain drops example)
- Why doesn't this code work (trying to do parent["b.b"+i] = new MovieClip();
- Changing movieclips depth??
- Dynamic Mc access to stage
- Function.apply()
- Actionscript 2 or 3?
- String.fromCharCode and keyCode
- Removing a DisplayObject from memory
- Checking if event listeners have been created...
- [as3] When and How to use ints
- Moving assets for a title game with AS3
- Declaring a shared object called "DB"
- [AS3] How do I load external images into a movie clip then add children?
- 3D Circular Menu
- Including exteral .as files and, why aren't my .as files using AS3?
- using the 'parent' property
- Error 1119
- Why AS3?
- Memory Optimization
- How do I rotate objects and still hold it's position or obstructability?
- movie clip with sub movies
- XML bug?
- XML Editor?
- Ever wondered what Graphic symbols really are?
- Has anyone made Google adWord Banners with AS3?
- Accessing atext field inside a movieclip & simple question
- duplicate movieclip in AS3, help plz
- TypeError: Error #1009 :(
- Text+Alpha help.
- Finding the class of an object
- how useful is a non hexadecimal RGB value
- Should I wait To Learn AS3
- Rotation impact in performance
- Flawless rollover buttons?
- [as3] Flash Memory Use: the Timeline
- How big should be a XML file.
- Roll-over area in actionscript.
- random sort with Arrays in AS 3.0
- removeChild does not remove certain movieClips. Please Help!
- array access notation in AS3
- Creating a list of objects on stage
- some begginer questionsabout Movieclips and Dictionary
- check varaibles from aspx file
- stopping movement in actionscript.
- [as3 in CS3] Does SWF asset embedding work?
- passing variables between swf files
- namespaces in xml with e4x
- How do i fix this coding?......
- Code isn't working.. player movement screwed up..
- forEach method
- unescape method change in ActionScript 3
- Customizing the ComboBox component
- Name of a Layer by script
- What object do I pass hitTestObject(...)?
- [as3] frame rate browser issues
- spreadshirt.com
- spectrum analyzer
- Unload/StopAll/Eliminate Sounds
- flash touchscreen kiosks
- Array item doesn't exist, so let's get stroppy... or not?
- AS3 event propagation thru alpha channel mask?
- Local reference is it clear on exit?
- Issue with gotoandplay
- referencing dynamic movieclip names?
- AS3 animation framework
- how would i use attachMovie in AS3?
- Saving Sprites or Shapes with ByteArray
- [as3] Drawing BitmapData to erase BitmapData
- Do I have to "Anti-Alias" ? sometimes I really need alias
- lib-linkaged-BitmapData ignores the width and height?
- Collision detection for a player with walls.. ahh!!
- Bug ?
- Talking between Classes
- Simple AS3 Animation Engine
- Good Third-Party AS3 Editors
- as3 equivalent of sortable list
- Nesting MovieClip Objects?
- AS3 Animation Issue
- XML: AppendChild
- how to make a instance delete it's self
- Color fill png dynamically?
- Random Organic Motion
- Multidimensional Array , Slice and Sort
- Change XML Variables on fly
- extends more than one class
- Can't load external .swf which us a DocumentClass?
- fscommand showmenu
- getting files from a folder
- Using 'Loader' in AS3
- hi, theCanadian
- Memory problem
- BUG: stageWidth/Height
- variables in a loaded swf - HELP
- URL Query String
- Embedding assets in a AS3 class with Flash CS3
- URLLoader issue - executing PHP
- Error #1023: Incompatible override.
- AS3 Memory Management Tips
- eval()
- reserve words in parameters
- Measuring performance in AS3
- fun with kuler
- class definitions
- widht & height of a mc = 100% ???
- Fullscreen mode not working
- How to code components in AS3?
- [FLASH+FLEX] call a method in a flex swf from as3
- Custom cursor trouble..?
- Security passing variables...
- Playing a random embedded sound
- Rotate TextField
- Sound.length Formatting
- as3 migration -- variables within MCs (beginner)
- How to save XML files with PHP to a server
- How I do to verify if a child was added or not, even it's not a null value
- Weird error #1010: I found a solution but...
- Send a variable to eventListener function
- Class Question
- how to load an swf and gotoAndPlay?
- sendToURL screws up .Net
- 31 fps still optimum frame rate in AS3?
- How do you track just the mouseY movement?
- Accessing the stage
- E4X doubt
- rigidbody vs circle collision?
- Depth: works in a difference way in Flash than on-line
- Can't Override Array Methods
- Covering buttons
- Best practice for preloaders
- When does one make classes?
- Add AS to a button.
- sprite or movie clip ?
- passing variables from eventListener to function AS3
- flash cs3 + vista + actionscript panel
- delete TextField instances
- Centering a linked MC in AS3
- AS2 to AS3 Problem
- triyng to get eventdispatcher trigger listener in multiple instances
- eventDispatcher help please !
- [Discussion] Comments helpful or hurtful?
- Quick Buttons
- xml total children
- targeting dynamically generated MCs
- SoundMixer ... is this a BUG or a FEATURE?
- AS3 Dynamic Text Kerning
- Depth stupid issue
|
http://www.kirupa.com/forum/archive/index.php/f-141-p-2.html
|
crawl-002
|
en
|
refinedweb
|
asctime
Transform binary date and time values
Description: The functions ctime, gmtime and localtime all take as an argument a time value representing the time in seconds since the Epoch (00:00:00 UTC, January 1, 1970; see reference:tzset). The function localtime uses reference:tzset to initialize time conversion information if reference:tzset has not already been called by the process. After filling in the tm structure, localtime sets the tm_isdst'th element of tzname to a pointer to an ASCII string that is the time zone abbreviation to be used with localtime's return value.
Example:
Example - Transform binary date and time values
Problem
The following example uses the time function to calculate the time elapsed (in seconds) since the Epoch, then localtime() to convert this value into a broken-down time, and finally asctime() to create a printable string from the broken-down time.
Workings
#include <time.h>
#include <stdio.h>

int main(void)
{
    time_t result;

    result = time(NULL);
    struct tm* brokentime = localtime(&result);
    printf("%s%ld secs since the Epoch\n", asctime(brokentime), (long)result);
    return(0);
}
Solution
Output:
Fri Jan 4 21:36:16 2008 17593385526992 secs since the Epoch
All the fields have constant width. The ctime_r function provides the same functionality as ctime except the caller must provide the output buffer
buf to store the result, which must be at least 26 characters long. The localtime_r and gmtime_r functions provide the same functionality as localtime and gmtime respectively, except the caller must provide the output buffer
result.
Example:
Example - Transform binary date and time values
Problem
We repeat the above example, this time using localtime_r:
Workings
#include <time.h>
#include <stdio.h>

int main(void)
{
    time_t result;
    struct tm brokentime;

    result = time(NULL);
    localtime_r(&result, &brokentime);
    printf("%s%ld secs since the Epoch\n", asctime(&brokentime), (long)result);
    return(0);
}
The asctime function converts the broken-down time in the structure tm pointed at by *tm to the form shown in the example above. The asctime_r function provides the same functionality as asctime except the caller provides the output buffer buf to store the result, which must be at least 26 characters long. The functions mktime and timegm convert the broken-down time in the structure pointed to by tm into a time value with the same encoding as that of the values returned by the reference:time function (that is, seconds from the Epoch, UTC). The mktime function interprets the input structure according to the current timezone setting (see reference:tzset). The timegm function interprets the input structure as representing Universal Coordinated Time (UTC). The original values of the tm_wday and tm_yday components of the structure are ignored. The field tm_gmtoff is the offset (in seconds) of the time represented from UTC, with positive values indicating east of the Prime Meridian.
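A small additional sketch showing mktime reversing localtime_r; it assumes only the headers used in the earlier examples:
#include <time.h>
#include <stdio.h>

int main(void)
{
    time_t now = time(NULL);
    struct tm broken;

    /* Break the time down in local time, then convert it back. */
    localtime_r(&now, &broken);
    time_t roundtrip = mktime(&broken);

    printf("original:   %ld\nround trip: %ld\n", (long)now, (long)roundtrip);
    return 0;
}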
|
http://www.codecogs.com/library/computing/c/time.h/ctime.php?alias=asctime
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
Before you can run your training application with Cloud Machine Learning Engine, you must package the application and stage it where the training service can access it. Note that Cloud ML Engine incurs charges to your account for the resources used.
Before you begin
Before you can move your training application to the cloud, you must complete the following steps:
- Configure your development environment, as described in the getting-started guide, and check which Cloud ML Engine runtime version you're using.
Using gcloud to package and upload your application (recommended)
The simplest way to package your application and upload it along with its dependencies is to use the gcloud tool. The same gcloud ml-engine jobs submit training command packages the application and submits the training job. The notes below describe the most important flags:
--staging-bucket specifies a Cloud Storage location where your training and dependency packages are staged. Your GCP project must have access to this Cloud Storage bucket, and the bucket should be in the same region where you run the job. See the available regions for Cloud ML Engine services. If you don't specify a staging bucket, Cloud ML Engine stages your packages in the location specified by the --job-dir flag.
--package-path specifies the local path to the root directory of your application. Refer to the recommended project structure.
- Flags intended for Cloud ML Engine, such as --module-name, --runtime-version, and --job-dir, must come before the empty -- flag. The Cloud ML Engine service interprets these flags.
- The --job-dir flag, if specified, must come before the empty -- flag, because Cloud ML Engine interprets this flag itself.
- Arguments after the empty -- flag are passed straight through: Cloud ML Engine passes --user_first_arg, --user_second_arg, and so on, through to your application.
You can find out more about the job-submission flags in the guide to running a training job
Working with dependencies
Dependencies are packages that you
import in your code. Your application may have many dependencies that it needs in order to work.
When you run a training job on Cloud ML Engine, your dependencies must be installed on the training instances; the standard place to declare them is your setup.py script. Cloud ML Engine uses pip to install your package on the training instances that it allocates for your job. The pip install command looks for configured dependencies and installs them.
Create a file called setup.py in the root directory of your application (one directory up from your trainer directory). When you use the gcloud command-line tool to submit your training job, it automatically uses your setup.py file. Cloud ML Engine uses pip install to install custom dependencies, so they can have standard dependencies of their own in their setup.py scripts.
If you use the gcloud tool to run your training job, you can specify local directories and the tool will stage them in the cloud for you. Run the gcloud ml-engine jobs submit training command:
Set the --package-path flag to specify your training application either as a path to the directory where your source code is stored or as the path to a built package.
Set the --packages flag to include the dependencies in a comma-separated list.
Each URI you include is the path to a package, formatted as a tarball (*.tar.gz) or as a wheel. Cloud ML Engine installs each package using pip install on every virtual machine it allocates for your training job.
The example below specifies packaged dependencies named dep1.tar.gz and dep2.whl (one each of the supported package types) along with a path to the application's sources:
gcloud ml-engine jobs submit training $JOB_NAME \
    --staging-bucket $PACKAGE_STAGING_PATH \
    --package-path /Users/mlguy/models/faces/trainer \
    --module-name $MAIN_TRAINER_MODULE \
    --packages dep1.tar.gz,dep2.whl \
    --region us-central1 \
    -- \
    --user_first_arg=first_arg_value \
    --user_second_arg=second_arg_value
Similarly, the example below specifies packaged dependencies named dep1.tar.gz and dep2.whl (one each of the supported package types), but with a built training application:

gcloud ml-engine jobs submit training $JOB_NAME \
    --staging-bucket $PACKAGE_STAGING_PATH \
    --job-dir $JOB_DIR \
    --packages trainer-0.0.1.tar.gz,dep1.tar.gz,dep2.whl \
    --module-name $MAIN_TRAINER_MODULE \
    --region us-central1 \
    -- \
    --user_first_arg=first_arg_value \
    --user_second_arg=second_arg_value

If you build the package yourself, the root directory of your application must contain a setup.py file that includes:
- dependency declarations such as 'docutils>=0.3'.
- packages set to find_packages().
- include_package_data set to True.
Run python setup.py sdist to create your package.
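A minimal setup.py along those lines; the package name, version, and dependency list are illustrative:
from setuptools import find_packages
from setuptools import setup

setup(
    name='trainer',
    version='0.1',
    install_requires=['docutils>=0.3'],
    packages=find_packages(),
    include_package_data=True,
    description='Training application package.'
)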
Recommended project structure
You can structure your training application in any way you like. However, the following structure is commonly used in Cloud ML Engine samples, and having your project's organization be similar to the samples can make it easier for you to follow the samples.
Use a main project directory, containing your setup.py file, with your main application module in a subdirectory named trainer. In the Cloud ML Engine samples, the trainer directory usually contains the following source files:
- task.py contains the application logic that manages the training job.
- model.py contains the TensorFlow graph code—the logic of the model.
- util.py, if present, contains code to run the training application.
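Putting the pieces above together, a typical layout looks like this (the project name is illustrative):
my_training_app/
    setup.py
    trainer/
        __init__.py
        task.py     # entry point; manages the training job
        model.py    # TensorFlow graph code
        util.py     # optional helpers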
If you use the gcloud tool to package your application, you don't need to create a setup.py or any __init__.py files. When you run gcloud ml-engine jobs submit training, you can set the --package-path flag to the path of your main project directory, or you can run the tool from that directory and omit the flag altogether.
Python modules
Your application package can contain multiple modules (Python files). You must identify the module that contains your application entry point. The training service runs that module by invoking Python, just as you would run it locally.
When you make your application into a Python package, you create a namespace. For example, if you create a package named
trainer, and your main module is called
task.py, you specify that package with the name
trainer.task. So, when running
gcloud ml-engine jobs submit training, set the
--module-name flag to
trainer.task.
Refer to the Python guide to packages for more information about modules.
Using the gcloud tool to upload an existing package
If you build your package yourself, you can upload it with the gcloud tool. The following example submits a training job using a built package that is in the same directory where you run the command. The main function is in a module called
task.py:
gcloud ml-engine jobs submit training $JOB_NAME \
    --staging-bucket $PACKAGE_STAGING_PATH \
    --job-dir $JOB_DIR \
    --packages trainer-0.0.1.tar.gz \
    --module-name $MAIN_TRAINER_MODULE \
    --region us-central1 \
    -- \
    --user_first_arg=first_arg_value \
    --user_second_arg=second_arg_value
Using the gcloud tool to use an existing package already in the cloud
If you build your package yourself and upload it to a Cloud Storage location, you can point gcloud at it:

gcloud ml-engine jobs submit training $JOB_NAME \
    --job-dir $JOB_DIR \
    --packages $PATH_TO_PACKAGED_TRAINER \
    --module-name $MAIN_TRAINER_MODULE \
    --region us-central1 \
    -- \
    --user_first_arg=first_arg_value \
    --user_second_arg=second_arg_value
Where $PATH_TO_PACKAGED_TRAINER is an environment variable that points to a package already staged in the cloud. You must upload your packages manually if you use the Cloud ML Engine API directly to start your training job. The easiest way to manually upload your package and any custom dependencies to your Cloud Storage bucket is to use the
gsutil tool:
gsutil cp /local/path/to/package.tar.gz gs://bucket/path/
However, if you can use the command line for this operation, you should just use
gcloud ml-engine jobs submit training to upload the packages as part of submitting your training job.
|
https://cloud.google.com/ml-engine/docs/tensorflow/packaging-trainer
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
Troubleshooting Exceptions: System.NullReferenceException
A NullReferenceException occurs when you try to use a method or property of a reference type (C#, Visual Basic) whose value is null. For example, you may have tried to use an object without first using the new keyword (New in Visual Basic), or tried to use an object whose value was set to null (Nothing in Visual Basic).
Most of the examples in this article use one or both of these classes:
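A stripped-down version of those classes, consistent with the fuller definitions shown later in the article:
public class Automobile
{
    public EngineInfo Engine { get; set; }
}

public class EngineInfo
{
    public double Size { get; set; }
    public string Power { get; set; }
}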
Any reference type variable can be null. Local variables, the properties of a class, method parameters, and method return values can all contain a null reference. Calling methods or properties of these variables when they are null causes a NullReferenceException. Specific cases:
A local variable or member field is declared but not initialized
A property or field is null
A method parameter is null
The return value of a method is null
An object in a collection or array is null
An object is not created because of a condition
An object passed by reference to a method is set to null
This simple error happens most frequently in Visual Basic code. Except in situations like declaring a variable to be passed as an out parameter, the C# compiler does not allow the use of a local reference variable unless it's initialized. The Visual Basic compiler generates a warning.
In the following C# code, the highlighted line generates this compiler error:
Use of unassigned local variable 'engine'
In the Visual Basic code, the highlighted line generates compiler warning BC42104:
Variable 'engine' is used before it has been assigned a value. A null reference exception could result at runtime.
And the line does throw a NullReferenceException when it runs.
Common causes of NullReferenceExceptions
The fields and properties of a class are automatically initialized to their default value when the class is created. The default value of a reference type is null (Nothing in Visual Basic). Calling member methods on a field or property when the field or property value is null causes a NullReferenceException.
In this example, the highlighted line throws a NullReferenceException because the Engine property of car is auto-initialized to null.
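A sketch of that example; the method name is illustrative:
public void NullReferenceFromUninitializedProperty()
{
    var car = new Automobile();          // car.Engine defaults to null
    Console.WriteLine(car.Engine.Size);  // NullReferenceException here
}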
A method parameter that is a reference type can be null (Nothing in Visual Basic). Calling member methods or properties on a parameter value that is null causes a NullReferenceException.
In this example, the highlighted line throws a NullReferenceException because BadEngineInfoPassedToMethod calls NullReferenceFromMethodParameter with a parameter that is null.
A method that returns a reference type can return null (Nothing in Visual Basic). Calling methods or properties of the returned reference type causes a NullReferenceException when the reference is null.
In this example, the highlighted line throws a NullReferenceException because the call to BadGetEngineInfo returns a null reference in the NullReferenceFromMethodParameter method.
A list or array of reference types can contain an item that is null. Calling methods or properties of a list item that is null causes a NullReferenceException.
In this example, the highlighted line in NullReferenceFromListItem() throws a NullReferenceException because the call to BadGetCarList returns an item that is null.
public Automobile[] BadGetCarList()
{
    var autos = new Automobile[10];
    for (int i = 0; i < autos.Length; i++)
    {
        if (i != 6)
        {
            autos[i] = new Automobile();
        }
    }
    return autos;
}

public void NullReferenceFromListItem()
{
    var cars = BadGetCarList();
    foreach (Automobile car in cars)
    {
        Console.WriteLine(car.ToString());
    }
}
If a reference type is initialized in a conditional block, the object is not created when the condition is false.
In this example the highlighted line in NullReferenceFromConditionalCreation throws a NullReferenceException because it initializes the engine variable only if the DetermineTheCondition() method returns true.
public bool DetermineTheCondition() { return false; } public void NullReferenceFromConditionalCreation() { EngineInfo engine = null; var condition = DetermineTheCondition(); if (condition) { engine = new EngineInfo(); engine.Power = "Diesel"; engine.Size = 2.4; } Console.WriteLine(engine.Size); }
When an object is passed as a parameter to a method by value (without use of the ref or out keyword in C# or the ByRef keyword in Visual Basic), the method can't change the memory location of the parameter—what the parameter points to—but it can change the properties of the object.
In this example, the NullPropertyReferenceFromPassToMethod method creates an Automobile object and initializes the Engine property. It then calls BadSwapCarEngine, passing the new object as the parameter. BadSwapCarEngine sets the Engine property to null, which causes the highlighted line in NullPropertyReferenceFromPassToMethod to throw a NullReferenceException.
When you pass a reference type as a parameter to a method by reference (using the ref or out keyword in C# or the ByRef keyword in Visual Basic), you can change the memory location that the parameter points to.
If you pass a reference type by reference to a method, the method can set the referenced type to null (Nothing in Visual Basic).
In this example, the highlighted line in NullReferenceFromPassToMethodByRef throws a NullReferenceException because the call to the BadEngineSwapByRef method sets the stockEngine variable to null.
Use data tips, the Locals window, and watch windows to see variable values
Walk the call stack to find where a reference variable is not initialized or set to null
Set conditional breakpoints to stop debugging when an object is null (Nothing in Visual Basic)
Rest the pointer on the variable name to see its value in a data tip. If the variable references an object or a collection, you can expand the data type to examine its properties or elements.
Open the Locals window to examine the variables that are active in the current context.
Use a watch window to focus on how a variable changes as you step through the code.
Finding the source of a null reference exception during development
The Visual Studio Call Stack window displays a list of the names of methods that have not completed when the debugger stops at an exception or breakpoint. You can select a name in the Call Stack window and choose Switch to frame to change the execution context to the method and examine its variables.
You can set a conditional breakpoint to break when a variable is null. Conditional breakpoints can be helpful when the null reference does not occur often—for example, when an item in a collection is null only intermittently. Another advantage of conditional breakpoints is that they enable you to debug an issue before you commit to a particular handling routine.
An invariant is a condition that you are sure is true. A Debug.Assert (System.Diagnostics) statement is called only from debug builds of your apps and is not called from release code. If the invariant condition is not true, the debugger breaks at the Assert statement and displays a dialog box. Debug.Assert provides a check that the condition has not changed while you are developing the app. An assertion also documents for others who read your code that the condition must always be true.
For example, the MakeEngineFaster method assumes that its engine parameter will never be null because its only caller method (TheOnlyCallerOfMakeEngineFaster) is known to fully initialize the EngineInfo. The assert in MakeEngineFaster documents the assumption and provides a check that the assumption is true.
If someone adds a new caller method (BadNewCallerOfMakeEngineFaster) that does not initialize the parameter, the assert is triggered.
private void TheOnlyCallerOfMakeEngineFaster() { var engine = new EngineInfo(); engine.Power = "GAS"; engine.Size = 1.5; MakeEngineFaster(engine); } private void MakeEngineFaster(EngineInfo engine) { System.Diagnostics.Debug.Assert(engine != null, "Assert: engine != null"); engine.Size *= 2; Console.WriteLine("The engine is twice as fast"); } private void BadNewCallerOfMakeEngineFaster() { EngineInfo engine = null; MakeEngineFaster(engine); }
Avoiding NullReferenceExceptions
To avoid many NullReferenceExceptions, fully initialize reference types as close to their creation as possible.
Add full initialization to your own classes
If you control the class that throws a NullReferenceException, consider fully initializing the object in the type’s constructor. For example, here’s a revised version of the example classes that guarantees full initialization:
public class Automobile
{
    public EngineInfo Engine { get; set; }

    public Automobile()
    {
        this.Engine = new EngineInfo();
    }

    public Automobile(string powerSrc, double engineSize)
    {
        this.Engine = new EngineInfo(powerSrc, engineSize);
    }
}

public class EngineInfo
{
    public double Size { get; set; }
    public string Power { get; set; }

    public EngineInfo()
    {
        // the base engine
        this.Power = "GAS";
        this.Size = 1.5;
    }

    public EngineInfo(string powerSrc, double engineSize)
    {
        this.Power = powerSrc;
        this.Size = engineSize;
    }
}
Check for null (Nothing in Visual Basic) before you use a reference type
Use try-catch-finally (Try-Catch-Finally in Visual Basic) to handle the exception
It's usually better to avoid a NullReferenceException than to handle it after it occurs, but that isn't always possible.
Here are two ways to handle NullReferenceException in release code.
Using an explicit test for null before you use an object avoids the performance penalty of try-catch-finally constructs. However, you still have to determine and implement what to do in response to the uninitialized object.
In this example, the CheckForNullReferenceFromMethodReturnValue tests the return value of the BadGetEngineInfo method. If the object is not null, it is used; otherwise, the method reports the error.
public EngineInfo BadGetEngineInfo()
{
    EngineInfo engine = null;
    return engine;
}

public void CheckForNullReferenceFromMethodReturnValue()
{
    var engine = BadGetEngineInfo();
    if (engine != null)
    {
        // modify the info
        engine.Power = "DIESEL";
        engine.Size = 2.4;
    }
    else
    {
        // report the error
        Console.WriteLine("BadGetEngine returned null");
    }
}
Using the built-in exception handling constructs (try, catch, finally in C#, Try, Catch, Finally in Visual Basic) offers more options for dealing with NullReferenceExceptions than checking whether an object is not null.
In this example, CatchNullReferenceFromMethodCall uses two asserts to confirm the assumption that its parameter contains a complete automobile, including an engine. In the try block, the highlighted line throws a NullReferenceException because the call to RarelyBadEngineSwap can destroy the car's Engine property. The catch block captures the exception, writes the exception information to a file, and reports the error to the user. In the finally block, the method insures that the state of the car is no worse than when the method began.
public void RarelyBadSwapCarEngine(Automobile car)
{
    if ((new Random()).Next() == 42)
    {
        car.Engine = null;
    }
    else
    {
        car.Engine = new EngineInfo("DIESEL", 2.4);
    }
}

public void CatchNullReferenceFromMethodCall(Automobile car)
{
    System.Diagnostics.Debug.Assert(car != null, "Assert: car != null");
    System.Diagnostics.Debug.Assert(car.Engine != null, "Assert: car.Engine != null");

    // save current engine properties in case they're needed
    var enginePowerBefore = car.Engine.Power;
    var engineSizeBefore = car.Engine.Size;

    try
    {
        RarelyBadSwapCarEngine(car);
        var msg = "Swap succeeded. New engine power source: {0} size {1}";
        Console.WriteLine(msg, car.Engine.Power, car.Engine.Size);
    }
    catch (NullReferenceException nullRefEx)
    {
        // write exception info to log file
        LogException(nullRefEx);
        // notify the user
        Console.WriteLine("Engine swap failed. Please call your customer rep.");
    }
    finally
    {
        if (car.Engine == null)
        {
            car.Engine = new EngineInfo(enginePowerBefore, engineSizeBefore);
        }
    }
}
Design Guidelines for Exceptions (.NET Framework Design Guidelines)
Handling and Throwing Exceptions (.NET Framework Application Essentials)
How to: Receive First-Chance Exception Notifications (.NET Framework Development Guide)
How to: Handle Exceptions in a PLINQ Query (.NET Framework Development Guide)
Exceptions in Managed Threads (.NET Framework Development Guide)
Exceptions and Exception Handling (C# Programming Guide)
Exception Handling Statements (C# Reference)
Try...Catch...Finally Statement (Visual Basic)
Exception Handling (Task Parallel Library)
Exception Handling (Debugging)
Walkthrough: Handling a Concurrency Exception (Accessing Data in Visual Studio)
How to: Handle Errors and Exceptions that Occur with Databinding (Windows Forms)
Handling exceptions in network apps (XAML) (Windows)
|
https://msdn.microsoft.com/en-us/library/sxw2ez55.aspx?cs-save-lang=1&cs-lang=csharp
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
sandbox/explosion.c
Two-dimensional explosion
We solve the Euler equations for a compressible gas.
#include "compressible.h"
We make boundary conditions free outflow.
w.y[top] = neumann(0);
w.y[bottom] = neumann(0);
w.x[left] = neumann(0);
w.x[right] = neumann(0);
The domain spans [-1,1] × [-1,1].
#define LEVEL 7

int main() {
  origin (-1, -1);
  size (2.);
  init_grid (1 << LEVEL);
  run();
}
Initial conditions come from Toro’s book (Riemann Solvers and Numerical Methods for Fluid Dynamics, 3rd Edition, Springer Ed.), Chapter 17 section 17.1.1, and are given in terms of density (ρ), pressure (p) and velocity (u), both at the left and right side of the discontinuity placed at r = 0.4.
event init (t = 0) {
  double R = 0.4 ;
  double rhoL = 1., rhoR = 0.125 ;
  double VmL = 0.0, VmR = 0.0 ;
  double pL = 1.0, pR = 0.1 ;
Left and right initial states for ρ, w and energy E.
foreach() {
  double r = sqrt(sq(x) + sq(y));
  double p;
  if (r <= R) {
    ρ[] = rhoL;
    w.x[] = w.y[] = VmL;
    p = pL;
  }
  else {
    ρ[] = rhoR;
    w.x[] = w.y[] = VmR;
    p = pR;
  }
  E[] = ρ[]*sq(w.x[])/2. + p/(gammao - 1.);
  w.x[] *= x*ρ[]/r;
  w.y[] *= y*ρ[]/r;
}
}

event print (t = 0.25) {
At t = 0.25 we output the values of ρ and the normal velocity as functions of the radial coordinate.
foreach() {
  double r = sqrt(sq(x) + sq(y));
  double wn = (w.x[]*x + w.y[]*y)/r;
  printf ("%g %g %g\n", r, ρ[], wn/ρ[]);
}
For reference we also output a cross-section at y = 0.
for (double x = 0; x <= 1; x += 1e-2)
  fprintf (stderr, "%g %.4f %.4f\n", x,
           interpolate (rho, x, 0.), interpolate (w.x, x, 0.));
}
On quadtrees, we adapt the mesh by controlling the error on the density field.
#if QUADTREE
event adapt (i++) {
  adapt_wavelet ({ρ}, (double[]){1e-5}, LEVEL + 1);
}
#endif
Results
Results are presented in terms of ρ and normal velocity for Cartesian (7 levels) and adaptive (8 levels) computations. The numerical results compare very well with Toro’s numerical experiments.
Radial density profile
Normal velocity
|
http://basilisk.fr/sandbox/explosion.c
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
Architecting Your App in Ext JS 4, Part 1
In this article, we’ll take a look at a popular application and discuss how we might architect the user interface to create a solid foundation.
Code Organization
Application architecture is as much about providing structure and consistency as it is about actual classes and framework code. Building a good architecture unlocks a number of important benefits:
- Every application works the same way so you only have to learn it once
- It’s easy to share code between apps because they all work the same way
- You can use Ext JS build tools to create optimized versions of your applications for production use
In Ext JS 4, we have defined conventions that you should consider following when building your applications — most notably a unified directory structure. This simple structure places all classes into the app folder, which in turn contains sub-folders to namespace your models, views, controllers and stores.
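A typical layout following that convention looks like this (the application name and file names are illustrative):
index.html
app.js
app/
    controller/
    model/
    store/
    view/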
While Ext JS 4 offers best practices on how to structure your application, there’s room to modify our suggested conventions for naming your files and classes. For example, you might decide that in your project you want to add a suffix to your controllers with “Controller,” e.g. “Users” becomes “UsersController.” In this case, remember to always add a suffix to both the controller file and class. The important thing is that you define these conventions before you start writing your application and consistently follow them. Finally, while you can call your classes whatever you want, we strongly suggest following our convention for the names and structure of folders (controller, model, store, view). This will ensure that you get an optimized build using our SDK Tools beta.
Striking a Balance
Views
Splitting up the application’s UI into views is a good place to start. Often, you are provided with wireframes and UI mockups created by designers. Imagine we are asked to rebuild the (very attractive) Pandora application using Ext JS, and are given the following mockup by our UI Designer.
What we want to achieve is a balance between the views being too granular and too generic. Let’s start by seeing what happens if we divide our UI into too many views.
Splitting up the UI into too many small views will make it difficult to manage, reference and control the views in our controllers. Also, since every view will be in its own file, creating too many views might make it hard to locate the view file where a piece of the UI or view logic is defined.
On the other hand, we don’t want our views to be too generic because it will impact our flexibility to change things.
In this scenario, each one of our views has been overly simplified. When several parts of a view require custom view-logic, the view class will end up having too many responsibilities, resulting in the view class becoming harder to maintain. In addition, when the designers change their mind about the arrangement of the UI, we will end up having to refactor our view definition and view logic; which can get tedious.
The right balance is achieved when we can easily rearrange the views on the page without having to refactor them every time. For example, we want to make the Ad a separate view, so we can easily move it around or even remove it later.
In this version, we’ve separated our UI by the roles of each view. Once you have a general idea of the views that will make up your UI, you can still tweak the granularity when you’re actually implementing them. Sometimes you may find that two views should really become one, or a view is too generic and should be split into multiple views, but it helps to start out with a good base. I think we’ve done that here.
Models
Now that we have the basic structure of our views in place, it’s time to look at the models. By looking at the types of dynamic data in our UI, we can get an idea of the different models needed for our application.
We’ve decided to use only two models — Song and Station. We could have defined two more models called Artist and Album. However, just as with views, we don’t want to be too granular when defining our models. In this case, we don’t have to separate artist and album information because the app doesn’t allow the user to select a specific song by a given artist. Instead, the data is organized by station, the song is the center point, and the artist and album are properties of the song. That means we’re able to combine the song, artist and album data into one model. This greatly simplifies the data side of our app. It also simplifies the API that we have to implement on the server-side because we don’t have to load individual artists or albums. To summarize, for this example, we’ll only have two models — Song and Station.
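A sketch of what the Song model might look like; the application namespace and field names are illustrative:
Ext.define('MyApp.model.Song', {
    extend: 'Ext.data.Model',

    // Artist and album are kept as plain fields of the song,
    // rather than as separate models.
    fields: ['id', 'name', 'artist', 'album', 'stationId']
});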
Stores
Now that we’ve thought about the models our application will use, lets do the same for stores.
Figuring out the different stores you need is often relatively easy. A good strategy is to determine all the data bound components on the page. In this case, we have a list with all of the user’s favorite stations, a scroller with the recently played songs, and a search field that will display search results. Each of these views will need to be bound to stores.
Controllers
There are several ways you can distribute the application’s responsibilities across your application’s controllers. Let’s start by thinking about the different controllers we need in this example.
Here we have two basic controllers — a SongController and a StationController. Ext JS 4 allows you to have one controller that can control several views at the same time. Our StationController will handle the logic for both creating new stations as well as loading the user’s favorite stations into the StationsList view. The SongController will take care of managing the SongInfo view and RecentSong store as well as the user’s actions of liking, disliking, pausing and skipping songs. Controllers can interact with each other by firing and listening for application events. While we could have created additional Controllers, one for managing playback and another for searching stations, I think we’ve found a good separation of responsibilities.
Measure Twice, Cut Once
I hope that sharing our thoughts on the importance of planning your application architecture prior to writing code was helpful. We find that talking through the details of the application helps you to build a much more flexible and maintainable architecture.
|
http://docs.sencha.com/extjs/4.1.3/?_escaped_fragment_=/guide/mvc_pt1
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Particularly when specifying the identity of an object, sequences are a very useful facility.
DataNucleus supports the automatic assignment of sequence values for
object identities. However such sequences may also have use when a user wishes to assign such
identity values themselves, or for other roles within an application. JDO 2 defines an interface
for sequences for use in an application - known as Sequence.
There are 2 forms of "sequence" available through this interface - the ones that DataNucleus provides
utilising datastore capabilities, and ones that a user provides using something known as a
"factory class".
DataNucleus internally provides 2 forms of sequences. When the underlying datastore supports native sequences, then these can be leveraged through this interface. Alternatively, where the underlying datastore doesn't support native sequences, then a table-based incrementing sequence can be used. The first thing to do is to specify the Sequence in the Meta-Data for the package requiring the sequence. This is done as follows
<jdo>
    <package name="MyPackage">
        <class name="MyClass">
            ...
        </class>
        <!-- datastore-sequence values below are illustrative -->
        <sequence name="ProductSequence" datastore-sequence="PRODUCT_SEQ" strategy="contiguous"/>
        <sequence name="ProductSequenceNontrans" datastore-sequence="PRODUCT_SEQ_NONTRANS" strategy="nontransactional"/>
    </package>
</jdo>
So we have defined two Sequences for the package MyPackage. Each sequence has a symbolic name that is referred to within JDO (within DataNucleus), and it has a name in the datastore. The final attribute represents whether the sequence is transactional or not.
All we need to do now is to access the Sequence in our persistence code in our application. This is done as follows
PersistenceManager pm = pmf.getPersistenceManager(); Sequence seq = pm.getSequence("MyPackage.ProductSequence");
and this Sequence can then be used to provide values.
long value = seq.nextValue();
Please be aware that when you have a Sequence declared with a strategy of "contiguous" this means "transactional contiguous" and that you need to have a Transaction open when you access it.
JDO3.1 allows control over the allocation size (default=50) and initial value (default=1) for the sequence. So we can do
<sequence name="ProductSequence" datastore-sequence="PRODUCT_SEQ" strategy="contiguous" allocation-
which will allocate 10 new sequence values each time the allocated sequence values is exhausted.
It is equally possible to provide your own Sequence capability using a factory class. This is a class that creates an implementation of the JDO Sequence. Let's give an example of what you need to provide. Firstly you need an implementation of the JDO Sequence interface, so we define ours like this
public class SimpleSequence implements Sequence
{
    String name;
    long current = 0;

    public SimpleSequence(String name)
    {
        this.name = name;
    }

    public String getName()
    {
        return name;
    }

    public Object next()
    {
        current++;
        return new Long(current);
    }

    public long nextValue()
    {
        current++;
        return current;
    }

    public void allocate(int arg0)
    {
    }

    public Object current()
    {
        return new Long(current);
    }

    public long currentValue()
    {
        return current;
    }
}
So our sequence simply increments by 1 each call to next(). The next thing we need to do is provide a factory class that creates this Sequence. This factory needs to have a static newInstance method that returns the Sequence object. We define our factory like this
package org.datanucleus.samples.sequence;

import javax.jdo.datastore.Sequence;

public class SimpleSequenceFactory
{
    public static Sequence newInstance()
    {
        return new SimpleSequence("MySequence");
    }
}
and now we define our MetaData like this
<jdo>
    <package name="MyPackage">
        <class name="MyClass">
            ...
        </class>
        <sequence name="ProductSequenceFactory" strategy="nontransactional" factory-class="org.datanucleus.samples.sequence.SimpleSequenceFactory"/>
    </package>
</jdo>
So now we can call
PersistenceManager pm = pmf.getPersistenceManager(); Sequence seq = pm.getSequence("MyPackage.ProductSequenceFactory");
|
http://www.datanucleus.org/products/accessplatform_4_2/jdo/sequences.html
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Retain/recall semantics enable selected data on the call stack to be accessible from descendant frames. They are similar to dynamic scoping and the structural opposite of throw/catch. For many problems that developers currently use global data to address, retain/recall can be more convenient and flexible. The approach is safe for concurrent, recursive, and re-entrant code. In this article, I explain retain/recall and give an efficient implementation of the idea in C++ using the Posix and Win32 threading libraries.
Introducing Retain/Recall
In a typical modern programming language, information comes from:
- local (automatic) data
- passed (argument) data
- instance data if the code is part of an instance of an object.
- class (static) data if the code is part of a class, but not an instance.
- global data.
While global and class (static) data can be used from anywhere, they represent a shared resource that can break recursion, re-entrancy, and multithreading. Other items, though, have more limited accessibility, and a software engineer occasionally discovers that a program requires data to which there is no easy access.
For example, there may be a decoding library that was designed for internal algorithms, but now needs to have access to a system key store, which needs some kind of authorization context in order to open. At this point, the developer typically has two options (both with significant drawbacks):
- Rewrite the software or infrastructure so that the necessary data does get to that point (as instance or argument data). This may be impossible or create an unwieldy solution.
- Add global or class (static) data that is more universally accessible.
Error conditions used to suffer an analogous fate. Before throw/catch, a new error state could be handled in two ways:
- Return a new kind of error status to the caller, and upgrade the infrastructure so this potential error status traversed back to where it could be managed.
- Add global or class (static) data that recorded the error state. Hope that all the intervening code did not really break something before the error state was noticed and managed.
The throw/catch construct addresses this by allowing error conditions to safely work back through the call stack to a handler, even when the condition was not foreseen by the programmer who wrote the intervening text. In a sense, the throw/catch extends the visibility model so that exceptions (data about exceptional situations) can reach arbitrarily back in the call stack to address them.
This allows a third resolution to error handling; namely, throw an exception where it occurs and catch it (lower in the call stack) where you can handle it. This approach addresses the error handling problem without modifying infrastructure between the throw and catch frames or introducing global state. If there are more than one acceptable catch clauses for a given error, the latest one on the stack applies. Thus, nesting and recursion work fine and is there no global state to hinder re-entrance or multithreading.
If passing data from the current frame back in ancestral frames is such a good thing, what about accessing data from ancestral frames from code in the current frame? We call this kind of data visibility retained visibility because data values are retained (remembered) in a frame, which can then be recalled from any later frame. Figure 1 shows how throw/catch and retain/recall are related.
Figure 1: Comparing throw/catch (left) with retain/recall (right). The stack grows upward in the illustration.
In Figure 1, a throw unwinds the stack to the first matching catch, but the text for a recalled value executes in the current frame. Note that a thrown value is caught once, but a retained value can be recalled zero or more times. Note that, for throw/catch,
text1 may rethrow a in order to activate
text2(). Similarly, for retain/recall, a deepening recall would give access to
a2, which is hidden from a simple recall by
a1. Having retained visibility allows for a new solution to the data access problem: Retain a value where you know it; recall it (higher in the call stack) when you need it.
Retain/Recall Semantics
The semantics for retain/recall in a language that supports retained visibility are as follows:
- Retain a value for a type. This must be a statement within executable text. From the point of retention to the forget (explicit) or the end of the enclosing block/frame (implicit), we say that value is retained as that type, and structurally is a new block. It should be possible to retain any value derivable from the given type.
- Forget a value for a type. This must happen after a retain for the same type and value in the same block. It may be implicit at the end of the block containing a retain (but in the reverse order in the case of multiple retains).
- Retain a value for a type if a condition is true. Retain as above if the condition is true. A language may use a short-circuited evaluation of value in cases where the condition is false ( so the value is unused).
- Recall a value for a type. This must be an expression within executable text. A language may match this to a retain narrowly or broadly. A recall looks successively through each ancestral block/frame; the first block/frame that retained a value as the same type (narrow) or a derived type (broad) resolves the recalled value. Note that a given retain can resolve many recalls. If there is no matching retain, the recall fails.
- Deepening recall of a given type. There must be a mechanism for obtaining the sequence of retained values matching a given recall in frame deepening order.
- Failure. It must be possible to determine if a recall or the next step in a deepening recall will fail.
A broad matching of a recall to an ancestral retain would keep retain/recall in structural symmetry with throw/catch, since a catch will typically match a throw of a derived type value. However, this may not be in the best interest of the programmer. Allowing catch-alls is a convenience when trying to account for any number of error conditions, as might be the case nearer the root of the call tree. But recalls tend toward the branches of the call tree, where a recall of something specific is likely to be more useful. It also reduces the possibility of a hidden retention, where the desired value is retained, but a retention of a different (but derivable) type effectively hides it from recall (but not a deepening recall).
Implementing Retained Visibility in C++
Figure 2 illustrates how the retain of a given type can be implemented for a given thread. The structure has to be repeated for each thread and each type for which there are retained values.
Figure 2: Retained structure.
Each frame that has a retain for the given type is part of a linked list on the stack.
s_current is a thread-local value that refers to the top most element in this list. Each retain keeps the address of the value it is retaining as. To recall a value, we only need to look at the
m_as field of the
s_current record.
Retain-if records where the condition is false are skipped in the linked list.
For the sake of simplicity, consider the following retain class and recall function to support retained values of type
T in a single-threaded environment. For the nonce, I have ignored template, access, and initialization notational details.
class retain {
    static retain* s_current=0;   // initial null/0 value
    retain* m_previous;           // previous retained
    T* m_as;                      // what this retains as

    retain(T* as) {
        m_previous=s_current;
        s_current=this;
        m_as=as;
    }

    ~retain() {
        s_current=m_previous;
    }
};

bool retained() {
    return retain::s_current != 0;
}

T* recall() {
    return retain::s_current->m_as;
}
Now consider the following test of retain/recall (used as a template):
#include <assert.h>
#include "retain.hpp"

void inc() {
    int *p=recall<int>();
    ++(*p);
}

int main() {
    int x=0,y=0;
    {
        auto retain<int> as(&x);
        inc();
        assert(x == 1);
        {
            auto retain<int> as(&y);
            inc();
            assert(y == 1);
        }
        inc();
        assert(x == 2);
    }
    return 0;
}
There is a static value, retain, for the address of the current retain record, and the construction of a new retain instance creates two pointer values: one that stores the previous retain record, and one pointer to what the record is retaining, as. Recalling is simply returning the as pointer from the retained static value. In the multithreaded case, the ideas are all the same, except that the static s_current value is kept in thread-local storage instead. The header file retain.hpp defines a thread-safe version of this idea as a template using Posix or Win32 thread-local storage. See the section on using the reference C++ Posix/Win32 implementation toward the end of this article for an explanation of the various APIs that are based on this approach.
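The same linked-list structure also makes the deepening recall requirement straightforward: walking the chain of retain records from s_current through each m_previous pointer yields every retained value from the innermost frame outward. The following is only a sketch built on the simplified single-threaded class above (so template and access details are again ignored, and T stands for the retained type):

#include <vector>

std::vector<T*> recall_all() {
    std::vector<T*> values;
    // Walk the per-type chain of retain records, innermost frame first.
    for (retain* r = retain::s_current; r != 0; r = r->m_previous)
        values.push_back(r->m_as);
    return values;
}

A recall that is about to fail is then simply one for which this sequence is empty, which matches the retained() check shown earlier.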
|
http://www.drdobbs.com/cpp/access-data-items-in-ancestor-stack-fram/240155450?cid=SBX_ddj_related_news_default_why_im_not_an_architect&itc=SBX_ddj_related_news_default_why_im_not_an_architect
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Here's the code:
with open("input.txt", "r") as f:
text = f.read()
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
res = {}
kol = 0
for buk in alphabet:
if buk in text:
kol += 1
if kol > 0:
for bukwa in text:
if bukwa in alphabet:
if bukwa not in res:
res[bukwa.upper()] = text.count(bukwa)
elif bukwa not in alphabet:
if bukwa not in res:
res[bukwa.upper()] = 0
res = sorted(res)
with open("output.txt", "w") as f:
for key in res:
f.write(key + " " + str(res[key]))
if kol == 0:
with open("output.txt", "w") as f:
f.write(-1)
Traceback (most recent call last):
File "/home/tukanoid/Desktop/ejudge/analiz/analiz.py", line 23, in <module>
f.write(key + " " + str(res[key]))
TypeError: list indices must be integers or slices, not str
The line:
res = sorted(res)
isn't returning what you think it is. Calling sorted() on a dictionary sorts its keys and returns them as a plain list. When you then do res[key] inside the context manager, you're indexing that list with a string, resulting in the error.
If you want ordering in your dictionary you can do it in one of two ways:
Give the sorted key list its own name:
sorted_keys = sorted(res)
and then iterate through those keys while still indexing into the dict res.
Or, build an OrderedDict from the sorted items and then iterate through its members as you would with a normal dict:
from collections import OrderedDict

# -- skipping rest of code --

# in the context manager
for key, val in OrderedDict(sorted(res.items())).items():
    f.write(key + " " + str(val))
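Putting the first approach together, a minimal corrected version of the writing code might look like this (assuming the counting code above is left unchanged and res is still the dictionary it builds):

sorted_keys = sorted(res)                # a sorted list of the keys

with open("output.txt", "w") as f:
    for key in sorted_keys:
        f.write(key + " " + str(res[key]) + "\n")   # res is still the dict

if kol == 0:
    with open("output.txt", "w") as f:
        f.write("-1")                    # write() needs a string, not an int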
|
https://codedump.io/share/D9ss1jxeAZfZ/1/typeerror-list-indices-must-be-integers-or-slices-not-str-dictionary-python
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Elasticsearch Testing & QA: Testing Levels of Elasticsearch
We're pleased to bring you our second article on our testing and QA processes for Elasticsearch. If you missed last Thursday's piece, Isabel told us all about Elasticsearch Continuous Integration. We'll be bringing you our final installment next Thursday, so stay tuned to this space!
"Works on my machine" is a phrase that became famous for software projects lacking automated testing infrastructure. Even today, when most checks are done automatically on an integration test server, it's still crucial to be able to reproduce bugs and test features on your local box. Ideally, those should be the same tests that are being run by the CI environment (or some stripped down version thereof).
Testing Layers
Elasticsearch tests check the code base from multiple perspectives. Traditional unit tests are the usual check that core algorithm implementations are correct and all methods behave the way they should.
One level up, integration tests run against a locally running cluster, making sure all pieces of the application work nicely together and can be interacted with through the Java Client API. REST tests make sure all REST endpoints work according to their specification.
Backwards compatibility tests are a special case that were introduced recently. Instead of running some test against a cluster containing nodes of only one Elasticsearch version, a previous release can be downloaded, installed and started. Tests then run against a mixed node cluster making sure that everything works as expected and is compatible between releases.
Testing the Elasticsearch Java Layer
Elasticsearch attacks testing from a bunch of different angles. Java code is tested at more than just the unit test level; the Java Client API is also checked by integration tests that pull up complete Elasticsearch clusters to run requests against.
Essentially, the goal for integration tests (based on the class ElasticsearchIntegrationTest) is to make sure Java API calls work against a full running cluster. It is cheap to pull up an example Elasticsearch cluster in terms of CPU power and memory needed, even on an ordinary laptop. When extending the above test class, it also becomes simple in terms of development overhead. The cluster is pulled up for you and reused between tests unless you specify something else in the test's ClusterScope annotation.
Looking at an example integration test, let's walk through the most important annotations and features that make writing Elasticsearch integration tests so trivial:
01 // make sure all tests in the test suite run on a separate test cluster as we will modify the
02 // cluster configuration
03 @ElasticsearchIntegrationTest.ClusterScope(scope = ElasticsearchIntegrationTest.Scope.SUITE)
04 public class TemplateQueryTest extends ElasticsearchIntegrationTest {
05
06     @Before
07     public void setup() throws IOException {
08         // create an index, make sure it is ready for indexing and add documents to it
09         createIndex("test");
10         ensureGreen("test");
11
12         index("test", "testtype", "1", jsonBuilder().startObject().field("text", "value1").endObject());
13         index("test", "testtype", "2", jsonBuilder().startObject().field("text", "value2").endObject());
14         refresh();
15     }
16
17     // for our test we want to make sure the config path of the cluster actually points
18     // to the test resources that we provide - this is the cluster modification referred
19     // to earlier
20     @Override
21     public Settings nodeSettings(int nodeOrdinal) {
22         return settingsBuilder().put(super.nodeSettings(nodeOrdinal))
23             .put("path.conf", this.getResource("config").getPath()).build();
24     }
25
26     @Test
27     public void testTemplateInBody() throws IOException {
28         Map<String, Object> vars = new HashMap<>();
29         vars.put("template", "all");
30
31         TemplateQueryBuilder builder = new TemplateQueryBuilder(
32             "{\"match_{{template}}\": {}}\"", vars);
33
34         // the search client to use in the test comes pre-configured as part of the
35         // integration test
36         SearchResponse sr = client().prepareSearch().setQuery(builder)
37             .execute().actionGet();
38
39         // specific assertions make checks very simple
40         assertHitCount(sr, 2);
41     }
In our example, line 03 defines the cluster scope of that test to be only for the test suite. This makes sense if you change the cluster configuration, e.g. hard setting a specific configuration option like we do here in line 22 for the configuration path of the cluster.
Starting in line 06, the setup method simply sets up an index, makes sure it is green before modifying it and adds a couple of test documents to it.
As shown in line 36, the client to use is available as part of the integration test, all pre-configured and ready to use. So are helper assertions like the one in line 40 that make it easy to check the state of results.
In addition to regular integration tests, Elasticsearch also tests backwards compatibility for those versions that should be backwards compatible. Essentially this is achieved in a similar way to how integration tests work. A cluster is started for the test, and the release to check backwards compatibility against is downloaded and installed. Then, for each test, a random number of nodes from the comparison release is added to the test cluster and requests are then executed against this mixed node cluster. This way, we can automatically verify that changes do not break backwards compatibility where they shouldn't, and at the per commit level if needed.
Testing the REST Layer
The Elasticsearch REST API is defined in its own API specification. Based on this spec, tests can be defined declaratively in YAML.
The snippet below defines, in a concise way, a test to check the templating based search query. Line 01 to 04 defines the query to execute. Based on the specification for search requests, this snippet can automatically be turned into a valid GET URL and request body.
01 - do:
02     search_template:
03         body: { "template" : { "query": { "term": { "text": { "value": "{{template}}" } } } },
04                 "params": { "template": "value1" } }
05
06 - match: { hits.total: 1 }
Line 06 then defines the expected result. In this case only the number of hits returned is being checked.
Code Checks Built into Each Compile Run
Tests aren't the only way code quality is ensured within Elasticsearch. On each Maven build, we check for the usage of "forbidden Java APIs." In case there are known faster versions of the same API functionality, the slower API call is detected in the code base, causing the build to fail. Another example would be calls that send output to STDOUT (like calls to System.out.println that are usually a sign of forgotten debug statements), instead of using the logging framework to communicate messages. Also there are certain functions that are downright dangerous to use if one cares about compatibility across systems (think using the default charset of the machine the code is running on instead of explicitly defining which charset is supposed to be used).
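To make the idea concrete, here is an illustrative example (not taken from the Elasticsearch build itself) of the kind of Java call the forbidden-APIs check is meant to catch, together with the preferred alternative:

import java.nio.charset.StandardCharsets;

public class CharsetExample {
    public static void main(String[] args) {
        byte[] bytes = "elasticsearch".getBytes(StandardCharsets.UTF_8);

        // Flagged: silently depends on the default charset of the machine running the code
        String decodedImplicit = new String(bytes);

        // Preferred: make the charset explicit so the result is identical across systems
        String decodedExplicit = new String(bytes, StandardCharsets.UTF_8);

        // A stray call like this would also be flagged as a forgotten debug statement;
        // messages should go through the logging framework instead.
        System.out.println(decodedImplicit + " / " + decodedExplicit);
    }
}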
For releases, checks are even more restrictive. During development broken tests can be disabled and marked with the special annotation "AwaitsFix." In the case of cutting a release, the AwaitsFix broken tests will fail the build, telling the release manager that there is still functionality that is not yet working.
In our final installment of this series, we'll cover Elasticsearch's randomized testing framework.
|
https://www.elastic.co/blog/elasticsearch-testing-qa-testing-levels-elasticsearch
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Readers/writer locks allow many threads to have simultaneous read-only access to data, while allowing only one thread at a time to have write access; they are generally used to protect data that is frequently searched.
Readers/writer locks can synchronize threads in this process and other processes if they are allocated in writable memory and shared among cooperating processes (see mmap(2)), and are initialized for this purpose.
Additionally, readers/writer locks must be initialized prior to use. rwlock_init() initializes the readers/writer lock pointed to by rwlp and sets its state to unlocked.
type may be one of the following:
USYNC_PROCESS The readers/writer lock can synchronize threads in this process and other processes. The readers/writer lock should be initialized by only one process. arg is ignored. A readers/writer lock initialized with this type must be allocated in memory shared between processes, i.e. either in Sys V shared memory (see shmop(2)) or in memory mapped to a file (see mmap(2)). It is illegal to initialize the object this way and to not allocate it in such shared memory.
USYNC_THREAD The readers/writer lock can synchronize threads in this process only. arg is ignored.
rw_rdlock() gets a read lock on the readers/writer lock pointed to by rwlp. If the readers/writer lock is currently locked for writing, the calling thread blocks until the write lock is freed. Multiple threads may simultaneously hold a read lock on a readers/writer lock.
rw_tryrdlock() tries to get a read lock on the readers/writer lock pointed to by rwlp. If the readers/writer lock is locked for writing, it returns an error; otherwise, the read lock is acquired.
rw_wrlock() gets a write lock on the readers/writer lock pointed to by rwlp. If the readers/writer lock is currently locked for reading or writing, the calling thread blocks until all the read and write locks are freed. At any given time, only one thread may have a write lock on a readers/writer lock.
rw_trywrlock() tries to get a write lock on the readers/writer lock pointed to by rwlp. If the readers/writer lock is currently locked for reading or writing, it returns an error.
rw_unlock() unlocks a readers/writer lock pointed to by rwlp, if the readers/writer lock is locked and the calling thread holds the lock for either reading or writing. One of the other threads that is waiting for the readers/writer lock to be freed will be unblocked, provided there are other waiting threads. If the calling thread does not hold the lock for either reading or writing, no error status is returned, and the program's behavior is unknown.
If successful, these functions return 0. Otherwise, a non-zero value is returned to indicate the error.
The rwlock_init() function will fail if:
EINVAL type is invalid.
The rw_tryrdlock() or rw_trywrlock() functions will fail if:
EBUSY The reader or writer lock pointed to by rwlp was already locked.
These functions may fail if:
EFAULT rwlp or arg points to an illegal address.
See attributes(5) for descriptions of the following attributes:
mmap(2), attributes(5)
These interfaces are also available by way of:
#include <thread.h>
If multiple threads are waiting for a readers/writer lock, the acquisition order is random by default. However, some implementations may bias acquisition order to avoid depriving writers. The current implementation favors writers over readers.
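The man page itself ships no example, but a minimal sketch of how these functions fit together for a process-private lock (USYNC_THREAD, so no shared memory is needed) might look like this:

#include <synch.h>

static rwlock_t lock;
static int shared_value;

void setup(void)
{
    /* Initialize once, before any thread uses the lock. */
    (void) rwlock_init(&lock, USYNC_THREAD, NULL);
}

int reader(void)
{
    int v;

    (void) rw_rdlock(&lock);   /* many readers may hold this at once */
    v = shared_value;
    (void) rw_unlock(&lock);
    return v;
}

void writer(int v)
{
    (void) rw_wrlock(&lock);   /* blocks until all read and write locks are freed */
    shared_value = v;
    (void) rw_unlock(&lock);
}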
|
http://backdrift.org/man/SunOS-5.10/man3c/rw_unlock.3c.html
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Data Structures for Drivers
ddi_dma_lim_x86 - x86 DMA limits structure
#include <sys/ddidmareq.h>
Solaris x86 DDI specific (Solaris x86 DDI). This interface is obsolete.
A ddi_dma_lim structure describes in a generic fashion the possible limitations of a device or its DMA engine. This information is used by the system when it attempts to set up DMA resources for a device. When the system is requested to perform a DMA transfer to or from an object, the request is broken up, if necessary, into multiple sub-requests. Each sub–request conforms to the limitations expressed in the ddi_dma_lim structure.
This structure should be filled in by calling the routine ddi_dmae_getlim(9F). This routine sets the values of the structure members appropriately based on the characteristics of the DMA engine on the driver's parent bus. If the driver has additional limitations, it can further restrict some of the values in the structure members. A driver should not relax any restrictions imposed by ddi_dmae_getlim().
uint_t dlim_addr_lo;    /* low range of 32 bit addressing capability */
uint_t dlim_addr_hi;    /* inclusive upper bound of addressing capability */
uint_t dlim_minxfer;    /* minimum effective dma transfer size */
uint_t dlim_version;    /* version number of structure */
uint_t dlim_adreg_max;  /* inclusive upper bound of incrementing addr reg */
uint_t dlim_ctreg_max;  /* maximum transfer count minus one */
uint_t dlim_granular;   /* granularity (and min size) of transfer count */
short  dlim_sgllen;     /* length of DMA scatter/gather list */
uint_t dlim_reqsize;    /* maximum transfer size in bytes of a single I/O */
The dlim_addr_lo and dlim_addr_hi fields specify the address range that the device's DMA engine can access. The dlim_addr_lo field describes the lower 32–bit boundary of the device's DMA engine. The dlim_addr_hi member describes the inclusive, upper 32–bit boundary. The system allocates DMA resources in a way that the address for programming the device's DMA engine will be within this range. For example, if your device can access the whole 32–bit address range, you can use [0,0xFFFFFFFF]. See ddi_dma_cookie(9S) or ddi_dma_segtocookie(9F).
The dlim_minxfer field describes the minimum effective DMA transfer size (in units of bytes), which must be a power of two. This value specifies the minimum effective granularity of the DMA engine and describes the minimum amount of memory that can be touched by the DMA transfer. As a resource request is handled by the system, the dlim_minxfer value can be modified. This modification is contingent upon the presence (and use) of I/O caches and DMA write buffers between the DMA engine and the object that DMA is being performed on. After DMA resources have been allocated, you can retrieve the resultant minimum transfer value using ddi_dma_devalign(9F).
The dlim_version field specifies the version number of this structure. Set this field to DMALIM_VER0.
The dlim_adreg_max field describes an inclusive upper bound for the device's DMA engine address register. This bound handles a fairly common case where a portion of the address register is simply a latch rather than a full register. For example, the upper 16 bits of a 32–bit address register might be a latch. This splits the address register into a portion that acts as a true address register (lower 16 bits) for a 64–kilobyte segment and a latch (upper 16 bits) to hold a segment number. To describe these limits, you specify 0xFFFF in the dlim_adreg_max structure member.
The dlim_ctreg_max field specifies the maximum transfer count that the DMA engine can handle in one segment or cookie. The limit is expressed as the maximum count minus one. This transfer count limitation is a per-segment limitation. Because the limitation is used as a bit mask, it must be one less than a power of two.
The dlim_granular field describes the granularity of the device's DMA transfer ability, in units of bytes. This value is used to specify, for example, the sector size of a mass storage device. DMA requests are broken into multiples of this value. If there is no scatter/gather capability, then the size of each DMA transfer will be a multiple of this value. If there is scatter/gather capability, then a single segment cannot be smaller than the minimum transfer value, but can be less than the granularity. However, the total transfer length of the scatter/gather list is a multiple of the granularity value.
The dlim_sgllen field specifies the maximum number of entries in the scatter/gather list. This value is the number of segments or cookies that the DMA engine can consume in one I/O request to the device. If the DMA engine has no scatter/gather list, set this field to one.
The dlim_reqsize field describes the maximum number of bytes that the DMA engine can transmit or receive in one I/O command. This limitation is only significant if it is less than ( dlim_ctreg_max +1) * dlim_sgllen. If the DMA engine has no particular limitation, set this field to 0xFFFFFFFF.
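As a rough illustration of the usage described above, a driver would typically let ddi_dmae_getlim(9F) fill in the structure for its parent bus and then only tighten individual fields. The snippet below is a hypothetical sketch, not taken from any real driver, and the specific field values are made up for illustration:

#include <sys/ddidmareq.h>
#include <sys/sunddi.h>

static void
my_driver_set_dma_limits(dev_info_t *dip, ddi_dma_lim_t *limp)
{
        /* Start from the limits of the DMA engine on the parent bus. */
        (void) ddi_dmae_getlim(dip, limp);

        /* A driver may further restrict, but never relax, these values. */
        limp->dlim_granular = 512;   /* hypothetical 512-byte sector device */
        limp->dlim_sgllen   = 1;     /* hypothetical: no scatter/gather support */
}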
See attributes(5) for descriptions of the following attributes:
ddi_dmae(9F), ddi_dma_addr_setup(9F), ddi_dma_buf_setup(9F), ddi_dma_devalign(9F), ddi_dma_segtocookie(9F), ddi_dma_setup(9F), ddi_dma_cookie(9S), ddi_dma_lim_sparc(9S), ddi_dma_req(9S)
|
http://docs.oracle.com/cd/E26502_01/html/E29047/ddi-dma-lim-x86-9s.html
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
How can I import and export a webdriver FireFox profile?
What I would like to do is something like:
from selenium import webdriver

# here I want to import the FF profile from a path
if profile:
    driver = webdriver.Firefox(profile)
else:
    # this is the way I get the WebDriver currently
    driver = webdriver.Firefox()

# doing stuff with driver

# Here I want to save the driver's profile
# so I could import it the next time
You have to decide on a location to store the cached profile, then use functions in the os library to check if there is a file in that location, and load it. To cache the profile in the first place, you should be able to get the path to the profile from webdriver.firefox_profile.path, then copy the contents to your cache location.
All that said, I'd really recommend against this. By caching the profile created at test runtime, you are making your test mutate based upon previous behavior, which means it is no longer isolated and reliably repeatable. I'd recommend that you create a profile separately from the test, then use that as the base profile all the time. This makes your tests predictably repeatable. Selenium is even set up to work well with this pattern, as it doesn't actually use the profile you provide it, but instead duplicates it and uses the duplicate to launch the browser.
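If you decide to do it anyway, a rough sketch along the lines described above might look like this. The cache directory name is made up, and the driver.firefox_profile.path attribute (mentioned above) may differ across Selenium versions, so check it against yours:

import os
import shutil
from selenium import webdriver

PROFILE_CACHE = "/tmp/ff_profile_cache"   # hypothetical cache location

if os.path.isdir(PROFILE_CACHE):
    # Reuse the previously saved profile as the base profile
    profile = webdriver.FirefoxProfile(PROFILE_CACHE)
    driver = webdriver.Firefox(profile)
else:
    driver = webdriver.Firefox()

# ... doing stuff with driver ...

# Save the profile Selenium actually used so it can be imported next time
if not os.path.isdir(PROFILE_CACHE):
    shutil.copytree(driver.firefox_profile.path, PROFILE_CACHE)

driver.quit()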
|
http://ebanshi.cc/questions/3007783/how-to-import-and-export-firefox-profile-for-selenium-webdriver-in-python
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Structural design patterns simplify a design by identifying straightforward ways to realize relationships between entities. There are seven of them: the Adapter, Bridge, Composite, Decorator, Facade, Flyweight and Proxy design patterns. In this article, we will discuss the first two in detail. Let us start with the Adapter design pattern.
ADAPTER DESIGN PATTERNS
The Adapter design pattern helps the user connect two unrelated interfaces so that they can work together; the piece that joins those two interfaces is known as the adapter. In simple terms, it acts as a bridge between two incompatible interfaces. This type of pattern involves a single class which is responsible for joining the functionalities of independent or incompatible interfaces. To get a better understanding, we will take an example. In this example, we will have an interface named MRBOOLAudioVideoPlayer and a concrete class named MRBOOLAudioPlayer implementing the MRBOOLAudioVideoPlayer interface. This MRBOOLAudioPlayer can play mp3 format audio files by default. Now, there is another interface named AdvancedMRBOOLAudioVideoPlayer and concrete classes implementing the AdvancedMRBOOLAudioVideoPlayer interface. These classes can play .avi and .mkv format files.
Now, in this example, we want MRBOOLAudioPlayer to play the other formats as well. To do this, we will create an adapter class named MRBOOLMediaAdapter which implements the MRBOOLAudioVideoPlayer interface and uses AdvancedMRBOOLAudioVideoPlayer objects to play the required format. In this way, MRBOOLAudioPlayer uses the adapter class MRBOOLMediaAdapter, passing it the desired audio type without knowing which class actually plays that format. Let us code this step by step. In the first step, we will create the interfaces MRBOOLAudioVideoPlayer and AdvancedMRBOOLAudioVideoPlayer.
Listing 1: Shows the coding for creating interfaces (for MRBOOLAudioVideoPlayer.java)
public interface MRBOOLAudioVideoPlayer {
    public void play(String audioType, String fileName);
}
Listing 2: Shows the coding for creating interfaces (for AdvancedMRBOOLAudioVideoPlayer.java)
public interface AdvancedMRBOOLAudioVideoPlayer {
    public void playAvi(String fileName);
    public void playMkv(String fileName);
}
Now, after creating interfaces, we need to create concrete classes which implement the AdvancedMRBOOLAudioVideoPlayer in the second step.
Listing 3: Shows the code for creating concrete classes implementing the AdvancedMRBOOLAudioVideoPlayer (for AviPlayer.java)
public class AviPlayer implements AdvancedMRBOOLAudioVideoPlayer {
    @Override
    public void playAvi(String fileName) {
        System.out.println("Playing avi file. Name: " + fileName);
    }

    @Override
    public void playMkv(String fileName) {
        //do nothing
    }
}

Listing 4: Shows the code for creating concrete classes implementing the AdvancedMRBOOLAudioVideoPlayer (for MkvPlayer.java)

public class MkvPlayer implements AdvancedMRBOOLAudioVideoPlayer {
    @Override
    public void playAvi(String fileName) {
        //do nothing
    }

    @Override
    public void playMkv(String fileName) {
        System.out.println("Playing mkv file. Name: " + fileName);
    }
}
Now, after this, the user needs to create the adapter class which implements the MRBOOLAudioVideoPlayer interface.
Listing 5: Shows the code to create adapter class implementing the MRBOOLAudioVideoPlayer interface (for MRBOOLMediaAdapter.java)
public class MRBOOLMediaAdapter implements MRBOOLAudioVideoPlayer {

    AdvancedMRBOOLAudioVideoPlayer advancedMRBOOLMusicPlayer;

    public MRBOOLMediaAdapter(String audioType) {
        if (audioType.equalsIgnoreCase("avi")) {
            advancedMRBOOLMusicPlayer = new AviPlayer();
        } else if (audioType.equalsIgnoreCase("mkv")) {
            advancedMRBOOLMusicPlayer = new MkvPlayer();
        }
    }

    @Override
    public void play(String audioType, String fileName) {
        if (audioType.equalsIgnoreCase("mkv")) {
            advancedMRBOOLMusicPlayer.playMkv(fileName);
        } else if (audioType.equalsIgnoreCase("avi")) {
            advancedMRBOOLMusicPlayer.playAvi(fileName);
        }
    }
}
Now, we will create concrete classes for implementing the MRBOOLAudioVideoPlayer interface.
Listing 6: Shows the coding for creating concrete class which implement the MRBOOLAudioVideoPlayer interface (for MRBOOLAudioPlayer.java)
public class MRBOOLAudioPlayer implements MRBOOLAudioVideoPlayer {

    MRBOOLMediaAdapter MRBOOLmediaAdapter;

    @Override
    public void play(String audioType, String fileName) {
        // inbuilt support to play mp3 music files
        if (audioType.equalsIgnoreCase("mp3")) {
            System.out.println("Playing mp3 file. Name: " + fileName);
        }
        // MRBOOLmediaAdapter is providing support to play other file formats
        else if (audioType.equalsIgnoreCase("avi") || audioType.equalsIgnoreCase("mkv")) {
            MRBOOLmediaAdapter = new MRBOOLMediaAdapter(audioType);
            MRBOOLmediaAdapter.play(audioType, fileName);
        } else {
            System.out.println("Invalid media. " + audioType + " format not supported");
        }
    }
}
Now, after this, the next step is to use the MRBOOLAudioPlayer to play different types of format.
Listing 7: Shows the code for doing the above mentioned task (MRBOOLAdapterPattern.java)
public class MRBOOLAdapterPattern {
    public static void main(String[] args) {
        MRBOOLAudioPlayer MRBOOLaudioPlayer = new MRBOOLAudioPlayer();

        MRBOOLaudioPlayer.play("mp3", "carnival of rust.mp3");
        MRBOOLaudioPlayer.play("avi", "tokyo drift.avi");
        MRBOOLaudioPlayer.play("mkv", "summer of 67.mkv");
        MRBOOLaudioPlayer.play("mp4", "traveller.avi");
    }
}
Now, the last step is to verify the output.
Listing 8: Shows the output of verify the Adapter Design Pattern use
Playing mp3 file. Name: carnival of rust.mp3
Playing avi file. Name: tokyo drift.avi
Playing mkv file. Name: summer of 67.mkv
Invalid media. mp4 format not supported
BRIDGE DESIGN PATTERN
Now, we will study the Bridge design pattern in Java. This type of design pattern is used when the user wants to decouple an abstraction from its implementation so that the two can vary independently. The pattern involves an interface which acts as a bridge, making the functionality of concrete classes independent of the interface implementer classes. Both types of classes can be altered structurally without affecting each other. Let us take an example to understand this in detail.
Listing 9: Shows the code for creating bridge implementer interface (for DrawDifferentColorCircles.java)
public interface DrawDifferentColorCircles {
    public void drawCircle(int diameter, int a, int b);
}
Listing 10: Shows the code for creating concrete bridge implementer classes for implementing the DrawDifferentColorCircles interface (for BlueCirlce.java)
public class BlueCircle implements DrawDifferentColorCircles {
    @Override
    public void drawCircle(int diameter, int a, int b) {
        System.out.println("Drawing Circle[ color: blue, diameter: " + diameter + ", a: " + a + ", " + b + "]");
    }
}
Listing 11: Shows the code for creating concrete bridge implementer classes for implementing the DrawDifferentColorCircles interface (for GreyCirlce.java)
public class GreyCircle implements DrawDifferentColorCircles {
    @Override
    public void drawCircle(int diameter, int a, int b) {
        System.out.println("Drawing Circle[ color: grey, diameter: " + diameter + ", a: " + a + ", " + b + "]");
    }
}
Listing 12: Shows the code for creating an abstract class Figure using the DrawDifferentColorCircles interface
public abstract class Figure {
    protected DrawDifferentColorCircles drawdifferentcolorcircles;

    protected Figure(DrawDifferentColorCircles drawdifferentcolorcircles) {
        this.drawdifferentcolorcircles = drawdifferentcolorcircles;
    }

    public abstract void draw();
}
Listing 13: Shows the code for creating concrete class implementing the Figure interface
public class Circle extends Figure {
    private int a, b, diameter;

    public Circle(int a, int b, int diameter, DrawDifferentColorCircles drawdifferentcolorcircles) {
        super(drawdifferentcolorcircles);
        this.a = a;
        this.b = b;
        this.diameter = diameter;
    }

    public void draw() {
        drawdifferentcolorcircles.drawCircle(diameter, a, b);
    }
}
Finally, we can put these classes to use and verify the output.
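The article stops short of showing a driver class, so here is a minimal sketch (the class name MRBOOLBridgePattern and the coordinate values are made up for illustration) that exercises both circle implementations through the Figure abstraction:

public class MRBOOLBridgePattern {
    public static void main(String[] args) {
        Figure blueCircle = new Circle(10, 10, 100, new BlueCircle());
        Figure greyCircle = new Circle(20, 20, 50, new GreyCircle());

        // Each Figure delegates the actual drawing to its DrawDifferentColorCircles implementer
        blueCircle.draw();
        greyCircle.draw();
    }
}

Because the abstraction (Figure) and the implementer (DrawDifferentColorCircles) sit on opposite sides of the bridge, either hierarchy can grow without touching the other.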
Hope you like reading it!
|
http://mrbool.com/learning-about-adapter-and-bridge-design-patterns-in-java/27599
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
What is HTML File Upload?
Let’s first remind ourselves what HTML File Upload is. If you don’t need to brush up on HTML file upload then you can just skip to the next section…
You enable support for HTML File Upload in an HTML form by using the attribute enctype=”multipart/form-data” and then have a input field of type “file” like this:
1: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
2: <html>
3: <head>
4: <title>File Upload Sample</title>
5: </head>
6: <body>
7: <form action="" enctype="multipart/form-data" method="POST">
8: What is your name?
9: <input name="submitter" size="40" type="text"><br>
10: What file are you uploading?
11: <input name="data" size="40" type="file">
12: <br>
13: <input type="submit">
14: </form>
15: </body>
16: </html>
This will cause all the data to be encoded using MIME multipart as follows when submitted in an HTTP POST request:
1: Content-type: multipart/form-data, boundary=AaB03x
2:
3: --AaB03x
4: content-disposition: form-data; name="submitter"
5:
6: Henrik Nielsen
7: --AaB03x
8: content-disposition: form-data ; name="data"; filename="file1.txt"
9: Content-Type: text/plain
10:
11: ... contents of file1.txt ...
12: --AaB03x--
Note how the input field names in the form are mapped to a Content-Disposition header in the MIME multipart message. Each form field such as the submitter field above is encoded as its own MIME body part (that is the term for each segment between the boundaries — above it is the string –AaB03x).
In HTML5 many browsers support upload of multiple files within a single form submission using the multiple keyword:
1: <!DOCTYPE HTML>
2: <html>
3: <head>
4: <title>HTML5 Multiple File Upload Sample</title>
5: </head>
6: <body>
7: <form action="" enctype="multipart/form-data" method="POST">
8: What is your name?
9: <input name="submitter" size="40" type="text"><br>
10: What files are you uploading?
11: <input name="data" type=file multiple>
12: <br>
13: <input type="submit" />
14: </form>
15: </body>
16: </html>
17:
The principle in submitting the data is the same as for HTML 4 but you can now select multiple files that each end up in their own MIME body part.
Creating an ApiController
First we create an ApiController that implements an HTTP POST action handling the file upload. Note that the action returns Task<T> as we read the file asynchronously.
Note: We use the new async/await keywords introduced in Visual Studio 11 Beta, but you can equally well use Tasks and the ContinueWith pattern already present in Visual Studio 2010. The action saves the uploaded files under “c:\tmp\uploads” and returns a response containing the submitter and the file names we use on the server. Obviously this is not a typical response, but this is just so that you can see the information.
1: public class UploadController : ApiController
2: {
3: public async Task<List<string>> PostMultipartStream()
4: {
5: // Verify that this is an HTML Form file upload request
6: if (!Request.Content.IsMimeMultipartContent("form-data"))
7: {
8: throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
9: }
10:
11: // Create a stream provider for setting up output streams that saves the output under c:\tmp\uploads
12: // If you want full control over how the stream is saved then derive from MultipartFormDataStreamProvider
13: // and override what you need.
14: MultipartFormDataStreamProvider streamProvider = new MultipartFormDataStreamProvider("c:\\tmp\\uploads");
15:
16: // Read the MIME multipart content using the stream provider we just created.
17: IEnumerable<HttpContent> bodyparts = await Request.Content.ReadAsMultipartAsync(streamProvider);
18:
19: // The submitter field is the entity with a Content-Disposition header field with a "name" parameter with value "submitter"
20: string submitter;
21: if (!bodyparts.TryGetFormFieldValue("submitter", out submitter))
22: {
23: submitter = "unknown";
24: }
25:
26: // Get a dictionary of local file names from stream provider.
27: // The filename parameters provided in Content-Disposition header fields are the keys.
28: // The local file names where the files are stored are the values.
29: IDictionary<string, string> bodyPartFileNames = streamProvider.BodyPartFileNames;
30:
31: // Create response containing information about the stored files.
32: List<string> result = new List<string>();
33: result.Add(submitter);
34:
35: IEnumerable<string> localFiles = bodyPartFileNames.Select(kv => kv.Value);
36: result.AddRange(localFiles);
37:
38: return result;
39: }
40: }
In the above code we added an extension method for getting the value of the submitter form field as a string. That extension method looks like this (don’t worry, we will integrate this better):
1: public static bool TryGetFormFieldValue(this IEnumerable<HttpContent> contents, string dispositionName, out string formFieldValue)
2: {
3: if (contents == null)
4: {
5: throw new ArgumentNullException("contents");
6: }
7:
8: HttpContent content = contents.FirstDispositionNameOrDefault(dispositionName);
9: if (content != null)
10: {
11: formFieldValue = content.ReadAsStringAsync().Result;
12: return true;
13: }
14:
15: formFieldValue = null;
16: return false;
17: }
Hosting the Controller
In this example we build a simple console application for hosting the ApiController while setting the MaxReceivedMessageSize and TransferMode in the HttpSelfHostConfiguration. We then use a simple HTML form to point to the ApiController so that we can use a browser to upload a file. That HTML form can be hosted anywhere – here we just drop it into the folder C:\inetpub\wwwroot\Samples and serve it using IIS.
1: class Program
2: {
3: static void Main(string[] args)
4: {
5: var baseAddress = "";
6: HttpSelfHostServer server = null;
7:
8: try
9: {
10: // Create configuration
11: var config = new HttpSelfHostConfiguration(baseAddress);
12:
13: // Set the max message size to 1M instead of the default size of 64k and also
14: // set the transfer mode to 'streamed' so that don't allocate a 1M buffer but
15: // rather just have a small read buffer.
16: config.MaxReceivedMessageSize = 1024 * 1024;
17: config.TransferMode = TransferMode.Streamed;
18:
19: // Add a route
20: config.Routes.MapHttpRoute(
21: name: "default",
22: routeTemplate: "api/{controller}/{id}",
23: defaults: new { controller = "Home", id = RouteParameter.Optional });
24:
25: server = new HttpSelfHostServer(config);
26:
27: server.OpenAsync().Wait();
28:
29: Console.WriteLine("Hit ENTER to exit");
30: Console.ReadLine();
31: }
32: finally
33: {
34: if (server != null)
35: {
36: server.CloseAsync().Wait();
37: }
38: }
39: }
40: }
Trying it Out
Start the console application, then point your browser at the HTML form, for example.
Once you have uploaded a file you will find it in the folder c:\tmp\uploads and also get a content-negotiated result which, if asking for JSON, looks like this:
["henrik","c:\\tmp\\uploads\\sample.random"]
Have fun!
Henrik
Hello Henrik,
Is there a sample d/l of this project that I might be able to look at, I have tried building something similar to what you have, and would like to see a working one in progress.
Thank-You So Much!
Bryan
Good post Henrik.
Is it possible to rename the file before it is saved to the specified location?
Thanks,
– J
so why use this rather than the AsyncFileUpload control?
I try to use this example with HTML5 input multiple tag but not work with IE . Do you have any idea or another exaple that i cn see?
Thanks,
Tony
How can i use webapi to upload a file, when i am forced to do a postback without HTML5 features in IE 9.
Because if i am posting form data , my preference is to return a view with the names of the files uploaded.
I can't do this if i post it to a web api.
Nice tutorials, thank you. I tried it in my code found some issues and below is solution of it:
stackoverflow.com/…/asp-net-webapi-file-upload-issues-with-ie9
Does this work the same way with httpClient?
Any ideas on how a file can be sent from a client that does not contain an HTML form, IOW something like a WinForms app (specifically, a CF app)?
This is cool man….…/ajax-fileupload-control-in-aspnet-or.html
I'm not having any luck getting this to compile, apparently FirstDispositionNameOrDefault() has been removed after "the Beta" so I'm going to look elsewhere now.
stackoverflow.com/…/cant-use-firstdispositionnameordefault-after-asp-net-mvc4-webapi-beta-upgrade
what is the extension method for FileUpload. I tried(FileUpload .Text )but it doesnt work. Please help me out
|
https://blogs.msdn.microsoft.com/henrikn/2012/03/01/asynchronous-file-upload-using-asp-net-web-api/
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Today Curious Greg is going to Houston to visit the Johnson Space Center. Before he leaves he wanted to share the final configuration pieces to the hybrid lab. When we last left the lab we configured our virtual directories. Today we will start with address policy. From the on-premises hybrid server open the Exchange Management Console and navigate to Organization Configuration > Hub Transport. Edit the default email address policy. On the E-mail addresses page select Add to enter the email address for your service-routing namespace. In my case service.edustl.com.
On the SMTP Email Address dialog select the Email address local part check box and select use alias. Also select the accepted domain for the email address and browse to service.edustl.com.
Apply the email address policy immediately.
Enable Outlook Anywhere.
This should be done already and I won’t cover in this blog. To enable check out this.
Configure autodiscover DNS records.
I used an A record for autodiscover.edustl.com and CNAME for autodiscover.service.edustl.com. Since my domain is a split-brain DNS I also configured my internal records.
Configure Federation Gateway
Ensure you have a delegated domain namespace. In my case I named mine exchangefederation.edustl.com.
New-Federationtrust or use EMC. Ensure you use domainproof to get proof for TXT records for both domain and service domain. In my case both edustl.com and service.edustl.com
Once created then you must configure the federation trust. If you don’t get the Application Identifier than your domain proof is probably misconfigured.
Organization Configuration
Next tab over to organization configuration and create new organization relationship. I used the shell but this can be configured in the EMC. Again all this is configured on the hybrid server.
Below I show screenshots of the properties of the org relationship. First one is the free/busy information access I give the cloud tenant.
Second is the external organization properties.
Lots of conflicting information here. I only needed edustl.com and service.edustl.com. Originally I thought I would need the service tenant (*.onmicrosoft.com). This is not needed and caused issues with free/busy. I've also seen the app URI as both and outlook.com. It worked for me with just outlook.com. Ensure you have WSSecurity at the end of the autodiscover endpoint. Also – if you recreated the virtual directory, ensure you add WSSecurity. Also don't forget the TargetSharingEpr which corresponds to the POD that you see when you remote powershell into your cloud tenant.
The organization relationship must also be configured on the cloud side. I launched powershell and configured the same information.
Set-OrganizationRelationship -Identity "To Cloud" -DomainNames "service.edustl.com","edustl.com" -MailTipsAccessEnabled $True -MailTipsAccessLevel All -DeliveryReportEnabled $True
Set-OrganizationRelationship -Identity "To On-premises" -DomainNames "exchangefederation.edustl.com","edustl.com" -MailTipsAccessEnabled $True -MailTipsAccessLevel All -DeliveryReportEnabled $True
Mailflow
Send and Receive Connectors with on-premises hybrid Server.
Set-SendConnector or EMC. Specify the FQDN for the connector such as mail.edustl.com. Set the Address space for the service domain. *.service.edustl.com. Use DNS and the source server is the hybrid server.
Configure the Receive Connector.
Ensure that the IP addresses you select are from the FOPE configuration. Also ensure you state the subnet mask.
Remote Domain
Next you setup the remote domains on the on-premises server. Inbound and outbound remote domains. My inbound is edustl.com and outbound is service.edustl.com.
Using the Deployment assistant setup the remote domains:
New-ReceiveConnector -Name "From Cloud" -Usage Internet -RemoteIPRanges <FOPE Outbound IP Addresses> -Bindings 0.0.0.0:25 -FQDN mail2.contoso.com -TlsDomainCapabilities mail.messaging.microsoft.com:AcceptOorgProtocol (remember to get IP addresses from FOPE procedure outlined in deployment assistant).
When last command was setup ran into problem with duplicate domain on FOPE. It appears in domains as duplicatedomain-xxxxxxxxxxxxxxxxxxxxxxx(GUID).edustl.com.
If you use ECP and go to Mail control > Domains and Protection. Change from shared to hosted and back to shared. The error clears.
The last thing to configure is the FOPE configuration. You’ll need both inbound and outbound connectors.
From there you are all set! The last thing to do is to configure MX records based on how you want incoming mail. Use both the deployment assistant and your external DNS provider to configure this.
My service.edustl.com was setup to match service-edustl-com.mail.eo.outlook.com in the hosted namespace. My MX record for on-premises was setup for mail.edustl.com.
I’d tell you more but it appears I got in the capsule during a launch and will not be on earth in a few more seconds. Say goodbye to Curious Greg. Take care.
This is great but will this work with SBS 2008? I know if I ran the wizards again it would make a mess….
|
https://blogs.technet.microsoft.com/educloud/2011/10/11/curious-greg-builds-a-lab-part-iv/
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
I'm trying to create a cumulative list, but my output clearly isn't cumulative. Does anyone know what I'm doing wrong?
Thanks
import numpy as np
import math
import random
l=[]
for i in range (50):
    def nextTime(rateParameter):
        return -math.log(1.0 - random.random()) / rateParameter
    a = np.round(nextTime(1/15), 0)
    l.append(a)
np.cumsum(l)
print(l)
The cumulative sum is not taken in place, you have to assign the return value:
cum_l = np.cumsum(l) print(cum_l)
You don't need to place that function in the for loop. Putting it outside will avoid defining a new function at every iteration and your code will still produce the expected result.
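Putting both points together, a corrected version of the snippet might look like this (assuming Python 3, where 1/15 is a float):

import numpy as np
import math
import random

def nextTime(rateParameter):
    return -math.log(1.0 - random.random()) / rateParameter

l = []
for i in range(50):
    l.append(np.round(nextTime(1/15), 0))

cum_l = np.cumsum(l)   # cumsum returns a new array; it does not modify l
print(cum_l)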
|
https://codedump.io/share/wJjS9Vj9pxdr/1/cumulative-sum-list
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Microsoft IssueVision is a developer sample application that demonstrates the best practices for building Smart Client applications. It was developed specifically for the DevDays 2004 Smart Client track, and it provides a sample help desk management application. The original source code can be downloaded here.
In this article, I describe an implementation of adding WS-SecureConversation to the Microsoft IssueVision sample application using WSE 2.0.
The WS-SecureConversation specification allows clients and Web Services to establish a token-based, secure conversation for the duration of a session. It is analogous to the Secure Sockets Layer (SSL) protocol that provides on-demand, secure communications over the HTTP transport channel. Secure conversation is based on security tokens that are procured by a service token provider. This process involves an initial amount of overhead, but once the channel is established, the client and service exchange a lightweight, signed security context token, which optimizes message delivery time compared with using regular security tokens. The security context token enables the same signing and encryption features as other security tokens, like UsernameToken or X509SecurityToken.
After copying the contents of the source zip file to a location on your local disk, we must complete the following setup steps before we can compile and run the application.
This application uses the sample certificates included in WSE 2.0. Follow the procedures below to install the server certificate. (Note: you should not use these sample certificates in a production environment. Instead, contact a certificate authority, and request your own certificate.)
Note: this certificate will be used to encrypt messages between the applications. The client application will use the public key to encrypt the message and the service will use the private key to decrypt the message. The client needs to have the public portion of the certificate available in the Current User store.
Note: if you don't have an Other People store under Current User, open Internet Explorer, select Tools, Internet Options, Content, and press the Certificates button. You should see an Other People tab in the Certificates dialog. You can import the certificate here through this interface or you can return to MMC and refresh the Current User tree and Other People should now show up.
Note: this certificate only contains the public portion of Server Private.pfx. The client will use this to encrypt messages and the server will use the private key installed in the Local Machine store to decrypt the messages.
To install the sample database, please run DataReset.cmd included in the source zip file. The IssueVision XML Web Service requires SQL Server authentication (mixed mode) to be enabled on the SQL Server 2000 server. Follow the procedures below to verify:
Finally, run the CreateSampleVdir.vbs script included in the source zip file to automatically create the required virtual directory. (You can later uninstall the virtual directory using the DeleteSampleVdir.vbs script.)
A secure conversation is initiated by a client that requires an on-demand secure communication session with a Web Service, and it consists of the following three steps: the client requests a security context token from a security token service; the client uses that token to sign and encrypt its requests; and the Web Service validates the token and uses the same token to sign and encrypt its responses.
The client initiates the secure conversation by issuing a signed request to the security token service (STS) provider for a security context token. The client uses UsernameToken to sign the security token request, and uses X509SecurityToken to encrypt the SOAP message sender's entropy value. If the request is successful, the client caches the security context token for further communication. The following function RequestSCTByUsername() implements this:
RequestSCTByUsername()
public static SecurityContextToken RequestSCTByUsername(String username,
String password)
{
SecurityContextToken sct = null;
// Request a new security context token
// if one was not available from m_sctData
if (m_sctData.m_username == string.Empty ||
m_sctData.m_password == string.Empty ||
m_sctData.m_username != username || m_sctData.m_password !=
password || m_sctData.m_sct.IsExpired)
{
// Create a UsernameToken to use as the base
// for the security context token
SecurityToken token = new UsernameToken(username,
password, PasswordOption.SendPlainText);
// Retrieve the server certificate
SecurityToken issuerToken = GetServerToken(true);
// Create a SecurityContextTokenServiceClient
// (STSClient) that will get the SecurityContextToken
String secureConvEndpoint = ConfigurationSettings.AppSettings["tokenIssuer"];
SecurityContextTokenServiceClient STSClient = new
SecurityContextTokenServiceClient(new Uri(secureConvEndpoint));
// Request the security context token,
// use the client's signing token as the base
sct = STSClient.IssueSecurityContextTokenAuthenticated(token, issuerToken);
// Cache the security context token in m_sctData
m_sctData = new SctData(username, password, sct);
}
return m_sctData.m_sct;
}
On the server side, in order to validate digital signatures for incoming SOAP messages created using a UsernameToken, we override the AuthenticateToken method of the UsernameTokenManager class. The username and password are verified by checking whether the user exists in the database and by comparing the password hash value stored in the IssueVision database:
AuthenticateToken
UsernameTokenManager
[SecurityPermissionAttribute(SecurityAction.Demand,
Flags=SecurityPermissionFlag.UnmanagedCode)]
public class CustomUsernameTokenManager : UsernameTokenManager
{
protected override String AuthenticateToken( UsernameToken token )
{
// This method returns the password for the provided username
// WSE will make the determination if they match
DataSet dataSet = new DataSet();
string dbPasswordHash;
try
{
SqlConnection conn = new SqlConnection(Common.ConnectionString);
SqlCommand cmd = new SqlCommand("GetUser", conn);
cmd.Parameters.Add("@UserName", token.Username);
cmd.CommandType = CommandType.StoredProcedure;
SqlDataAdapter da = new SqlDataAdapter(cmd);
da.Fill(dataSet);
}
catch (Exception ex)
{
EventLogHelper.LogFailureAudit(string.Format("The GetUser" +
" stored procedure encounted a problem: {0}",
ex.ToString()));
throw new SoapException(string.Empty,
SoapException.ServerFaultCode, "Database");
}
// does the user exist?
if (dataSet.Tables[0].Rows.Count == 0)
{
EventLogHelper.LogFailureAudit(string.Format("The username" +
" {0} does not exist.", token.Username));
throw new SoapException(string.Empty,
SoapException.ClientFaultCode, "Security");
}
else
{
// we found the user, verify the password
// hash by comparing the Salt + PasswordHash
DataRow dataRow = dataSet.Tables[0].Rows[0];
dbPasswordHash = (string)dataRow["PasswordHash"];
string dbPasswordSalt = (string)dataRow["PasswordSalt"];
// create a hash based on the user's salt and the input password
string passwordHash = HashString(dbPasswordSalt + token.Password);
// does the computed hash match the database hash?
if (string.Compare(dbPasswordHash, passwordHash) != 0)
{
EventLogHelper.LogFailureAudit(string.Format("The password" +
" for the username {0} was incorrect.",
token.Username));
throw new SoapException(string.Empty,
SoapException.ClientFaultCode, "Security");
}
else
{
return token.Password;
}
}
}
// generates a hash of the input plain text
private static string HashString(string textToHash)
{
SHA1CryptoServiceProvider SHA1 = new SHA1CryptoServiceProvider();
byte[] byteValue = System.Text.Encoding.UTF8.GetBytes(textToHash);
byte[] byteHash = SHA1.ComputeHash(byteValue);
SHA1.Clear();
return Convert.ToBase64String(byteHash);
}
}
From the client side, every call to the Web Service starts by requesting the security context token cached from the above steps. We then use the security context token to sign and encrypt the SOAP request message:
public static IVDataSet SendReceiveIssues(DataSet changedIssues,
DateTime lastAccessed)
{
IVDataSet data = null;
IssueVisionServicesWse dataService = GetWebServiceReference();
try
{
// Request the security context token
SecurityContextToken sct =
Wse2HelperClient.RequestSCTByUsername(UserSettings.Instance.Username,
UserSettings.Instance.Password);
// Use the security context token to sign
// and encrypt a request to the Web service
SoapContext requestContext = dataService.RequestSoapContext;
requestContext.Security.Tokens.Add(sct);
requestContext.Security.Elements.Add( new MessageSignature( sct ) );
requestContext.Security.Elements.Add( new EncryptedData( sct ) );
data = dataService.SendReceiveIssues(changedIssues, lastAccessed);
}
catch (WebException)
{
HandleWebServicesException(WebServicesExceptionType.WebException);
}
catch (SoapException soapEx)
{
if (soapEx.Actor == "Security")
{
HandleWebServicesException(WebServicesExceptionType.SoapException);
}
else
{
HandleWebServicesException(WebServicesExceptionType.WebException);
}
}
catch (Exception)
{
HandleWebServicesException(WebServicesExceptionType.Exception);
}
return data;
}
On the server side, every incoming request is first verified whether it is signed and encrypted. Also, we check whether it is signed by a security context token. When all these conditions are met, we use the same security context token to sign and encrypt the response SOAP message:
[WebMethod(Description="Synchronize data by" +
" send and recieving from the remote client.")]
public IVDataSet SendReceiveIssues(DataSet changedIssues, DateTime lastAccessed)
{
SoapContext requestContext = RequestSoapContext.Current;
// Reject any requests which are not valid SOAP requests
Wse2HelperServer.VerifyMessageParts(requestContext);
Wse2HelperServer.VerifyMessageSignature(requestContext);
Wse2HelperServer.VerifyMessageEncryption(requestContext);
// Check if the Soap Message is Signed with an SCT.
SecurityContextToken sct =
Wse2HelperServer.GetSigningToken(requestContext)
as SecurityContextToken;
if (sct == null)
{
throw new SoapException("The request is not signed with an SCT.",
SoapException.ServerFaultCode, "Security");
}
// Use the SCT to sign and encrypt the response
SoapContext responseContext = ResponseSoapContext.Current;
responseContext.Security.Tokens.Add(sct);
responseContext.Security.Elements.Add(new MessageSignature(sct));
responseContext.Security.Elements.Add(new EncryptedData(sct));
return new IVData().SendReceiveIssues(changedIssues, lastAccessed);
}
To monitor the actual encrypted and signed SOAP messages sent between the client and the web server, we can check the input and output message trace files InputTrace.webinfo and OutputTrace.webinfo on either the client or the web server side.
|
https://www.codeproject.com/articles/10778/implementing-ws-secureconversation-in-microsoft-is?fid=192548&df=90&mpp=10&sort=position&spc=none&select=1154998&tid=1209393
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
The use of JavaScript has exploded over time. Now it is practically unheard of for a website not to utilize JavaScript to some extent. As a web developer who has concentrated on back-end coding in C# and front-end look and feel via HTML and CSS, my skills in JavaScript evolved over time instead of by conscious effort. While this is not uncommon, it can allow for some bad habits to be formed. This set of best practices is my way of taking a step back and addressing JavaScript as a first-class language, with both good parts and bad parts. My concentration will be on just JavaScript, regardless of where it is run. However, you will see references in here to the browser and to Visual Studio. This is simply because that is where I live, not because either are necessary for these best practices to apply. And so, without further ado, let's jump right in and see just how far down this rabbit hole goes.
When testing equality, a lot of languages with syntax similar to JavaScript use the double equals operator (==). However, in JavaScript you should always use triple equals (===). The difference is in how equality is determined. A triple equals operator evaluates the two items based upon their type and value. It makes no interpretations. Let's look at a couple examples:
if(1 === '1') //Returns false
if(1 == '1') //Returns true
if(0 === '') //Returns false
if(0 == '') //Returns true
The first line would equal false because the number one does not equal the character 1. That is what we would expect. The double equals operator, on the other hand, will attempt to coerce the two values to be the same type before comparing equality. This can lead to unexpected results. In the second grouping, the result using the double equals would be true. That probably isn't what we were expecting.
Just to be clear here, the same rule applies to the inequality operator as well. Looking at our above tests, we can see that both types of inequality operators work the same way as their counterparts:
if(1 !== '1') //Returns true
if(1 != '1') //Returns false
if(0 !== '') //Returns true
if(0 != '') //Returns false
The bottom line here is that we should always use the triple equals operator (or !== for not equal) rather than the double equals. The results are far more expected and predictable. The only exception would be once you are positive you understand what is happening and you absolutely need the coercion before comparison.
Most developers won't intentionally fail to put semicolons in the proper places. However, you need to be aware that the browser will usually put in semicolons where it thinks they are necessary. This can enable a bad habit that will come back to bite you down the road. In some instances, the compiler might assume that a semicolon is not needed, which will introduce tricky, hard-to-find bugs into your code. Avoid this by always adding the proper semicolons. A good tool to help you check your JavaScript for forgotten semicolons is JSLint.
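The classic illustration of automatic semicolon insertion biting you is a return statement whose value starts on the next line; this small example shows the trap:

function getStatus() {
    return
    {
        ok: true
    };
}

// The parser inserts a semicolon after "return", so the object literal below it
// is never returned and getStatus() evaluates to undefined.
console.log(getStatus()); // undefined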
As I mentioned above, JSLint is a great tool for helping you identify common problems in your JavaScript code. You can paste your code into the website listed above, you can use a different site like JavaScriptLint.com, or you can use one of the many downloadable JSLint tools. For instance, Visual Studio has an add-in for JSLint that will allow you to check for errors at compile-time (or manually).
Whatever you choose to do, the key point here is to run a tool like JSLint on your code. It will pick up bad code that is being masked by the browser. This will make your code cleaner and it will help to prevent those pesky bugs from showing up in production.
When you first start using JavaScript, the temptation is to just declare everything and use it as needed. This places all of your functions and variables into the global scope. The problem with this, besides it being sloppy, is that it makes your code extremely vulnerable to being affected by other code. For instance, consider the following example:
var cost = 5;
//...time goes by...
console.log(cost);
Imagine your surprise when the log shows "expensive" instead of 5. When you trace it down, you might find that a different piece of JavaScript somewhere else used a variable called cost to store text about cost for a different section of your application.
The solution to this is namespacing. To create a namespace, you simply declare a variable and then attach the properties and methods you want to it. The above code would be improved to look like this:
var MyNamespace = {};
MyNamespace.cost = 5;
//...time goes by...
console.log(MyNamespace.cost);
The resulting value would be 5, as expected. Now you only have one variable directly attached to the global context. The only way you should have a problem with naming conflicts now is if another application uses the same namespace as you. This problem will be much easier to diagnose, since none of your code will work (all of your methods and properties will be wiped out).
The Eval function allows us to pass a string to the JavaScript compiler and have it execute as JavaScript. In simple terms, anything you pass in at runtime gets executed as if it were added at design time. Here is an example of what that might look like:
eval("alert('Hi');");
This would pop up an alert box with the message "Hi" in it. The text inside the eval could have been passed in by the user or it could have been pulled from a database or other location.
There are a couple reasons why the eval function should be avoided. First, it is significantly slower than design time code. Second, it is a security risk. When code is acquired and executed at runtime, it opens a potential threat vector for malicious programmers to exploit. Bottom line here is that this function should be avoided at all costs.
When is 0.1 + 0.2 not equal to 0.3? When you do the calculation in JavaScript. The actual value of 0.1 + 0.2 comes out to be something like 0.30000000000000004. The reason for this (nope, not a bug) is because JavaScript uses Binary Floating Point numbers. To get around this issue, you can multiply your numbers to remove the decimal portion. For instance, if you were to be adding up the cost of two items, you could multiply each price by 100 and then divide the sum by 100. Here is an example:
var hamburger = 8.20;
var fries = 2.10;
var total = hamburger + fries;
console.log(total); //Outputs 10.299999999999999
hamburger = hamburger * 100;
fries = fries * 100;
total = hamburger + fries;
total = total / 100;
console.log(total); //Outputs 10.3
Most developers that write software in other C-family programming languages use the Allman style of formatting for block quotes. This places the opening curly brace on its own line. This pattern would look like this in JavaScript:
if(myState === 'testing')
{
console.log('You are in testing');
}
else
{
console.log('You are in production');
}
This will work most of the time. However, JavaScript is designed in such a way that following the K&R style of formatting for blocks is a better idea. This format starts the opening curly brace on the same line as the preceding line of code. It looks like this:
if(myState === 'testing') {
console.log('You are in testing');
} else {
console.log('You are in production');
}
While this may only seem like a stylistic difference, there can be times when there is an impact on your code if you use the Allman style. Earlier we talked about the browser inserting semicolons where it felt they were needed. One fairly serious issue with this is on return statements. Let's look at an example:
return
{
age: 15,
name: 'Jon'
}
You would assume that the object would be returned but instead the return value will be undefined. The reason for this is because the browser has inserted a semicolon after the word return, assuming that one is missing. While return is probably the most common place where you will experience this issue, it isn't the only place. Browsers will add semi-colons after other statements that must not be broken by a line break as well, such as throw, break, and continue.
It is because of this type of issue that it is considered best practice to always use the K&R style for blocks to ensure that your code always works as expected.
There are a number of shortcuts and one-liners that can be used in lieu of their explicit counterparts. In most cases, these shortcuts actually encourage errors in the future. For instance, this is acceptable notation:
if (i > 3)
doSomething();
The problem with this is what could happen in the future. Say, for instance, a programmer were told to reset the value of i once the doSomething() function was executed. The programmer might modify the above code like so:
if (i > 3)
doSomething();
i = 0;
In this instance, i will be reset to zero even if the if statement evaluates to false. The problem might not be apparent at first and this issue doesn't really jump off the page when you are reading over the code in a code review.
Instead of using the shortcut, take the time necessary to turn this into the full notation. Doing so will protect you in the future. The final notation would look like this:
if (i > 3) {
doSomething();
}
Now when anyone goes in to add additional logic, it becomes readily apparent where to put the code and what will happen when you do.
Most languages that conform to the C-family style will not put an item into memory until the program execution hits the line where the item is initialized.
JavaScript is not like most other languages. It utilizes function-level scoping of variables and functions. When a variable is declared, the declaration statement gets hoisted to the top of the function. The same is true for functions. For example, this is permissible (if horrible) format:
function simpleExample(){
i = 7;
console.log(i);
var i;
}
What happens behind the scenes is that the var i; line declaration gets hoisted to the top of the simpleExample function. To make matters more complicated, not only the declaration of a variable gets hoisted but the entire function declaration gets hoisted. Let's look at an example to make this clearer:
function complexExample() {
i = 7;
console.log(i); //The message says 7
console.log(testOne()); //This gives a type error saying testOne is not a function
console.log(testTwo()); //The message says "Hi from test two"
var testOne = function(){ return 'Hi from test one'; }
function testTwo(){ return 'Hi from test two'; }
var i = 2;
}
Let's rewrite this function the way JavaScript sees it once it has hoisted the variable declarations and functions:
function complexExample() {
var testOne;
function testTwo(){ return 'Hi from test two'; }
var i;
i = 7;
console.log(i); //The message says 7
console.log(testOne()); //This gives a type error saying testOne is not a function
console.log(testTwo()); //The message says "Hi from test two"
testOne = function(){ return 'Hi from test one'; }
i = 2;
}
See the difference? The function testOne didn't get hoisted because it was a variable declaration (the variable is named testOne and the declaration is the anonymous function). The variable i gets its declaration hoisted and the initialization actually becomes an assignment down below.
In order to minimize mistakes and reduce the chances of introducing hard to find bugs in your code, always declare your variables at the top of your function and declare your functions next, before you need to use them. This reduces the chances of a misunderstanding about what is going on in your code.
It is possible to shorten a long namespace using the with statement. For instance, this is technically correct syntax:
with (myNamespace.parent.child.person) {
firstName = 'Jon';
lastName = 'Smyth';
}
That is equivalent of typing the following:
myNamespace.parent.child.person.firstName = 'Jon';
myNamespace.parent.child.person.lastName = 'Smyth';
The problem is that there are times when this goes badly wrong. Like many of the other common pitfalls of JavaScript, this will work fine in most circumstances. The better method of handling this issue is to assign the object to a variable and then reference the variable like so:
var p = myNamespace.parent.child.person;
p.firstName = 'Jon';
p.lastName = 'Smyth';
This method works every time, which is what we want out of a coding practice.
Again, the edge cases here will bite you if you aren't careful. Normally, typeof returns the string representation of the value type ('number', 'string', etc.) The problem comes in when evaluating NaN ('number'), null ('object'), and other odd cases. For example, here are a couple of comparisons that might be unexpected:
var i = 10;
i = i - 'taxi'; //Here i becomes NaN
if (typeof(i) === 'number') {
console.log('i is a number');
} else {
console.log('You subtracted a bad value from i');
}
The resulting message would be "i is a number", even though clearly it is NaN (or "Not a Number"). If you were attempting to ensure the passed in value (here it is represented by 'taxi') subtracted from i was a valid number, you would get unexpected results.
While there are times when it is necessary to try to determine the type of a particular value, be sure to understand these (and other) peculiarities about typeof that could lead to undesirable results.
Just like the typeof function, the parseInt function has quirks that need to be understood before it is used. There are two major areas that lead to unexpected results. First, if the first character is a number, parseInt will return all of the number characters it finds until it hits a non-numeric character. Here is an example:
parseInt("56"); //Returns 56
parseInt("Joe"); //Returns NaN
parseInt("Joe56"); //Returns NaN
parseInt("56Joe"); //Returns 56
parseInt("21.95"); //Returns 21
Note that last example I threw in there to trip you up. The decimal point is not a valid character in an integer, so just like any other character, parseInt stops evaluating on it. Thus, we get 21 when evaluating 21.95 and no rounding is attempted.
The second pitfall is in the interpretation of the number. It used to be that a string with a leading zero was determined to be a number in octal format. Ecmascript 5 (JavaScript is an implementation of Ecmascript) removed this functionality. Now most numbers will default to base 10 (the most common numbering format). The one exception is a string that starts with "0x". This type of string will be assumed to be a hexadecimal number (base 16) and it will be converted to a base 10 number on output. To specify a number's format, thus ensuring it is properly evaluated, you can include the optional parameter called a radix. Here are some more examples to illustrate these possibilities:
parseInt("08"); //Returns 8 - used to return 0 (base 8)
parseInt("0x12"); //Returns 18 - assumes hexadecimal
parseInt("12", 16); //Returns 18, since base 16 is specified
When you execute a switch statement, each case statement should be concluded by a break statement like so:
switch(i) {
case 1:
console.log('One');
break;
case 2:
console.log('Two');
break;
case 3:
console.log('Three');
break;
default:
console.log('Unknown');
break;
}
If you were to assign the value of 2 to the variable i, this switch statement would log "Two". The language does permit you to allow fall through by omitting the break statement(s) like so:
switch(i) {
case 1:
console.log('One');
break;
case 2:
console.log('Two');
case 3:
console.log('Three');
break;
default:
console.log('Unknown');
break;
}
Now if you passed in a value of 2, you would get two messages, the first one saying "Two" and the second one saying "Three". This can seem to be a desirable solution in certain circumstances. The problem is that this can create false expectations. If you do not see that a break statement is missing, you may add logic that gets fired accidentally. Conversely, you may notice later that a break statement is missing and you might assume this is a bug. The bottom line is that fall through should not be used intentionally in order to keep your logic clean and clear.
The For...In loop works as it is intended to work, but how it works surprises people. The basic overview is that it loops through the attached, enumeration-visible members on an object. It does not simply walk down the index list like a basic for loop does. The following two examples are NOT equivalent:
// The standard for loop
for(var i = 0; i < arr.length; i++) {}
// The for...in loop
for(var i in arr) {}
In some cases, the output will act the same in the above two cases. That does not mean they work the same way. There are three major ways that for...in is different than a standard for loop. These are:
- It iterates over every enumerable property of the object, including properties inherited through the prototype chain, not just the numeric indexes of the array.
- The order in which the properties are visited is not guaranteed.
- The loop variable holds the property name (a string key) rather than a numeric index.
If you fully understand for...in and know that it is the right choice for your specific situation, it can be a good solution. However, for the other 99% of situations, you should use a standard for loop instead. It will be quicker, easier to understand, and less likely to cause weird bugs that are hard to diagnose.
When declaring a variable, always use the var keyword unless you are specifically attaching the variable to an object. Failure to do so attaches your new variable to the global scope (window if you are in a browser). Here is an example to illustrate how this works:
function carDemo() {
var carMake = 'Dodge';
carModel = 'Charger';
}
console.log(carMake); //Error: carMake is not defined, since it only exists inside the carDemo function's scope
console.log(carModel); //Charger, since this variable has been implicitly attached to window
The declaration of the carModel variable is the equivalent of saying window.carModel = 'Charger';. This clogs up the global scope and endangers your other JavaScript code blocks, since you might inadvertently change the value of a variable somewhere else.
JavaScript is rather flexible with what it allows you to do. This isn't always a good thing. For instance, when you create a function, you can specify that one of the parameters be named arguments. This will overwrite the arguments object that every function is given by inheritance. This is an example of a special word that isn't truly reserved. Here is an example of how it would work:
// This function correctly accesses the inherited
// arguments parameter
function CorrectWay() {
for(var i = 0; i < arguments.length; i++) {
console.log(arguments[i]);
}
}
// You should never name a parameter after
// a reserved or special word like "arguments"
function WrongWay(arguments) {
for(var i = 0; i < arguments.length; i++) {
console.log(arguments[i]);
}
}
// Outputs 'hello' and 'hi'
CorrectWay('hello', 'hi');
// Outputs 'h', 'e', 'l', 'l', and 'o'
WrongWay('hello', 'hi');
There are also reserved words that will cause you issues when you attempt to run your application. A complete listing of these words can be found at the Mozilla Developer Network. While there are work-arounds to use some of these words, avoid doing so if at all possible. Instead, use key words that won't conflict with current or potential future reserved or special words.
When I originally developed my list of best practices, this one was so obvious I overlooked it. Fortunately Daniele Rota Nodari pointed it out to me. Keeping a consistent standard is important to writing easily understandable code. Matching the coding style of the application you are working in should become second nature, even if that means changing your personal style for the duration of the project. When you get the opportunity to start a project, make sure that you have already established a personal coding style that you can apply in a consistent manner.
While being inconsistent with how you write your code won’t necessarily add bugs into your application, it does make your code harder to read and understand. The harder code is to read and understand, the more likely it is that someone will make a mistake. A good post on JavaScript coding styles and consistency in applying them can be found here:. The bottom line here is that you need to write consistent code. If you bring snippets into your application, format them to match your existing style. If you are working in someone else’s application, match your code to the existing style.
As with any software development language, reading the code of other developers will help you improve your own skills. Find a popular open source library and peruse the code. Figure out what they are doing and then identify why they chose to do things that way. If you can't figure it out, ask someone. Push yourself to learn new ways to attack common problems.
JavaScript is not C#. It is not Java or Java-lite. It is its own language. If you treat it as such, you will have a much easier time navigating its particular peculiarities. As you may have noticed, the common theme throughout many of these best practices is that there are hidden pitfalls that can be avoided by simply modifying how you approach certain problems. Little formatting and layout techniques can make a big difference in the success of your project.
Before I finish, I wanted to point out that there are a number of best practices related to JavaScript in HTML that I have not mentioned. For instance, a simple one is to include your JavaScript files at the bottom of your body tag. This omission is intentional. While I've made passing references to both the browser and Visual Studio, the above tips are purely about JavaScript. I will be writing a separate article that covers JavaScript in the browser.
Thus concludes my attempt at compiling a list of JavaScript best practices. Please be sure to let me know if you have others that you think should be added to the list.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
|
https://www.codeproject.com/articles/580165/javascript-best-practices?fid=1830910&df=10000&mpp=10&sort=position&spc=relaxed&select=4550345&tid=4548311
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Some time ago, I was musing about current user interface design trends. I had been doing a lot of development at that time and VSS was one frequently used app. I started thinking about the VSS message box, you know the one which has six buttons like "Replace", "Merge", "Leave" etc. and the "Apply to all items" checkbox. If we need to implement something like that today, we would have to roll our own message box for each such message box, well, that's just a waste of time. So I started thinking about a reusable message box that supported stuff like custom buttons, a "Don't ask me again" feature, etc. I did find some articles about custom message boxes but most of them were implemented using hooks. Well, I didn't like that too much and would never use such solutions in a production environment. So I decided to create a message box from scratch and provide the functionality that I needed. I also wanted to expose the functionality in an easy to use manner. This article describes some of the hurdles and interesting things I discovered while implementing this custom message box. The source code accompanying this article implements a custom message box that:
The MessageBoxEx component can be used as is in your applications to show message boxes where the standard message box won't do. Also, it can be used as a starting point for creating your own custom message box.
Let's take a look at what all is required if you need to implement a message box that duplicates the functionality provided by default.
The message box dynamically resizes itself to best fit its content. The factors that determine the size of the message box are message text, caption text and number of buttons. Also, I discovered that it imposes some limits on its size, both horizontally and vertically. So no matter how long the text of the message box is, the message box will never extend beyond your screen area, in fact, it does not even come close to covering the entire screen area. The message box also displays itself in the center of the screen.
So we need to first determine the maximum size for the message box. This can be done by using the SystemInformation class.
_maxWidth = (int)(SystemInformation.WorkingArea.Width * 0.60);
_maxHeight = (int)(SystemInformation.WorkingArea.Height * 0.90);
So the message box has a max width of 60% of the screen width and max height of 90% of the screen height.
For fitting the size of the message box to its contents, we can make use of the Graphics.MeasureString() method.
/// <summary>
/// Measures a string using the Graphics object for this form with
/// the specified font
/// </summary>
/// <param name="str">The string to measure</param>
/// <param name="maxWidth">The maximum width
/// available to display the string</param>
/// <param name="font">The font with which to measure the string</param>
/// <returns></returns>
private Size MeasureString(string str, int maxWidth, Font font)
{
Graphics g = this.CreateGraphics();
SizeF strRectSizeF = g.MeasureString(str, font, maxWidth);
g.Dispose();
return new Size((int)Math.Ceiling(strRectSizeF.Width),
(int)Math.Ceiling(strRectSizeF.Height));
}
The above code is used to determine the size of the various elements in the message box. Once we have the size required by each of the elements, we determine the optimal size for the form and then layout all the elements in the form. The code for determining the optimal size is in the method MessageBoxExForm.SetOptimumSize(), and the code for laying out the various elements is in MessageBoxExForm.LayoutControls().
One interesting thing is that the font of the caption is determined by the system. Thus we cannot use the Form's Font property to measure the size of the caption string. To get the font of the caption, we can use the Win32 API SystemParametersInfo.
private Font GetCaptionFont()
{
NONCLIENTMETRICS ncm = new NONCLIENTMETRICS();
ncm.cbSize = Marshal.SizeOf(typeof(NONCLIENTMETRICS));
try
{
bool result = SystemParametersInfo(SPI_GETNONCLIENTMETRICS,
ncm.cbSize, ref ncm, 0);
if(result)
{
return Font.FromLogFont(ncm.lfCaptionFont);
}
else
{
int lastError = Marshal.GetLastWin32Error();
return null;
}
}
catch(Exception /*ex*/)
{
//System.Console.WriteLine(ex.Message);
}
return null;
}
private const int SPI_GETNONCLIENTMETRICS = 41;
private const int LF_FACESIZE = 32;
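The LOGFONT declaration itself is missing from the listing above (it was lost from the original article text). The standard Win32 interop definition, reconstructed here from the usual P/Invoke declaration rather than copied verbatim from the original download, looks like this; note that the numeric fields are int, which is exactly the long-versus-int issue described below:
[StructLayout(LayoutKind.Sequential, CharSet=CharSet.Auto)]
private struct LOGFONT
{
    // The numeric fields are 32-bit ints in C#, even though the
    // native documentation describes them as LONG values.
    public int lfHeight;
    public int lfWidth;
    public int lfEscapement;
    public int lfOrientation;
    public int lfWeight;
    public byte lfItalic;
    public byte lfUnderline;
    public byte lfStrikeOut;
    public byte lfCharSet;
    public byte lfOutPrecision;
    public byte lfClipPrecision;
    public byte lfQuality;
    public byte lfPitchAndFamily;
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst=LF_FACESIZE)]
    public string lfFaceName;
}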
[StructLayout(LayoutKind.Sequential, CharSet=CharSet.Auto)]
private struct NONCLIENTMETRICS
{
public int cbSize;
public int iBorderWidth;
public int iScrollWidth;
public int iScrollHeight;
public int iCaptionWidth;
public int iCaptionHeight;
public LOGFONT lfCaptionFont;
public int iSmCaptionWidth;
public int iSmCaptionHeight;
public LOGFONT lfSmCaptionFont;
public int iMenuWidth;
public int iMenuHeight;
public LOGFONT lfMenuFont;
public LOGFONT lfStatusFont;
public LOGFONT lfMessageFont;
}
[DllImport("user32.dll", SetLastError=true, CharSet=CharSet.Auto)]
private static extern bool SystemParametersInfo(int uiAction,
int uiParam, ref NONCLIENTMETRICS ncMetrics, int fWinIni);
One interesting thing that happened while I was working on getting the caption font was with the definition of the LOGFONT structure. You see in MSDN documentation, the first five fields of the LOGFONT structure were declared as type long. I blindly copied the definition from the MSDN documentation and my call to SystemParametersInfo always returned false. After banging my head for four hours trying to figure out what the problem was, I came across a code snippet that used SystemParametersInfo. I downloaded that snippet and it worked perfectly on my machine. On further inspection, I noticed that the size of the structure I was passing to SystemParametersInfo was different from what the code snippet had. And then the lights came on, long should have been mapped to int...aaaargh.
Another interesting thing that I had never really noticed about the message box was that if you don't have a Cancel button in your message box, the Close button on the top right is disabled. You can check this by showing a message box with "Yes", "No" buttons only. Not only is the Close button disabled but the system menu also does not show a Close option. So, that called for some more P/Invoke magic to disable the Close button if more than one button was present and there were no Cancel buttons. Of course, since the buttons themselves are custom, each button has a IsCancelButton property that you can set if you want to make the button a Cancel button.
[DllImport("user32.dll", CharSet=CharSet.Auto)]
private static extern IntPtr GetSystemMenu(IntPtr hWnd, bool bRevert);
[DllImport("user32.dll", CharSet=CharSet.Auto)]
private static extern bool EnableMenuItem(IntPtr hMenu, uint uIDEnableItem,
uint uEnable);
private const int SC_CLOSE = 0xF060;
private const int MF_BYCOMMAND = 0x0;
private const int MF_GRAYED = 0x1;
private const int MF_ENABLED = 0x0;
private void DisableCloseButton(Form form)
{
try
{
EnableMenuItem(GetSystemMenu(form.Handle, false),
SC_CLOSE, MF_BYCOMMAND | MF_GRAYED);
}
catch(Exception /*ex*/)
{
//System.Console.WriteLine(ex.Message);
}
}
The above code disables the Close button, and also disables the Alt+F4 and Close option from the system menu.
Initially, I had obtained all the standard message box icons from various system files, using ResHacker. After I had posted the article, Carl pointed out the SystemIcons enumeration which provides all the standard system icons. So now, instead of embedding the standard icons into the message box, I am using the SystemIcons enumeration to draw the standard icons. Which means that on Windows XP, instead of the old embedded icon, the real system icon is shown. An interesting problem that came up was that initially I was using a PictureBox control to display the icon. Now since it can only take Image objects, I tried converting the icon returned by SystemIcons to a Bitmap via the Icon.ToBitmap() method. This worked alright for Win2K icons, but on XP where the icons had an alpha channel, the icons came out looking terrible. Next, I tried manually creating a bitmap and then painting the icon onto the bitmap using Graphics.DrawIcon(), that too gave the same results. So finally, I had to draw the icon directly on the surface of the message box and drop the picture box.
Another interesting thing that I noticed was that out of the eight enumeration values in MessageBoxIcon, only four had unique values. Thus, Asterisk = Information, Error = Hand, Exclamation = Warning and Hand = Stop. The difference I believe is in the support for Compact Framework.
When I was almost finished with my implementation, I realized that my message box made no sound when it displayed. I knew that the sounds were configurable via the Control Panel so I could not embed the sounds in the library. Fortunately, there is an API called MessageBeep which is exactly what I required. It takes only one parameter which is an integer representing the icon that is being displayed for the message box.
Below is the code that plays the alerts whenever a message box is popped:
[DllImport("user32.dll", CharSet=CharSet.Auto)]
private static extern bool MessageBeep(uint type);
if(_playAlert)
{
if(_standardIcon != MessageBoxIcon.None)
{
MessageBeep((uint)_standardIcon);
}
else
{
MessageBeep(0 /*MB_OK*/);
}
}
The interesting part in the design was how to implement the "Don't ask me again" a.k.a. SaveUserResponse feature. I didn't want that the client code be littered with if statements checking if a saved response was available. So I decided that the client code should always call MessageBoxEx.Show() and if the user had saved a response then that response should be returned by the call rather than the dialog actually popping up. The next problem to handle was message box identity, how do I identify that the same message box is being invoked so that I can lookup if a user has saved a response to that message box? One solution would have been to create a hash of the message text, caption text, buttons etc. to identify the message box. The problem here was that in cases where we didn't want to use the response saved by the user, this approach would fail. Another big disadvantage was that there was no way to undo the saved response; once a user made a choice, he had to stick to it for the entire process lifetime.
The approach I have used is to have a MessageBoxExManager that manages all message boxes. Basically, all message boxes are created with a name. The name can be used to retrieve the message box at a later stage and invoke it. The name can also be used to reset the saved response of the user. One other functionality which I have exposed via the MessageBoxExManager is the ability to persist saved responses. Although there is no implementation for this right now, it can be implemented very easily, only the hashtable containing the saved responses need to be serialized.
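As a rough illustration of how little is involved (this is not the library's actual implementation, and the _savedResponses field name is hypothetical), the persistence methods could simply serialize the hashtable of saved responses with the BinaryFormatter:
// Requires System.Collections, System.IO and
// System.Runtime.Serialization.Formatters.Binary
public static void WriteSavedResponses(Stream stream)
{
    // _savedResponses is assumed to map message box names to saved results
    BinaryFormatter formatter = new BinaryFormatter();
    formatter.Serialize(stream, _savedResponses);
}

public static void ReadSavedResponses(Stream stream)
{
    BinaryFormatter formatter = new BinaryFormatter();
    _savedResponses = (Hashtable)formatter.Deserialize(stream);
}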
This means that a message box once created can be reused. If it is not required anymore, then it can be disposed using the MessageBoxExManager.DeleteMessageBox() method; or if you want to create and show a one time message box, then you can pass null in the call to MessageBoxExManager.CreateMessageBox(). If a message box is created with a null name, then it is automatically disposed after the first call to MessageBoxEx.Show().
Below is the public interface for the MessageBoxExManager, along with explanations:
/// <summary>
/// Manages a collection of MessageBoxes. Basically manages the
/// saved response handling for messageBoxes.
/// </summary>
public class MessageBoxExManager
{
/// <summary>
/// Creates a new message box with the specified name. If null is specified
/// in the message name then the message
/// box is not managed by the Manager and
/// will be disposed automatically after a call to Show()
/// </summary>
/// <param name="name">The name of the message box</param>
/// <returns>A new message box</returns>
public static MessageBoxEx CreateMessageBox(string name);
/// <summary>
/// Gets the message box with the specified name
/// </summary>
/// <param name="name">The name of the message box to retrieve</param>
/// <returns>The message box
/// with the specified name or null if a message box
/// with that name does not exist</returns>
public static MessageBoxEx GetMessageBox(string name);
/// <summary>
/// Deletes the message box with the specified name
/// </summary>
/// <param name="name">The name of the message box to delete</param>
public static void DeleteMessageBox(string name);
/// <summary>
/// Persists the saved user responses to the stream
/// </summary>
public static void WriteSavedResponses(Stream stream);
/// <summary>
/// Reads the saved user responses from the stream
/// </summary>
public static void ReadSavedResponses(Stream stream)
/// <summary>
/// Reset the saved response for the message box with the specified name.
/// </summary>
/// <param name="messageBoxName">The name of the message box
/// whose response is to be reset.</param>
public static void ResetSavedResponse(string messageBoxName);
/// <summary>
/// Resets the saved responses for all message boxes
/// that are managed by the manager.
/// </summary>
public static void ResetAllSavedResponses();
}
Another design decision was regarding how to expose the MessageBoxEx class itself. Although the MessageBoxEx is a Form, I did not want to expose it as a Form for two reasons: one was to abstract away the implementation details and the second was to reduce intellisense clutter while working with the class. Thus, MessageBoxEx is a proxy to the real Form which is implemented in MessageBoxExForm.
Below is the public interface for MessageBoxEx:
/// <summary>
/// An extended MessageBox with lot of customizing capabilities.
/// </summary>
public class MessageBoxEx
{
/// <summary>
/// Sets the caption of the message box
/// </summary>
public string Caption
/// <summary>
/// Sets the text of the message box
/// </summary>
public string Text
/// <summary>
/// Sets the icon to show in the message box
/// </summary>
public Icon CustomIcon
/// <summary>
/// Sets the icon to show in the message box
/// </summary>
public MessageBoxExIcon Icon
/// <summary>
/// Sets the font for the text of the message box
/// </summary>
public Font Font
/// <summary>
/// Sets or Gets the ability of the user to save his/her response
/// </summary>
public bool AllowSaveResponse
/// <summary>
/// Sets the text to show to the user when saving his/her response
/// </summary>
public string SaveResponseText
/// <summary>
/// Sets or Gets whether the saved response, if available, should be used
/// </summary>
public bool UseSavedResponse
/// <summary>
/// Sets or Gets whether an alert sound
/// is played while showing the message box
/// The sound played depends on the Icon selected for the message box
/// </summary>
public bool PlayAlsertSound
/// <summary>
/// Sets or Gets the time in milliseconds
/// for which the message box is displayed
/// </summary>
public int Timeout
/// <summary>
/// Controls the result that will be returned when the message box times out
/// </summary>
public TimeoutResult TimeoutResult
/// <summary>
/// Shows the message box
/// </summary>
/// <returns></returns>
public string Show()
/// <summary>
/// Shows the message box with the specified owner
/// </summary>
/// <param name="owner"></param>
/// <returns></returns>
public string Show(IWin32Window owner)
/// <summary>
/// Add a custom button to the message box
/// </summary>
/// <param name="button">The button to add</param>
public void AddButton(MessageBoxExButton button)
/// <summary>
/// Add a custom button to the message box
/// </summary>
/// <param name="text">The text of the button</param>
/// <param name="val">The return value
/// in case this button is clicked</param>
public void AddButton(string text, string val)
/// <summary>
/// Add a standard button to the message box
/// </summary>
/// <param name="buttons">The standard button to add</param>
public void AddButton(MessageBoxExButtons button)
/// <summary>
/// Add standard buttons to the message box.
/// </summary>
/// <param name="buttons">The standard buttons to add</param>
public void AddButtons(MessageBoxButtons buttons)
}
Also for convenience, the standard message box buttons are available as an enumeration which can be used in AddButton().
/// <summary>
/// Standard MessageBoxEx buttons
/// </summary>
public enum MessageBoxExButtons
{
Ok = 0,
Cancel = 1,
Yes = 2,
No = 4,
Abort = 8,
Retry = 16,
Ignore = 32,
}
Also, the results of these standard buttons are available as constants.
/// <summary>
/// Standard MessageBoxEx results
/// </summary>
public struct MessageBoxExResult
{
public const string Ok = "Ok";
public const string Cancel = "Cancel";
public const string Yes = "Yes";
public const string No = "No";
public const string Abort = "Abort";
public const string Retry = "Retry";
public const string Ignore = "Ignore";
public const string Timeout = "Timeout";
}
Using the code is pretty straightforward. Just add the MessageBoxExLib project to your application, and you're ready to go. Below is some code that shows how to create and display a standard message box with the option to save the user's response.
MessageBoxEx msgBox = MessageBoxExManager.CreateMessageBox("Test");
msgBox.Caption = "Question";
msgBox.Text = "Do you want to save the data?";
msgBox.AddButtons(MessageBoxButtons.YesNo);
msgBox.Icon = MessageBoxIcon.Question;
msgBox.SaveResponseText = "Don't ask me again";
msgBox.Font = new Font("Tahoma",11);
string result = msgBox.Show();
Here is the resulting message box:
Here is some code that demonstrates how you can use your own custom buttons with tooltips in your message box:
MessageBoxEx msgBox = MessageBoxExManager.CreateMessageBox("Test2");
msgBox.Caption = "Question";
msgBox.Text = "Do you want to save the data?";
MessageBoxExButton btnYes = new MessageBoxExButton();
btnYes.Text = "Yes";
btnYes.Value = "Yes";
btnYes.HelpText = "Save the data";
MessageBoxExButton btnNo = new MessageBoxExButton();
btnNo.Text = "No";
btnNo.Value = "No";
btnNo.HelpText = "Do not save the data";
msgBox.AddButton(btnYes);
msgBox.AddButton(btnNo);
msgBox.Icon = MessageBoxExIcon.Question;
msgBox.SaveResponseText = "Don't ask me again";
msgBox.AllowSaveResponse = true;
msgBox.Font = new Font("Tahoma",8);
string result = msgBox.Show();
While showing the message box, a timeout value can be specified; if the user does not select a response within the specified time frame, then the message box will be automatically dismissed. The result that is returned when the message box times out can be specified using the enumeration shown below:
/// <summary>
/// Enumerates the kind of results that can be returned when a
/// message box times out
/// </summary>
public enum TimeoutResult
{
/// <summary>
/// On timeout the value associated with
/// the default button is set as the result.
/// This is the default action on timeout.
/// </summary>
Default,
/// <summary>
/// On timeout the value associated with
/// the cancel button is set as the result.
/// If the messagebox does not have a cancel button
/// then the value associated with
/// the default button is set as the result.
/// </summary>
Cancel,
/// <summary>
/// On timeout MessageBoxExResult.Timeout is set as the result.
/// </summary>
Timeout
}
Here is a code snippet that shows how you can use the timeout feature:
MessageBoxEx msgBox = MessageBoxExManager.CreateMessageBox(null);
msgBox.Caption = "Question";
msgBox.Text = "Do you want to save the data?";
msgBox.AddButtons(MessageBoxButtons.YesNo);
msgBox.Icon = MessageBoxExIcon.Question;
//Wait for 30 seconds for the user to respond
msgBox.Timeout = 30000;
msgBox.TimeoutResult = TimeoutResult.Timeout;
string result = msgBox.Show();
if(result == MessageBoxExResult.Timeout)
{
//Take action to handle the timeout
}
After my initial posting of this article, Carl and Frank pointed out that the message box could also be useful in localized applications. Now, initially I had thought that I would be able to access the localized strings for standard buttons like "OK", "Cancel" etc. from the OS itself, it seems that there is no such documented way, I even talked to Michael Kaplan and he confirmed that there is no way to get those strings. Now instead of thinking of some hack to get the strings from the OS, I decided to use a simple solution, I moved the strings into a .resx file and used that to show the text for standard buttons based on the CurrentUICulture property of the current thread. I've included resources for French and German using BabelFish. Here is an example of a message box that was created after setting the CurrentUICulture to "fr".
MessageBoxEx msgBox = MessageBoxExManager.CreateMessageBox(null);
msgBox.Caption = "Question";
msgBox.Text = "Voulez-vous sauver les données ?";
msgBox.AddButtons(MessageBoxButtons.YesNoCancel);
msgBox.Icon = MessageBoxExIcon.Question;
msgBox.Show();
The resulting message box is shown below:
|
https://www.codeproject.com/articles/9656/dissecting-the-messagebox?fid=155440&df=90&mpp=25&sort=position&spc=relaxed&select=1801013&tid=3184131
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
I was assigned to create a program that lets the user enter the total rainfall for each of the 12 months, into an array of doubles. Then it needed to calculate and display the total rainfall for the year, the average monthly rainfall, and the months with the highest and lowest amounts. Then sort and display the months based on their descending rainfall amount.
Here's what I'm really having trouble with. How do I tell what element position the data came out of from my double array? I need to know that, at least I think I do to tell my char array which element to display for the name of the month with the right rainfall amount.
Can anyone help me please? I'm new to this and am really stuck. Any info would be greatly appreciated!!
// This program allows the user to enter the amount of rainfall for each month.
// It can then calculate the total rainfall for the year as well as the average monthly rainfall,
// and the months with the highest and lowest amounts of rain.
#include <iostream>
#include <iomanip>
using namespace std;

// Function Prototypes
double sumArray(double[], int);
double getHighest(double[], int);
double getLowest(double[], int);
void sortArray(double [], int);
void showArray(double [], int);

int main()
{
    double total;
    double average;
    double highest;
    double lowest;
    const int MONTHS = 12;
    const int cols = 2;
    const int string_SIZE = 10;
    double values[MONTHS]; // An Array for the values

    // The rain Array contains the months for the data to be entered
    char rain[MONTHS][string_SIZE] = { "January", "February", "March", "April",
                                       "May", "June", "July", "August",
                                       "September", "October", "November", "December" };

    // dynamically ask the user for input.
    for (int count = 0; count < MONTHS; count++)
    {
        cout << "Enter the amount of rainfall for " << rain[count] << " in inches." << endl;
        cin >> values[count];
    }
    cout << "\n";
    cout << "\n";

    // Get the total rainfall.
    total = sumArray(values, MONTHS);

    // Calculate the average.
    average = total / MONTHS;

    // Find the highest sales amount.
    highest = getHighest(values, MONTHS);

    // Find the lowest sales amount.
    lowest = getLowest(values, MONTHS);

    // Display the total
    cout << "The total amount of rainfall for the year is: " << setprecision(3) << total << " inches." << endl;
    cout << "\n";
    cout << "\n";

    // Display the average
    cout << "The average amount of rainfall for the year is: " << setprecision(3) << average << " inches." << endl;
    cout << "\n";
    cout << "\n";

    // Display the highest
    cout << "The highest amount of rain was in " << " with: " << highest << " inches." << endl;
    cout << "\n";
    cout << "\n";

    // Display the lowest
    cout << "The lowest amount of rain is: " << lowest << " inches." << endl;
    cout << "\n";
    cout << "\n";

    // Display the values.
    cout << "The unsorted values are:\n";
    showArray(values, MONTHS);

    // Sort the values.
    sortArray(values, MONTHS);

    // Display them again.
    cout << "The sorted values are:\n";
    showArray(values, MONTHS);
    return 0;
}

double sumArray(double array[], int size)
{
    double total = 0; // Accumulator
    for (int count = 0; count < size; count++)
        total += array[count];
    return total;
}

double getHighest(double array[], int size)
{
    double highest;
    highest = array[0];
    for (int count = 1; count < size; count++)
    {
        if (array[count] > highest)
            highest = array[count];
    }
    return highest;
}

double getLowest(double array[], int size)
{
    double lowest;
    lowest = array[0];
    for (int count = 1; count < size; count++)
    {
        if (array[count] < lowest)
            lowest = array[count];
    }
    return lowest;
}

//***********************************************************
// Definition of function sortArray                          *
// This function performs an ascending order bubble sort on  *
// array. size is the number of elements in the array.       *
//***********************************************************
void sortArray(double array[], int size)
{
    bool swap;
    int temp;
    do
    {
        swap = false;
        for (int count = 0; count < (size - 1); count++)
        {
            if (array[count] < array[count + 1])
            {
                temp = array[count];
                array[count] = array[count + 1];
                array[count + 1] = temp;
                swap = true;
            }
        }
    } while (swap);
}

//*************************************************************
// Definition of function showArray.                           *
// This function displays the contents of array. size is the  *
// number of elements.                                         *
//*************************************************************
void showArray(double array[], int size)
{
    for (int count = 0; count < size; count++)
        cout << array[count] << " ";
    cout << endl;
}
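One common way to keep track of which month each amount belongs to is to sort a second array in parallel with the rainfall values: fill it with the index of each month (0 through 11), and whenever the sort swaps two rainfall values, swap the two indexes as well. The sketch below shows the idea only and is not a drop-in replacement for the program above (note that the temporary used for the rainfall values must be a double, not an int, or the amounts get truncated):
#include <iostream>
using namespace std;

// Bubble sort the values in descending order and carry the month index along.
void sortParallel(double values[], int order[], int size)
{
    bool swapped;
    do
    {
        swapped = false;
        for (int i = 0; i < size - 1; i++)
        {
            if (values[i] < values[i + 1])
            {
                double tempVal = values[i];   // double, so nothing is truncated
                values[i] = values[i + 1];
                values[i + 1] = tempVal;

                int tempIdx = order[i];       // the month index moves with its value
                order[i] = order[i + 1];
                order[i + 1] = tempIdx;
                swapped = true;
            }
        }
    } while (swapped);
}

int main()
{
    double values[3] = { 2.5, 7.1, 4.3 };
    int order[3] = { 0, 1, 2 };              // parallel month indexes
    sortParallel(values, order, 3);
    for (int i = 0; i < 3; i++)
        cout << order[i] << ": " << values[i] << endl;  // month index printed next to its amount
    return 0;
}
With the full program, order[i] can then be used to look up rain[order[i]] when printing the sorted list, so each amount is displayed with the right month name.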
|
https://www.daniweb.com/programming/software-development/threads/75745/c-array-help-needed
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Because RabbitMQ was a new third-party piece of software to be used as a critical component of our system, I wanted to test its integration throughly. That involved multiple tests against a local cluster of three nodes (all running on my local machine), as well as the same tests running against a remote RabbitMQ cluster. The tests involved tearing down, recreating, and configuring the cluster in different ways, and then stress-testing it. Setting up and configuring a remote RabbitMQ cluster involves multiple steps, each normally taking less than a second. But, on occasion, one can take up to 30 seconds. Here is a typical list of the necessary steps for configuring a remote RabbitMQ cluster:
- Shut down every node in the cluster
- Reset the persistent metadata of every node
- Launch every node in isolated mode
- Cluster the nodes together
- Start the application on each node
- Configure virtual hosts, exchanges, queues, and bindings
I created a Python program called Elmer that uses Fabric to remotely interact with the cluster. Due to the way RabbitMQ manages metadata across the cluster, you have to wait for each step to complete for every node in the cluster before you can execute the next step; and checking the result of each step requires parsing the console output of shell commands (yuck!). Couple that with node-specific issues and network hiccups and you get a process with high time variation. In my tests, in addition to graceful shutdown and restart of the whole cluster, I often want to violently kill or restart a node.
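For context, one remote step in a tool like Elmer might look roughly like the Fabric 1.x sketch below. The host names, the exact rabbitmqctl output strings, and the error handling are illustrative assumptions, not the actual Elmer source:
# Hypothetical sketch of one cluster step using Fabric 1.x (fabric.api)
from fabric.api import env, run, settings

env.hosts = ['rabbit1.example.com', 'rabbit2.example.com', 'rabbit3.example.com']

def stop_app():
    """Stop the RabbitMQ application on one node and check the console output."""
    with settings(warn_only=True):        # don't abort the whole run on a non-zero exit
        output = run('rabbitmqctl stop_app')
    # Success has to be inferred from the text the tool prints (the "yuck" part).
    if 'done' not in output.lower():
        raise RuntimeError('stop_app did not complete: %s' % output)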
From an operations point of view, this is not a problem. Launching a cluster, or replacing a node, are rare events and it's OK if it takes a few seconds. It is quite a different story for a developer who want to run a few dozen cluster tests after each change. Another complication is that some use cases require testing unresponsive nodes, which can lead to the halting problem (is it truly unresponsive or just slow?). After suffering through multiple test runs where each test was blocked for a long time waiting for the remote cluster, I ended up with the following approach:
- Elmer (the Python/Fabric cluster remote control program) exposes every step of the process
- A C# class called Runner can launch Python scripts and Fabric commands and capture the output
- A C# class called RabbitMQ utilizes the Runner class to control the cluster
- A C# class called Wait can dynamically wait for an arbitrary operation to complete
The key was the Wait class. The Wait class has a static method called Wait.For() that allows you to wait for an arbitrary operation to complete until a certain timeout. If the operation completes quickly, you will not have to wait for the time to expire, and Wait will bail out quickly. If the operation doesn't complete in time, Wait.For() will return after the timeout expires.

Wait.For() accepts a duration (either a TimeSpan or a number of milliseconds) and a function that returns bool. It also has a Nap member variable that defaults to 50 milliseconds. When you call Wait.For(), it calls your function in a loop until it returns true or until the duration expires (napping between calls). If the function returns true, then Wait.For() returns true; but if the duration expires, it returns false.
public class Wait
{
    public static TimeSpan Nap = TimeSpan.FromMilliseconds(50);

    public static bool For(TimeSpan duration, Func<bool> func)
    {
        var end = DateTime.Now + duration;
        if (end <= DateTime.Now)
        {
            return false;
        }
        while (DateTime.Now < end)
        {
            if (func.Invoke())
            {
                return true;
            }
            Thread.Sleep(Nap);
        }
        return false;
    }

    public static bool For(int duration, Func<bool> func)
    {
        return For(TimeSpan.FromMilliseconds(duration), func);
    }
}
Now, you can efficiently wait for processes that may take highly variable times to complete. Here is how I use Wait.For() to check whether a RabbitMQ node is stopped:

private bool IsRabbitStopped()
{
    var ok = Wait.For(TimeSpan.FromSeconds(10), () =>
    {
        var s = rmq("status", displayOutput: false);
        return !s.Contains("{mnesia,") && !s.Contains("{rabbit,");
    });
    return ok;
}
I call Wait.For() with a duration of 10 seconds, which I wouldn't want to block on every time I check whether a node is down (since it happens all the time). The anonymous function I pass in calls the rmq() method with the status command. The rmq() method runs the status command on the remote cluster, then returns the command-line output as text. Here is the output when the Rabbit is running:
Status of node rabbit@GIGI ...
[{pid,8420},
 {running_applications,
     [{rabbitmq_management,"RabbitMQ Management Console","2.8.2"},
      {xmerl,"XML parser","1.3"},
      {rabbitmq_management_agent,"RabbitMQ Management Agent","2.8.2"},
      {amqp_client,"RabbitMQ AMQP Client","2.8.2"},
      {rabbit,"RabbitMQ","2.8.2"},
      {os_mon,"CPO CXC 138 46","2.2.8"},
      {sasl,"SASL CXC 138 11","2.2"},
      {rabbitmq_mochiweb,"RabbitMQ Mochiweb Embedding","2.8.2"},
      {webmachine,"webmachine","1.7.0-rmq2.8.2-hg"},
      {mochiweb,"MochiMedia Web Server","1.3-rmq2.8.2-git"},
      {inets,"INETS CXC 138 49","5.8"},
      {mnesia,"MNESIA CXC 138 12","4.6"},
      {stdlib,"ERTS CXC 138 10","1.18"},
      {kernel,"ERTS CXC 138 10","2.15"}]},
 {os,{win32,nt}},
 {erlang_version,"Erlang R15B (erts-5.9) [smp:8:8] [async-threads:30]\n"},
 {memory,
     [{total,19703792},
      {processes,6181847},
      {processes_used,6181832},
      {system,13521945},
      {atom,495069},
      {atom_used,485064},
      {binary,81216},
      {code,9611946},
      {ets,628852}]},
 {vm_memory_high_watermark,0.10147532588839969},
 {vm_memory_limit,858993459},
 {disk_free_limit,8465047552},
 {disk_free,15061905408},
 {file_descriptors,
     [{total_limit,924},{total_used,4},{sockets_limit,829},{sockets_used,2}]},
 {processes,[{limit,1048576},{used,181}]},
 {run_queue,0},
 {uptime,62072}]
...done.
The function is making sure that the mnesia and rabbit components don't show up in the output. Note that if the node is still up, the function will return false and Wait.For() will continue to execute it multiple times.

Wait.For() decreases the sensitivity of my tests to occasional spikes in response time (I can Wait.For() longer without slowing down the test in the common case), and has reduced the runtime of the whole test suite from minutes to seconds.
Conclusion
The sum total of this series of articles has shown a variety of design principles and testing techniques to deal with hard-to-test systems. Nontrivial code will always contain bugs, but deep testing is guaranteed to reduce the number of undiscovered issues.
Gigi Sayfan specializes in cross-platform object-oriented programming in C/C++/ C#/Python/Java with emphasis on large-scale distributed systems, and is a long-time contributor to Dr. Dobb's.
|
http://www.drdobbs.com/windows/net-development-on-linux/windows/testing-python-and-c-code/240147927?pgno=3
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
C++ reference is a name that acts as an alternative name for a previously defined variable.
For example, if you make Bob a reference to the Robert variable, you can use Bob and Robert interchangeably.
The main use for a reference variable is as a formal argument to a function.
If you use a reference as an argument, the function works with the original data instead of with a copy.
References provide a convenient alternative to pointers for processing large structures with a function.
C and C++ use the & symbol to indicate the address of a variable.
C++ uses & symbol to declare references.
For example, to make robert an alternative name for the variable bob, you could do the following:
int bob; int & robert = bob; // makes robert an alias for bob
In this context, & is not the address operator.
Instead, it serves as part of the type identifier.
int & means reference-to-int.
The reference declaration allows you to use bob and robert interchangeably.
Both refer to the same value and the same memory location.
#include <iostream>
using namespace std;

int main(){
    int bob = 101;
    int & robert = bob; // robert is a reference

    cout << "bob = " << bob;
    cout << ", robert = " << robert << endl;

    robert++;
    cout << "bob = " << bob;
    cout << ", robert = " << robert << endl;

    cout << "bob address = " << &bob;
    cout << ", robert address = " << &robert << endl;
    return 0;
}
The code above generates the following result.
& in the following code declares a reference type variable.
int & robert = bob;
& operator in the next statement is the address operator:
cout <<", robert address = " << &robert << endl;
Incrementing robert by one affects both variables.
We can create both a reference and a pointer to refer to bob:
int bob = 101;
int & robert = bob;   // robert a reference
int * pbob = &bob;    // pbob a pointer
Then you could use the expressions robert and *pbob interchangeably with bob, and use the expressions &robert and pbob interchangeably with &bob.
We have to initialize the reference when you declare it.
A reference is rather like a const pointer, you have to initialize it when you create it.
int & robert = bob;
is, in essence, a disguised notation for something like this:
int * const pr = &bob;
Here, the reference robert plays the same role as the expression *pr.
References are often used as function parameters, making a variable name in a function an alias for a variable.
This method of passing arguments is called passing by reference.
The following code shows how to swap with references and with pointers.
#include <iostream>
using namespace std;

void swapr(int & a, int & b); // a, b are aliases for ints
void swapp(int * p, int * q); // p, q are addresses of ints

int main(){
    int my_value1 = 300;
    int my_value2 = 350;

    cout << "my_value1 = $" << my_value1;
    cout << " my_value2 = $" << my_value2 << endl;

    cout << "Using references to swap contents:\n";
    swapr(my_value1, my_value2); // pass variables
    cout << "my_value1 = $" << my_value1;
    cout << " my_value2 = $" << my_value2 << endl;

    cout << "Using pointers to swap contents:\n";
    swapp(&my_value1, &my_value2); // pass addresses of variables
    cout << "my_value1 = $" << my_value1;
    cout << " my_value2 = $" << my_value2 << endl;
    return 0;
}

void swapr(int & a, int & b) // use references
{
    int temp;
    temp = a; // use a, b for values of variables
    a = b;
    b = temp;
}

void swapp(int * p, int * q) // use pointers
{
    int temp;
    temp = *p; // use *p, *q for values of variables
    *p = *q;
    *q = temp;
}
The code above generates the following result.
C++ passes class objects to a function via references.
For instance, you would use reference parameters for the string, ostream, istream, ofstream, and ifstream classes as arguments.
The following code uses the string class.
#include <iostream>
#include <string>
using namespace std;

string my_func(const string & s1, const string & s2);

int main(){
    string input;
    string copy;
    string result;

    cout << "Enter a string: ";
    getline(cin, input);
    copy = input;
    cout << "Your string as entered: " << input << endl;

    result = my_func(input, "***");
    cout << "Your string enhanced: " << result << endl;
    cout << "Your original string: " << input << endl;
    return 0;
}

string my_func(const string & s1, const string & s2){
    string temp;
    temp = s2 + s1 + s2;
    return temp;
}
The code above generates the following result.
|
http://www.java2s.com/Tutorials/C/Cpp_Tutorial/0400__Cpp_Reference_Variables.htm
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
I've come across something odd while writing a Ruby module (a set of helper methods for a Sinatra app). I'm declaring a hash as a constant, with keys as strings. Later, when I attempt to retrieve a value, I get nil. On inspecting the hash, I find that the keys have been converted to symbols. What's going on?
Here's a simplified example:
module HelperModule
RANGES = {
'a' => [1...60],
'b' => [60...90],
'c' => [90..999]
}.freeze
def find_range(key)
RANGES[key] # Returns nil when key is 'a', 'b' or 'c'
end
end
Inspecting RANGES gives {:a=>[1...60], :b=>[60...90], :c=>[90..999]}, even though I never call .to_sym anywhere.
It's something in your environment that alters Hash. Start by looking into RANGES.class.ancestors; also look for refinements (those you will probably have to grep for "using").
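If that answer is right, a few lines of inspection in the failing environment will usually surface the culprit. A minimal debugging sketch: the HelperModule::RANGES name comes from the question, everything else is core Ruby, and the comments only describe what you would expect to see, not captured output.

require 'pp'

pp HelperModule::RANGES.keys.map(&:class)   # String or Symbol?
pp HelperModule::RANGES.class.ancestors     # any unexpected modules mixed into Hash?

# nil means Hash#[] / Hash#[]= are still the built-in C implementations;
# a ["path", line] pair points straight at a monkey patch.
pp Hash.instance_method(:[]).source_location
pp Hash.instance_method(:[]=).source_location

# Refinements will not show up above; grep the code base and gems for "using".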
|
https://codedump.io/share/gesqnNkrXLJx/1/why-is-ruby-symbolising-my-hash-keys
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
JmsTemplate is easy for simple message sending. What if we want to add headers, or intercept or transform the message? Then we have to write more code. So, how do we solve this common task with more configurability in lieu of more code? First, let's review JMS in Spring.
Spring JMS Options
- JmsTemplate – used to send and receive messages inline (a minimal send sketch follows this list)
- Use send()/convertAndSend() methods to send messages
- Use receive()/receiveAndConvert() methods to receive messages. BEWARE: these are blocking methods! If there is no message on the Destination, it will wait until a message is received or times out.
- MessageListenerContainer – Async JMS message receipt by polling JMS Destinations and directing messages to service methods or MDBs
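For comparison, here is a minimal sketch of the kind of inline JmsTemplate send that the rest of this article replaces. It is not taken from the article: the class name and the "my.queue" destination are made up for illustration, while the JmsTemplate calls are standard Spring API.

package com.gordondickens.sijms;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Service;

@Service
public class PlainJmsSender {

    @Autowired
    private JmsTemplate jmsTemplate;

    public void send(Object payload) {
        // convertAndSend() lets the configured MessageConverter serialize the payload
        jmsTemplate.convertAndSend("my.queue", payload);
    }
}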
Both JmsTemplate and MessageListenerContainer have been successfully implemented in Spring applications, but if we have to do something a little different, we introduce new code. What could possibly go wrong?
Future Extensibility?
On many projects new use-cases arise, such as:
- Route messages to different destinations, based on header values or contents?
- Log the message contents?
- Add header values?
- Buffer the messages?
- Improved response and error handling?
- Make configuration changes without having to recompile?
- and more…
Now we have to refactor code and introduce new code and test cases, run it through QA, etc. etc.
A More Configurable Solution!
It is time to graduate from Spring JmsTemplate and play with the big kids. We can easily do this with a Spring Integration flow.
How it is done with Spring Integration
Here we have a diagram illustrating the three simple components with which Spring Integration replaces the JmsTemplate send.
- Create a Gateway interface – an interface defining method(s) that accept the type of data you wish to send and any optional header values.
- Define a Channel – the pipe connecting our endpoints
- Define an Outbound JMS Adapter – sends the message to your JMS provider (ActiveMQ, RabbitMQ, etc.)
Simply inject the gateway into our service classes and invoke its methods, as in the sketch below.
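A hypothetical service class, not shown in the article (the class and method names are made up), illustrating that injection using the MyJmsGateway interface defined later in this article:

package com.gordondickens.sijms;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    @Autowired
    private MyJmsGateway myJmsGateway;

    public void placeOrder(String orderId) {
        // first argument becomes the "myHeaderKey" header, second becomes the payload
        myJmsGateway.sendMyMessage("order.placed", "Order " + orderId + " placed");
    }
}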
Immediate Gains
- Add header & header values via the methods defined in the interface
- Simple invocation of Gateway methods from our service classes
- Multiple Gateway methods
- Configure method level or class level destinations
Future Gains
- Change the JMS Adapter (one-way) to a JMS Gateway (two-way) to process responses from JMS
- We can change the channel to a queue (buffered) channel
- We can wire in a transformer for message transformation
- We can wire in additional destinations, and wire in a “header (key), header value, or content based” router and add another adapter
- We can wire in other inbound adapters receiving data from another source, such as SMTP, FTP, File, etc.
- Wiretap the channel to send a copy of the message elsewhere
- Change the channel to a logging adapter channel which would provide us with logging of the messages coming through
- Add the “message-history” option to our SI configuration to track the message along its route
- and more…
Optimal JMS Send Solution
The Spring Integration Gateway Interface
A Gateway provides one-way or two-way communication with Spring Integration. If the method returns void, it is inherently one-way.
The interface MyJmsGateway has one Gateway method declared, sendMyMessage(). When this method is invoked by your service class, the first argument goes into a message header field named "myHeaderKey" and the second argument becomes the payload.
package com.gordondickens.sijms;

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.Header;

public interface MyJmsGateway {

    @Gateway
    public void sendMyMessage(@Header("myHeaderKey") String s, Object o);
}
Spring Integration Configuration
Because the interface is proxied at runtime, we need to configure the Gateway via XML.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="" xmlns:
    <import resource="classpath:META-INF/spring/amq-context.xml"/>

    <!-- Pickup the @Gateway annotation -->
    <si:annotation-config/>

    <si:poller
        <si:interval-trigger
    </si:poller>

    <!-- Define the channel (pipe) connecting the endpoints -->
    <si:channel

    <!-- Configure the Gateway to Send on the channel -->
    <si:gateway

    <!-- Send message to JMS -->
    <jms:outbound-channel-adapter
</beans>
Sending the Message
package com.gordondickens.sijms;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@ContextConfiguration("classpath:/com/gordondickens/sijms/JmsSenderTests-context.xml")
@RunWith(SpringJUnit4ClassRunner.class)
public class JmsSenderTests {

    @Autowired
    MyJmsGateway myJmsGateway;

    @Test
    public void testJmsSend() {
        myJmsGateway.sendMyMessage("myHeaderValue", "MY PayLoad");
    }
}
Summary
- Simple implementation
- Invoke a method to send a message to JMS – Very SOA eh?
- Flexible configuration
- Reconfigure & restart WITHOUT recompiling – SWEET!
|
https://dzone.com/articles/dont-use-jmstemplate-spring
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Name
Template::Declare::Bricolage - Perlish XML Generation for Bricolage's SOAP API
Synopsis
use Template::Declare::Bricolage;

say bricolage {
    workflow {
        attr { id => 1027 };
        name        { 'Blogs' }
        description { 'Blog Entries' }
        site        { 'Main Site' }
        type        { 'Story' }
        active      { 1 }
        desks {
            desk { attr { start   => 1 }; 'Blog Edit'    }
            desk { attr { publish => 1 }; 'Blog Publish' }
        }
    }
};
Description
It can be a lot of work generating XML for passing to the Bricolage SOAP interface. After experimenting with a number of XML-generating libraries, I got fed up and created this module to simplify things. It's a very simple subclass of Template::Declare that supplies a functional interface to templating your XML. All the XML elements understood by the Bricolage SOAP interface are exported from Template::Declare::TagSet::Bricolage, which you can use independent of this module if you require a bit more control over the output.
But the advantage to using Template::Declare::Bricolage is that it sets up a bunch of stuff for you, so that the usual infrastructure of setting up the templating environment, outputting the top-level <assets> element and the XML namespace, is just handled. You can just focus on generating the XML you need to send to Bricolage.
And the nice thing about Template::Declare's syntax is that it's, well, declarative. Just use the elements you need and it will do the rest. For example, the code from the Synopsis returns:
<assets xmlns=""> <workflow id="1027"> <name>Blogs</name> <description>Blog Entries</description> <site>Main Site</site> <type>Story</type> <active>1</active> <desks> <desk start="1">Blog Edit</desk> <desk publish="1">Blog Publish</desk> </desks> </workflow> </assets>
bricolage {}
In addition to all of the templating functions exported by Template::Declare::TagSet::Bricolage, Template::Declare::Bricolage exports one more function, bricolage. This is the main function that you should use to generate your XML. It starts the XML document with the XML declaration and the top-level <assets> element required by the Bricolage SOAP API. Otherwise, it simply executes the block passed to it. That block should simply use the formatting functions to generate the XML you need for your assets. That's it.
Support
This module is stored in an open GitHub repository,. Feel free to fork and contribute!
Please file bug reports at.
Author
This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
|
https://metacpan.org/pod/Template::Declare::Bricolage
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
NAME
File::stat - by-name interface to Perl's built-in stat() functions
SYNOPSIS
use File::stat;

$st = stat($file) or die "No $file: $!";
if ( ($st->mode & 0111) && ($st->nlink > 1) ) {
    print "$file is executable with lotsa links\n";
}

if ( -x $st ) {
    print "$file is executable\n";
}

use Fcntl "S_IRUSR";
if ( $st->cando(S_IRUSR, 1) ) {
    print "My effective uid can read $file\n";
}

use File::stat qw(:FIELDS);
stat($file) or die "No $file: $!";
if ( ($st_mode & 0111) && ($st_nlink > 1) ) {
    print "$file is executable with lotsa links\n";
}
DESCRIPTION
As of version 1.02 (provided with perl 5.12) the object provides "-X" overloading, so you can call filetest operators (-f, -x, and so on) on it. It also provides a ->cando method, called like
$st->cando( ACCESS, EFFECTIVE )
where ACCESS is one of S_IRUSR, S_IWUSR or S_IXUSR from the Fcntl module, and EFFECTIVE indicates whether to use effective (true) or real (false) ids. The method interprets the mode, uid and gid fields, and returns whether or not the current process would be allowed the specified access.
If you don't want to use the objects, you may import the ->cando method into your namespace as a regular function called stat_cando. This takes an arrayref containing the return values of stat or lstat as its first argument, and interprets it for you.
BUGS
ERRORS
- -%s is not implemented on a File::stat object
The filetest operators -t, -T and -B are not implemented, as they require more information than just a stat buffer.
WARNINGS
These can all be disabled with
no warnings "File::stat";
- File::stat ignores use filetest 'access'
You have tried to use one of the -rwxRWX filetests with use filetest 'access' in effect. File::stat will ignore the pragma, and just use the information in the mode member as usual.
- File::stat ignores VMS ACLs
VMS systems have a permissions structure that cannot be completely represented in a stat buffer, and unlike on other systems the builtin filetest operators respect this. The File::stat overloads, however, do not, since the information required is not available.
NOTE
While this class is currently implemented using the Class::Struct module to build a struct-like class, you shouldn't rely upon this.
AUTHOR
Tom Christiansen
|
https://metacpan.org/pod/release/FLORA/perl-5.17.5/lib/File/stat.pm
|
CC-MAIN-2017-09
|
en
|
refinedweb
|