Managing large datasets on Kaggle without fear of the out-of-memory error
Datatable is a Python package for manipulating large dataframes. It has been created to provide big data support and enable high performance. This toolkit resembles pandas very closely but is more focused on speed. It supports out-of-memory datasets, multi-threaded data processing, and has a flexible API. In the past, we have written a couple of articles that explain in detail how to use datatable for reading, processing, and writing tabular datasets at incredible speed:
- An Overview of Python’s Datatable package
- Speed up your Data Analysis with Python’s Datatable package
These two articles compare datatable's performance with that of the pandas library on several parameters. They also explain how to use datatable for data wrangling and munging, and how its performance compares to other libraries in the same space.
However, this article is mainly focused on people who are interested in using datatable on the Kaggle platform. Of late, many competitions on Kaggle are coming with datasets that are just impossible to read in with pandas alone. We shall see how we can use datatable to read those large datasets efficiently and then convert them into other formats seamlessly.
Currently, datatable is in the beta stage and undergoing active development.
Installation
Kaggle Notebooks are a cloud computational environment that enables reproducible and collaborative analysis. The datatable package is part of Kaggle’s docker image. This means no additional effort is required to install the library on Kaggle. All you have to do is import the library and use it.
import datatable as dt
print(dt.__version__)
0.11.1
However, if you want to install a specific version of the library (or the latest version when available), you can do so with pip. Make sure the internet setting is set to ON in the notebook.
!pip install datatable==0.11.0
If you want to install datatable locally on your system, follow the instructions given in the official documentation.
Usage
Let's now see an example where the benefit of using datatable is clearly visible. The dataset we'll use for the demo is taken from the recent Riiid Answer Correctness Prediction competition on Kaggle. The challenge was to create algorithms for "Knowledge Tracing" by modeling student knowledge over time. In other words, the aim was to accurately predict how students will perform in future interactions.
The train.csv file consists of around a hundred million rows. The data size is ideal for demonstrating the capabilities of the datatable library.
Pandas, unfortunately, throws an out-of-memory error and is unable to handle datasets of this magnitude. Let's try datatable instead, and record the time taken to read the dataset and its subsequent conversion into a pandas dataframe.
1. Reading data in CSV format
The fundamental unit of analysis in datatable is a Frame. It is the same notion as a pandas DataFrame or SQL table, i.e., data arranged in a two-dimensional array with rows and columns.
%%time
# reading the dataset from raw csv file
train = dt.fread("../input/riiid-test-answer-prediction/train.csv").to_pandas()
print(train.shape)
The fread() function above is both powerful and extremely fast. It can automatically detect and parse parameters for most text files, load data from .zip archives or URLs, read Excel files, and much more. Let's check out the first five rows of the dataset.
train.head()
Datatable takes less than a minute to read the full dataset and convert it to pandas.
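As a quick illustration of the flexibility mentioned above, here is a minimal sketch of reading a compressed file while keeping only a couple of columns; the archive name and column names are placeholders, not the competition's actual files:

import datatable as dt

# Hypothetical example: read from a zip archive (a URL works the same way),
# keeping only two columns to cut memory before converting to pandas.
subset = dt.fread("train_archive.zip", columns=["user_id", "answered_correctly"])
subset_df = subset.to_pandas()
print(subset_df.shape)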
2. Reading data in jay format
The dataset can also be saved first in binary (.jay) format and then read in with datatable. The .jay file format is designed explicitly for datatable's use, but it is open for adoption by other libraries or programs.
# saving the dataset in .jay (binary format)
dt.fread("../input/riiid-test-answer-prediction/train.csv").to_jay("train.jay")
Let’s now look at the time taken to read in the jay format file.
%%time
# reading the dataset from .jay format
train = dt.fread("train.jay")
print(train.shape)
It takes less than a second to read the entire dataset in the .jay format. Let’s now convert it into pandas, which is reasonably fast too.
%%time
train = dt.fread("train.jay").to_pandas()
print(train.shape)
Let’s quickly glance over the first few rows of the frame:
train.head()
Here we have a pandas dataframe that can be used for further data analysis. Again, the time taken for the conversion was a mere 27 seconds.
Conclusion
In this article, we saw how the datatable package shines when working with big data. With its emphasis on big data support, datatable offers many benefits and can reduce the time taken to perform wrangling tasks on a dataset. Datatable is an open-source project, and hence it is open to contributions and collaborations to improve it and make it even better. We'd love to have you try it out and use it in your projects. If you have questions about using datatable, post them on Stack Overflow using the [py-datatable] tag.
Originally published at https://parulpandey.com/2021/02/04/using-pythons-datatable-library-seamlessly-on-kaggle/
tensorflow::serving::Loader
This is an abstract class.
#include <loader.h>
A standardized abstraction for an object that manages the lifecycle of a servable, including loading and unloading it.
Summary
Servables are arbitrary objects that serve algorithms or data that often, though not necessarily, use a machine-learned model.
A Loader for a servable object represents one instance of a stream of servable versions, all sharing a common name (e.g. "my_servable") and increasing version numbers, typically representing updated model parameters learned from fresh training data.
A Loader should start in an unloaded state, meaning that no work has been done to prepare to perform operations. A typical instance that has not yet been loaded contains merely a pointer to a location from which its data can be loaded (e.g. a file-system path or network location). Construction and destruction of instances should be fairly cheap. Expensive initialization operations should be done in Load().
Subclasses may optionally store a pointer to the Source that originated it, for accessing state shared across multiple servable objects in a given servable stream.
Implementations need to ensure that the methods they expose are thread-safe, or carefully document and/or coordinate their thread-safety properties with their clients to ensure correctness. Servables do not need to worry about concurrent execution of Load()/Unload() as the caller will ensure that does not happen.
Inheritance: Direct Known Subclasses: tensorflow::serving::ResourceUnsafeLoader
Public functions
EstimateResources
virtual Status EstimateResources(ResourceAllocation *estimate) const = 0
Load
virtual Status Load()
Fetches any data that needs to be loaded before using the servable returned by servable().
May use no more resources than the estimate reported by EstimateResources().
If implementing Load(), you don't have to override LoadWithMetadata().
LoadWithMetadata
virtual Status LoadWithMetadata(const Metadata &metadata)
Similar to Load(), but takes Metadata as a parameter, which the loader implementation may use as appropriate.
If you're overriding LoadWithMetadata() because you can make use of the metadata, you can skip overriding Load().
Unload
virtual void Unload() = 0
servable
virtual AnyPtr servable() = 0
Returns an opaque interface to the underlying servable object.
The caller should know the precise type of the interface in order to make actual use of it. For example:
CustomLoader implementation:
class CustomLoader : public Loader {
 public:
  ...
  Status Load() override { servable_ = ...; }
  AnyPtr servable() override { return servable_; }

 private:
  CustomServable* servable_ = nullptr;
};
Serving user request:
ServableHandle<CustomServable> handle = ...
CustomServable* servable = handle.get();
servable->...
If servable() is called after a successful Load() and before Unload(), it returns a valid, non-null AnyPtr object. If called before a successful Load() call or after Unload(), it returns a null AnyPtr.
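Putting the documented lifecycle together, a caller (normally the serving core, not user code) might drive a Loader roughly as in this sketch; CheckFits() is a hypothetical resource-policy hook and the error handling is illustrative, not part of the Loader API:

// Sketch of how a manager could drive the documented Loader lifecycle.
Status ServeOnce(Loader* loader) {
  ResourceAllocation estimate;
  TF_RETURN_IF_ERROR(loader->EstimateResources(&estimate));  // before Load()
  if (!CheckFits(estimate)) {                                // hypothetical policy check
    return errors::ResourceExhausted("servable does not fit");
  }
  TF_RETURN_IF_ERROR(loader->Load());     // expensive initialization happens here
  AnyPtr servable = loader->servable();   // valid between Load() and Unload()
  // ... hand the servable out via ServableHandle to request threads ...
  loader->Unload();                       // caller guarantees no concurrent Load()/Unload()
  return Status::OK();
}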
Source: https://www.tensorflow.org/tfx/serving/api_docs/cc/class/tensorflow/serving/loader?hl=zh_tw
Caffe2 in an iOS App Deep Learning Tutorial
At this year's F8 conference, Facebook's annual developer event, Facebook announced Caffe2 in collaboration with Nvidia. This framework gives developers yet another tool for building deep learning networks for machine learning. But I am super pumped about this one, because it is specifically designed to operate on mobile devices! So I couldn't resist but start digging in immediately.
I’m still learning, but I want to share my journey in working with Caffe2. So, in this tutorial I’m going to show you step by step how to take advantage of Caffe2 to start embedding deep learning capabilities in to your iOS apps. Sound interesting? Thought so… let’s roll 🙂
Building Caffe2 for iOS
The first step here is to just get Caffe2 built. Mostly their instructions are adequate so I won’t repeat too much of it here. You can learn how to build Caffe2 for iOS here.
The last step of their iOS install process is to run build_ios.sh, but that's about where the instructions leave off. So from here, let's take a look at the build artifacts. The core library for Caffe2 on iOS is located inside the caffe2 folder:
- caffe2/libCaffe2_CPU.a
And in the root folder:
- libCAFFE2_NNPACK.a
- libCAFFE2_PTHREADPOOL.a
Create an Xcode project
Now that the library was built, I created a new iOS app project in Xcode with a single-view template. From here I drag and drop the libCaffe2_CPU.a file into my project hierarchy along with the other two libs, libCAFFE2_NNPACK.a and libCAFFE2_PTHREADPOOL.a. Select 'Copy' when prompted. The file is located at caffe2/build_ios/caffe2/libCaffe2_CPU.a. This pulls a copy of the library into my project and tells Xcode I want to link against it. We need to do the same thing with protobuf, which is located at caffe2/build_ios/third_party/protobuf/cmake/libprotobuf.a.
In my case I wanted to also include OpenCV2, which has its own requirements for setting up. You can learn how to install OpenCV2 on their site. The main problem I ran into with OpenCV2 was figuring out that I needed to create a Prefix.h file, and then, in the project settings, set the Prefix Header file to MyAppsName/Prefix.h. In my example project I called the project DayMaker, so for me it was DayMaker/Prefix.h. Then I could put the following in the Prefix.h file so that OpenCV2 would get included before any Apple headers:
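The original snippet was shown as an image; a typical prefix header for this setup looks something like the following (an assumption based on the standard OpenCV-on-iOS pattern, not necessarily the author's exact file):

// Prefix.h -- assumed contents; pull in OpenCV before any Apple headers.
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif

#ifdef __OBJC__
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#endif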
Include the Caffe2 headers
In order to actually use the library, we'll need to pull in the right headers. I'm assuming you have a directory structure where your caffe2 files are a level above your project. (I cloned caffe2 into ~/Code/caffe2 and set up my project in ~/Code/DayMaker.) You'll need to add the following User Header Search Path in your project settings:
You’ll also need to add the following to “Header Search Paths”
Now you can also try importing some Caffe2 C++ headers in order to confirm it's all working as expected. I created a new Objective-C class to wrap the Caffe2 C++ API. To follow along, create a new Objective-C class called Caffe2, then rename the Caffe2.m file it creates to Caffe2.mm. This causes the compiler to treat it as Objective-C++ instead of just Objective-C, a requirement for making this all work. Next, I added some Caffe2 headers to the .mm file. At this point this is my entire Caffe2.mm file:
According to this Github issue, a reasonable place to start with a C++ interface to the Caffe2 library is this standalone predictor_verifier.cc app. So let's expand the Caffe2.mm file to include some of this stuff and see if everything works on-device.
With a few tweaks we can make a class that loads up the Caffe2 environment and loads in a set of predict/net files. I'll pull in the files from Squeezenet on the Model Zoo. Copy these into the project hierarchy, and we'll load them up just like any iOS binary asset…
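The author's snippet here was an image, so it isn't reproduced. As a rough sketch of what that loading code typically looked like with the 2017-era Caffe2 API (the method name initCaffe, the bundled file names, and the _predictor member are assumptions, and the API has changed considerably since then):

// Assumed sketch of an initCaffe method in Caffe2.mm.
#include "caffe2/core/init.h"
#include "caffe2/core/predictor.h"
#include "caffe2/utils/proto_utils.h"

- (void)initCaffe {
    int argc = 0;
    char **argv = NULL;
    caffe2::GlobalInit(&argc, &argv);  // initialize the Caffe2 runtime once

    // The Squeezenet nets from the Model Zoo, bundled as ordinary iOS assets.
    NSString *initPath = [[NSBundle mainBundle] pathForResource:@"init_net" ofType:@"pb"];
    NSString *predictPath = [[NSBundle mainBundle] pathForResource:@"predict_net" ofType:@"pb"];

    caffe2::NetDef initNet, predictNet;
    CAFFE_ENFORCE(caffe2::ReadProtoFromFile([initPath UTF8String], &initNet));
    CAFFE_ENFORCE(caffe2::ReadProtoFromFile([predictPath UTF8String], &predictNet));

    // _predictor is an assumed instance variable of type caffe2::Predictor*.
    _predictor = new caffe2::Predictor(initNet, predictNet);
}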
Next, we can just instantiate this from the AppDelegate to test it out. (Note: you'll need to import Caffe2.h in your Bridging Header if you're using Swift, like me.)
In AppDelegate.swift:
This for me produced some linker errors from clang:
Adding -force_load DayMaker/libCaffe2_CPU.a as an additional linker flag corrected this issue, but then it complained about not being able to find OpenCV. The DayMaker part will be your project name, or just whatever folder your libCaffe2_CPU.a file is located in. This will show up as two flags; just make sure they're in the right order and it should perform the right concatenation of the flags.
Building and running the app crashes immediately with this output:
Success! I mean, it doesn't look like success just yet, but this is an error coming from Caffe2. The issue here is just that we never set anything for the input. So let's fix that by providing data from an image.
Here you can add a cat jpg to the project or some similar image to work with, and load it in:
I refactored this a bit and moved my logic out into a predictWithImage method, as well as creating the predictor in a separate function:
The predictWithImage method uses OpenCV to get the BGR data from the image, then I'm loading that into Caffe2 as the inputVector. Most of the work here is actually done in OpenCV with the cvtColor line…
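The actual method was shown as a screenshot; a hedged sketch of the kind of conversion it describes might look like the following (UIImageToMat comes from OpenCV's iOS headers, while the channel-planar layout and the helper name are assumptions):

// Assumed sketch: convert a UIImage into BGR floats for Caffe2's input blob.
#import <opencv2/imgcodecs/ios.h>

- (std::vector<float>)bgrPixelsFromImage:(UIImage *)image {
    cv::Mat mat;
    UIImageToMat(image, mat);             // RGBA cv::Mat from the UIImage
    cv::cvtColor(mat, mat, CV_RGBA2BGR);  // most of the work happens here

    std::vector<float> pixels;
    pixels.reserve(mat.rows * mat.cols * 3);
    for (int c = 0; c < 3; ++c) {         // channel-planar layout (assumption)
        for (int y = 0; y < mat.rows; ++y) {
            for (int x = 0; x < mat.cols; ++x) {
                pixels.push_back(mat.at<cv::Vec3b>(y, x)[c]);
            }
        }
    }
    return pixels;
}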
The imagenet_classes are defined in a new file, classes.h. It’s just a copy from the Android example repo here.
Most of this logic was pulled and modified from bwasti’s github repo for the Android example.
With these changes I was able to simplify the initCaffe method as well:
So you’ll notice I’m pulling in the cat.jpg here. I used this cat pic:
The output when running on iPhone 7:
Identified: tabby, tabby cat
Hooray! It works on a device!
I’m going to keep working on this and publishing what I learn. If that sounds like something you want to follow along with you can get new posts in your email, just join my mobile development newsletter. I’ll never spam you, just keep you up-to-date with deep learning and my own work on the topic.
Thanks for reading! Leave a comment or contact me if you have any feedback 🙂
Side-note: Compiling on Mac OS Sierra with CUDA
When compiling for Sierra as a target (not the iOS build script, but just running make), I ran into a problem in protobuf that is related to this issue. This will only be a problem if you are building against CUDA. I suppose it's somewhat unusual to do so, because most Mac computers do not have NVIDIA chips in them, but in my case I have a 2013 MBP with an NVIDIA chip that I can use CUDA with.
To resolve the problem in the most hacky way possible, I applied the changes found in that issue's pull request. Just updating protobuf to the latest version by building from source would probably also work… but this just seemed faster. I opened up my own version of this file at /usr/local/Cellar/protobuf/3.2.0_1/include/google/protobuf/stubs/atomicops.h and manually commented out lines 198 through 205:
I’m not sure what the implications of this are, but it seems to be what they did in the official repo, so it must not do much harm. With this change I’m able to make the Caffe2 project with CUDA support enabled. In the official version of protobuf used by tensorflow, you can see this bit is actually just removed, so it seems to be the right thing to do until protobuf v3.2.1 is released, where this is fixed using the same approach.
27 thoughts on “Caffe2 on iOS – Deep Learning Tutorial”
Awesome!
Yes, what is in the Caffe2.h file? I was also wondering about this.
The following definition should be good enough for the Caffe2 object:
#import <Foundation/Foundation.h>

#ifndef Caffe2_h
#define Caffe2_h

@interface Caffe2 : NSObject
@end

#endif /* Caffe2_h */
From OpenCV 2.4.6 on, this functionality is already included. Just include opencv2/highgui/ios.h. In OpenCV 3 this include has changed to opencv2/imgcodecs/ios.h.
Thanks for the tutorial. Is there a github repo for this demo?
Hi! So strange, but I constantly get an error while compiling the project:
/caffe2/caffe2/utils/math.h:20:10: 'Eigen/Core' file not found
On this page I mentioned that you must add the header search paths, just check that you’ve added that and that the files are present.
Thanks for the tutorial! My two cents on running it in the simulator: "IOS_PLATFORM=SIMULATOR ./scripts/build_ios.sh" can be used to build the libs for the simulator. Also, "-DUSE_NNPACK=OFF" needs to be passed in as an option for cmake, in which case libCAFFE2_NNPACK.a won't get built (things still work with three .a files).
Is this still a valid way to use a caffe2 model in iOS?
Source: https://jamesonquave.com/blog/caffe2-on-ios-deep-learning-tutorial/?replytocom=87614
Saving a Numpy array as an image
This uses PIL, but maybe some might find it useful:
import scipy.misc
scipy.misc.imsave('outfile.jpg', image_array)
EDIT: The current scipy version started to normalize all images so that min(data) becomes black and max(data) becomes white. This is unwanted if the data should be exact grey levels or exact RGB channels. The solution:
import scipy.misc
scipy.misc.toimage(image_array, cmin=0.0, cmax=...).save('outfile.jpg')
With matplotlib:
import matplotlib
matplotlib.image.imsave('name.png', array)
This works with matplotlib 1.3.1; I don't know about lower versions. From the docstring:
Arguments:
  *fname*: A string containing a path to a filename, or a Python file-like object. If *format* is *None* and *fname* is a string, the output format is deduced from the extension of the filename.
  *arr*: An MxN (luminance), MxNx3 (RGB) or MxNx4 (RGBA) array.
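Since the question already mentions PIL, here is a small sketch that skips scipy entirely; the random array is just example data, and the cast is only needed if your array is not already uint8:

import numpy as np
from PIL import Image

array = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # example data
Image.fromarray(array).save('outfile.png')  # PIL infers the format from the extension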
Source: https://codehunter.cc/a/python/saving-a-numpy-array-as-an-image
RoutablePageMixin¶
The RoutablePageMixin mixin provides a convenient way for a page to respond on multiple sub-URLs with different views. For example, a blog section on a site might provide several different types of index page at URLs like /blog/2013/06/, /blog/authors/bob/ and /blog/tagged/python/, all served by the same page instance.
A Page using RoutablePageMixin exists within the page tree like any other page, but URL paths underneath it are checked against a list of patterns. If none of the patterns match, control is passed to subpages as usual (or, failing that, a 404 error is thrown).
By default a route for r'^$' exists, which serves the content exactly like a normal Page would. It can be overridden by using @route(r'^$') on any other method of the inheriting class.
Installation¶
Add "wagtail.contrib.routable_page" to your INSTALLED_APPS:

INSTALLED_APPS = [
    ...
    "wagtail.contrib.routable_page",
]
The basics¶
To use RoutablePageMixin, you need to make your class inherit from both wagtail.contrib.routable_page.models.RoutablePageMixin and wagtail.core.models.Page, then define some view methods and decorate them with wagtail.contrib.routable_page.models.route. These view methods behave like ordinary Django view functions, and must return an HttpResponse object; typically this is done through a call to django.shortcuts.render.
Here's an example of an EventIndexPage with three views, assuming that an EventPage model with an event_date field has been defined elsewhere:

import datetime
from django.http import JsonResponse
from wagtail.core.fields import RichTextField
from wagtail.core.models import Page
from wagtail.contrib.routable_page.models import RoutablePageMixin, route

class EventIndexPage(RoutablePageMixin, Page):
    # Routable pages can have fields like any other - here we would
    # render the intro text on a template with {{ page.intro|richtext }}
    intro = RichTextField()

    @route(r'^$')  # will override the default Page serving mechanism
    def current_events(self, request):
        """
        View function for the current events page
        """
        events = EventPage.objects.live().filter(event_date__gte=datetime.date.today())
        return self.render(request, context_overrides={
            'title': "Current events",
            'events': events,
        })

    ...
Rendering other pages¶
Another way of returning an HttpResponse is to call the serve method of another page. (Calling a page's own serve method within a view method is not valid, as the view method is already being called within serve, and this would create a circular definition.)
For example, EventIndexPage could be extended with a next/ route that displays the page for the next event:

@route(r'^next/$')
def next_event(self, request):
    """
    Display the page for the next event
    """
    future_events = EventPage.objects.live().filter(event_date__gt=datetime.date.today())
    next_event = future_events.order_by('event_date').first()
    return next_event.serve(request)
Reversing URLs¶
RoutablePageMixin adds a reverse_subpage() method to your page model which you can use for reversing URLs. For example:

# The URL name defaults to the view method name.
>>> event_page.reverse_subpage('events_for_year', args=(2015, ))
'year/2015/'
This method only returns the part of the URL within the page. To get the full URL, you must append it to the value of either the url or the full_url attribute on your page:

>>> event_page.url + event_page.reverse_subpage('events_for_year', args=(2015, ))
'/events/year/2015/'
>>> event_page.full_url + event_page.reverse_subpage('events_for_year', args=(2015, ))
'http://example.com/events/year/2015/'
Changing route names¶
The route name defaults to the name of the view. You can override this name with the name keyword argument on @route:

from wagtail.core.models import Page
from wagtail.contrib.routable_page.models import RoutablePageMixin, route

class EventPage(RoutablePageMixin, Page):
    ...

    @route(r'^year/(\d+)/$', name='year')
    def events_for_year(self, request, year):
        """
        View function for the events for year page
        """
        ...

>>> event_page.url + event_page.reverse_subpage('year', args=(2015, ))
'/events/year/2015/'
The RoutablePageMixin class¶
- class wagtail.contrib.routable_page.models.RoutablePageMixin¶
This class can be mixed in to a Page model, allowing extra routes to be added to it. Its render() method accepts context_overrides and an optional template argument to specify an alternative template to use for rendering. For example:

@route(r'^past/$')
def past_events(self, request):
    return self.render(
        request,
        context_overrides={
            'title': "Past events",
            'events': EventPage.objects.live().past(),
        },
        template="events/event_index_historical.html",
    )
resolve_subpage(path)¶
This method takes a URL path and finds the view to call.
Example:
view, args, kwargs = page.resolve_subpage('/past/')
response = view(request, *args, **kwargs)
The routablepageurl template tag¶
routablepageurl is similar to pageurl, but works with pages using RoutablePageMixin. It behaves like a hybrid between the built-in reverse and pageurl from Wagtail.
page is the RoutablePage that URLs will be generated from.
url_name is a URL name defined in page.subpage_urls.
Positional arguments and keyword arguments should be passed as normal positional arguments and keyword arguments.
Example:
{% load wagtailroutablepage_tags %}

{% routablepageurl page "feed" %}
{% routablepageurl page "archive" 2014 08 14 %}
{% routablepageurl page "food" foo="bar" baz="quux" %}
Source: https://docs.wagtail.org/en/v2.14.1/reference/contrib/routablepage.html
import matplotlib.pyplot as plt
import pandas as pd

plt.figure()
loansmin = pd.read_csv('../datasets/loanf.csv')
fico = loansmin['FICO.Score']
p = fico.hist()
from IPython.core.display import HTML

def css_styling():
    styles = open("../styles/custom.css", "r").read()
    return HTML(styles)

css_styling()
Source: https://nbviewer.ipython.org/github/nborwankar/LearnDataScience/blob/master/notebooks/WA2.%20Linear%20Regression%20-%20Data%20Exploration%20-%20Lending%20Club%20Worksheet.ipynb
The JavaBeans Activation Framework is implemented as a standard extension. Sun provides a royalty-free reference implementation of the JAF software, in binary form, that developers can use to develop JAF technology-enabled applications for any platform that supports version 1.4 or later of the Java 2 Standard Edition. Sun's reference implementation of the JAF standard extension is available for download below.
The JavaBeans Activation Framework 1.1.1 contains a bug fix for the Turkish locale and a minor enhancement to better support byte arrays and Strings in the DataHandler.writeTo method.
The JavaBeans Activation Framework 1.1.1 requires J2SE 1.4 or greater.
The JavaBeans Activation Framework 1.1.1 final release is included with the Java SE 6 release and is also available separately.
For a detailed description of the fixes see the jaf-changes.txt document.
Specification
- Available for online viewing (in postscript or PDF).
- You can also view the javadoc generated API description online.
System Requirements
Any Java technology-compatible platform running version 1.4 or later of the Java 2 Standard Edition
Installation
There is effectively no installation of the JAF. The classes that make up the JAF standard extension are contained in the included Java Archive (JAR) file, "activation.jar". This file can be placed anywhere in your class path.
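With activation.jar on the class path, a minimal sanity check might look like the sketch below; the file name sample.txt is just a placeholder, and the DataHandler and FileDataSource classes used here ship with the JAF:

import javax.activation.DataHandler;
import javax.activation.FileDataSource;

public class JafCheck {
    public static void main(String[] args) throws Exception {
        // Wrap a local file and let the JAF work out its MIME type.
        FileDataSource source = new FileDataSource("sample.txt");
        DataHandler handler = new DataHandler(source);

        System.out.println("Content type: " + handler.getContentType());
        handler.writeTo(System.out);  // stream the content, as with the writeTo method mentioned above
    }
}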
What's in the JAF release?
- Release Notes: The release notes for this release including, installation instructions and system requirements.
- activation.jar: This JAR (Java Archive) file contains the classes that make up the JavaBeans Activation Framework.
- Demos: A directory containing some simple unsupported demos that make use of some of the JAF's features.
- Documentation: A directory containing the Javadoc API descriptions for the public classes in the JAF.
Download JavaBeans Activation Framework 1.1.1 release
Send comments, feedback and bugs to : activation-comments@sun.com
Source: https://www.oracle.com/technetwork/java/javase/downloads/index-135046.html
You are starting to learn Angular, but it's hard to know what's Angular and what's Typescript.
This guide gives an understanding of how Typescript modules work and how they are used in Angular. What are these import and export statements I see generated by the Angular CLI? How do I create my own modules?
To answer these questions and demonstrate the concepts we need to learn, we will use the scenario of working on shopping cart functionality within an app.
The goal of modular code is that each individual module provides a piece of functionality, exposed through a well-defined interface. The internal details of how a module works are isolated, making it easier to test and refactor.
Modules can make use of other modules by importing them. In turn, a module exports only what it wants other code to be able to use.
In Typescript, a module is simply a file that imports or exports something.
In order for the app to run, the dependencies between modules are resolved via a module loader: the browser cannot load the code for Module A before the Module B it depends on has been loaded.
Typescript does not provide a runtime; it's just a transpiler to javascript. The Angular CLI takes care of this module-loading aspect for you: when it builds your Angular app, it uses WebPack as the loader technology.
Angular also has the concept of modules, which you will see in the code as @NgModule() definitions. These modules are independent of Typescript modules. Angular modules follow the same software concept of modularization, but at a different level: they aim to separate the app into functional areas, complete with UI and service definitions.
Creating a module is as simple as creating a Typescript file that has an import or export statement.
A module can export one or more declarations: a class, function, interface, enum, constant, or type alias.
For this guide's scenario, we'll look at the ProductsService and related types we need for an e-commerce application.

// app/shopping-cart/products.service.ts
export class ProductsService {
  // .. Service code here
}

export interface Product {
  // Interface declarations
}

// Private function to this module, not on the global namespace
function logDebug(message: string) {
  console.log(message);
}
From our products.service.ts Typescript module above, only the ProductsService class and Product interface are exported. They are the only types that are available to any importers of this module.
The logDebug function is private, for use only within this module.
With non-modular javascript, the logDebug function would have been placed on the global namespace. This can lead to unexpected consequences if some other loaded javascript overrides or otherwise changes this function.
Recall that all Typescript modules are isolated and so operate on their own scope, not the global scope. The logDebug function is only available within this module. Another module is safe to declare its own function called logDebug and it will in no way conflict with this one.
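As a quick illustration, a hypothetical second module (not part of the guide's code) can define its own logDebug without any clash:

// app/checkout/checkout.service.ts (hypothetical)
// This logDebug is scoped to this module and never collides with the one
// declared in products.service.ts.
function logDebug(message: string) {
  console.log(`[checkout] ${message}`);
}

export class CheckoutService {
  placeOrder() {
    logDebug('placing order');
  }
}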
There are a few ways declarations can be exported from a module.
The typical way it's done in Angular is as we have already seen, immediately exporting when the declaration is made:
// Export at time of declaration
export class ProductsService {
  // .. Service code here
}

export interface Product {
  // Interface declarations
}
Alternatively, you can also export one or more declarations in a single export statement:

class ProductsService {
  // .. Service code here
}

interface Product {
  // Interface declarations
}

// Export as a single statement
export { ProductsService, Product }
This option keeps all the exports in place, which has the advantage of making it clear to see the module's exported public interface.
To make use of our module, we need to import it.
Let's assume that we have a cart.component.ts that needs to make use of the ProductsService. It can import it like this:

// app/shopping-cart/cart.component.ts
import { ProductsService } from './products.service';
This is importing the ProductsService class from our module products.service.ts.
We are able to import the ProductsService from our module because it has been exported. If we try to import the logDebug function, we will get an error at compile time:

// cart.component.ts
// ERROR: logDebug is not exported, so cannot be imported
import { logDebug } from './products.service';
Typescript has a concept of module resolution which it uses at compile-time to find the intended module to import.
In the previous examples, the reference to the module in the import statement is a relative path, so we are expecting products.service.ts to be a sibling of the cart.component.ts file.
Notice that the .ts extension is not needed! Our Typescript is actually going to be transpiled into javascript, and so the final module has a .js extension.
We could be importing some other file extension too: a .tsx or a .d.ts from an NPM package.
This is module resolution at work. For our Angular apps, it's not something we need to be concerned about, but it is worth knowing that this is why there is no file extension in the import statement.
There will be occasions that two modules that we want to use are going to export a type with the same name.
When we try to import Product from another module, say an e-commerce CMS, we will get an error:

import { ProductsService, Product } from './products.service';
// ERROR: Duplicate identifier 'Product'
import { Product } from 'ecommerceCMS/products';
To solve this, we can alias the type so that we avoid the naming clash:
import { ProductsService, Product } from './products.service';
// Now available as CMSProduct
import { Product as CMSProduct } from 'ecommerceCMS/products';
Alternatively, you can import all types from a module into a variable:

import * as products from './products.service';
import { Product } from 'ecommerceCMS/products';
This places all of the types from our module into a products variable. You can reference the types as products.ProductsService and products.Product, avoiding a naming clash with the e-commerce CMS Product.
As our Shopping Cart functionality grows, we're going to end up with more and more modules. For any importers, this can mean there will be a lot of import statements and it can be difficult to maintain.
We ideally want to roll up all of our smaller modules into a single one; let's call it shoppingCart.
This is done through barrels. Barrels are modules which pull lots of individual modules together, reexporting their declarations, so creating a single cohesive module.
To define a barrel for our Shopping Cart, we create a file named index.ts in the shopping cart root folder that simply re-exports our other modules:

// app/shoppingCart/index.ts
export { ProductService, Product } from './shoppingCart/products.service';
export { CartComponent } from './shoppingCart/cart.component';
A barrel is just like any other module, so we can import it in the same way. However, we defined our barrel in a file named index.ts.
Like a web server serving up a default page, the Typescript module resolution process has the same concept with index.ts.
When resolving an import statement, it will check the referenced path. If it is a directory and there is an index.ts file, then the import statement will resolve:

// Simple import from the barrel
import { ProductService, Product, CartComponent } from './shoppingCart';

// Exactly the same, but references the index module directly
import { ProductService, Product, CartComponent } from './shoppingCart/index';
We now have a single module, shoppingCart, which is intended to be used within the app.
Using a barrel helps when we start to do any refactoring, which can otherwise lead to a ripple-effect change across the app.
We decide that our ProductService should be in a "services" sub-folder. This means that everywhere the service is imported, the path would need to be updated.
// app/shoppingCart/index.ts
// New path to products.service.ts
export { ProductService, Product } from './shoppingCart/services/products.service';
export { CartComponent } from './shoppingCart/cart.component';
Even if we decide to rename the ProductService to be ProductApiService, we can still export a ProductService declaration from our barrel to enable backward compatibility:

// app/shoppingCart/index.ts
// ProductService alias to aid backwards compatibility
export { ProductApiService, ProductApiService as ProductService, Product } from './shoppingCart/services/products.service';
export { CartComponent } from './shoppingCart/cart.component';
You now have an understanding of what Typescript modules are and how they are used when creating Angular Apps.
You know how to take control of the import and export statements to avoid name clashes or to simplify refactoring.
All this knowledge is brought together in the concept of barrels, creating a unified module from other smaller modules. Barrels are a good idea to prevent a ripple-effect change through your app as you refactor code.
Source: https://www.pluralsight.com/guides/typescript-angular-understanding-modules?utm_campaign=Angular%20Weekly&utm_medium=email&utm_source=Revue%20newsletter
Iterators and the yield statement
When returning a full set of items is expensive in performance or memory, getting individual items as they are requested may be a better design. Using the yield statement means that this is really easy to achieve.
Take the following code for example:
using System;
using System.Collections.Generic;

namespace ConsoleApplication1
{
    internal class Program
    {
        private static void Main(String[] args)
        {
            Console.WriteLine("Starting the iterations");

            foreach (Entity item in GetItems(10))
            {
                Console.WriteLine("Got item {0} - {1} - {2}", item.Id, item.FirstValue, item.SecondValue);
            }

            Console.ReadKey();
        }

        private static IEnumerable<Entity> GetItems(Int32 maxItems)
        {
            for (Int32 index = 0; index < maxItems; index++)
            {
                yield return GetItemFromStore(index);
            }
        }

        private static Entity GetItemFromStore(Int32 id)
        {
            Console.WriteLine("Getting item {0} from the store", id);

            // Simulate getting the item from a data store or service
            Entity item = new Entity();
            item.Id = id;
            item.FirstValue = "First" + id;
            item.SecondValue = "Second" + id;

            return item;
        }
    }

    internal class Entity
    {
        public String FirstValue { get; set; }
        public Int32 Id { get; set; }
        public String SecondValue { get; set; }
    }
}
This code iterates over a set of 10 items and outputs the details of each item encountered. The interesting bit is that each item is only requested and created for each iteration rather than calculating the entire set up front.
The output of this code is:
Starting the iterations
Getting item 0 from the store
Got item 0 - First0 - Second0
Getting item 1 from the store
Got item 1 - First1 - Second1
Getting item 2 from the store
Got item 2 - First2 - Second2
Getting item 3 from the store
Got item 3 - First3 - Second3
Getting item 4 from the store
Got item 4 - First4 - Second4
Getting item 5 from the store
Got item 5 - First5 - Second5
Getting item 6 from the store
Got item 6 - First6 - Second6
Getting item 7 from the store
Got item 7 - First7 - Second7
Getting item 8 from the store
Got item 8 - First8 - Second8
Getting item 9 from the store
Got item 9 - First9 - Second9
Very powerful, very easy.
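Because the items are produced lazily, the consumer controls how many store lookups actually happen. A small follow-up sketch (not from the original post) using LINQ's Take shows that only the requested items are ever fetched:

// Inside Main(), after adding `using System.Linq;` at the top of the file.
// Only items 0, 1 and 2 are ever requested from the store, because the
// iterator in GetItems() runs one step at a time as the foreach pulls items.
foreach (Entity item in GetItems(10).Take(3))
{
    Console.WriteLine("Got item {0}", item.Id);
}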
Source: https://www.neovolve.com/2008/08/12/iterators-and-the-yield-statement/
Unidata Developer's Blog Unidata Developer's Blog 2020-01-06T17:39:34-07:00 Apache Roller Contributor License Agreement for Unidata Projects Ryan May 2017-09-28T15:18:03-06:00 2017-09-28T15:18:03-06:00 Unidata Contributor License Agreement (CLA).</p> <a href="">Unidata Contributor License Agreement (CLA)</a>. This agreement is based on a template from the <a href="">Harmony Agreements</a> project, whose goal is to standardize CLA's within the Open Source community.</p> <h2>What is a Contributor License Agreement?</h2> <p:</p> <ul> <li>You retain the ownership of the copyright to your contribution</li> <li>You grant Unidata the right to use your contribution in perpetuity</li> </ul> <p.</p> <p.</p> <h2>How it Works</h2> <p>When contributing using a Pull Request on GitHub, the following message will present itself, using <a href="">CLA Assistant</a>, as a comment on the pull request:</p> <p><img src="/blog_content/images/2017/20170927_cla1.png" alt="Unsigned CLA" /></p> <p>Contributors can click on the yellow "CLA not signed yet" badge, which will take them to a copy of the CLA. Contributors are asked to provide a little bit of information about themselves (for legal purposes):</p> <p><img src="/blog_content/images/2017/20170927_cla2.png" alt="CLA page" /></p> <p>Once the "I Agree" button is clicked, the browser will return to the original pull request page, but now the comment has been updated:</p> <p><img src="/blog_content/images/2017/20170927_cla3.png" alt="Signed CLA" /></p> <p>Contributors will only be asked to electronically sign once (unless the CLA is updated), and the agreement applies to all GitHub repositories hosted under the Unidata organization.</p> <p>For more information about CLAs, see these resources:</p> <ul> <li><a href="">Harmony Agreements</a></li> <li>"Producing OSS" has a section on <a href="">Contributor Agreements</a></li> <li>A more <a href="">detailed blog post</a> by Julien Ponge about CLAs</li> </ul> <h2>About the Contributor License Agreement</h2> <p>Unidata's CLA comes from <a href="">Project Harmony</a>, which is a community-centered group focused on contributor agreements for free and open source software.</p> <p>The document you are reading now is not a legal analysis of the CLA. If you want one of those, please talk to your lawyer. This is a description of the purpose of the CLA.</p> <h3>Why is a signed CLA required?</h3> <p>The license agreement is a legal document in which you state you are entitled to contribute the code/documentation.</p> <p.</p> <p.</p> <h3>Am I giving away the copyright to my contributions?</h3> <p.</p> <h3>Can I withdraw permission to use my contributions at a later date?</h3> <p>No. This is one of the reasons we require a CLA. No individual contributor can hold such a threat over the entire community of users. Once you make a contribution, you are saying we can use that piece of code forever.</p> <h3>Can I submit patches without having signed the CLA?</h3> <p>No. We will be asking all new contributors and patch submitters to sign before they submit anything.</p> <p class="quote"> This CLA explanation is based on <a href="">Django Contributor License Agreement Frequently Asked Questions</a> (copyright Django Software Foundation. <a href="">CC-BY</a>) The content has been modified slightly to reflect situations specific to Unidata. </p> DAP4 Commentary: DDX Lexical Elements Dennis Heimbigner 2012-03-27T18:00:50-06:00 2012-03-28T11:06:15-06:00 This document describes the lexical elements that occur in the DAP4 grammar. 
<p> Within the <a href=""> Relax-NG (rng) DAP4 grammar</a>, there are markers for occurrences of primitive type such as integers, floats, or strings. The markers typically look like this when defining an attribute that can occur in the DAP4 DDX. </p><pre><attribute name="namespace"><data type="string"/></attribute></pre>. <ol> <li> Constants, namely: string, float, integer, and character. </li><li> Identifiers </li></ol> <p> The specification is written using the <a href=""> ISO/IEC 9945-2:2003 Information technology -- Portable Operating System Interface (POSIX) -- Part 2: System Interfaces</a>. This is the extended Posix regular expression specification. </p><p> I have augmented it in the following ways. </p><ol> <li>Names are assigned to regular expressions using the notation <pre>name = regular-expression</pre> <p> </p></li><li>Named expressions can be used in subsequent regular expressions by using the notation {name}. Such occurrences are equivalent to textually substituting the expression associated with name for the {name} occurrence: More or less like a macro. </li></ol> <h4>DAP4 Lexical elements</h4> Notes: <ol> <li>The definition of {UTF8} is deferred to the next section. <p> </p></li><li>Comments are indicated using the "//" notation. <p> </p></li><li>Standard xml escape formats (&xDD) are assumed to be allowed anywhere. </li></ol> <p> <b>Basic character set definitions</b> </p><pre>CONTROLS = [\x00-\x1F] // ASCII control characters<br />WHITESPACE = [ \r\t\f]+<br />HEXCHAR = [0-9a-zA-Z]<br />// ASCII printable characters<br />ASCII = [0-9a-zA-Z !"#$%&'()*+,-./:;<=>?@[\\\]\\^_`|{}~]<br /></pre> <p> <b>Ascii characters that may appear unescaped in Identifiers</b><br /> This is assumed to be basically all ASCII printable characters except the characters ' ', '.', '/', '"', ''', and '&'. Occurrences of these characters are assumed to be representable using the standard xml '&xx;' notation. </p><pre>IDASCII = [0-9a-zA-Z!#$%'()*+,-:;<=>?@[\\\]\\^_`|{}~]<br /></pre> <p> <b>The numeric classes: integer and float</b><br /> </p><pre>INTEGER = {INT}|{UINT}|{HEXINT}<br />INT = [+-][0-9]+{INTTYPE}?<br />UINT = [0-9]+{INTTYPE}?<br />HEXINT = {HEXSTRING}{INTTYPE}?<br />INTTYPE = ([BbSsLl]|"ll"|"LL")<br />HEXSTRING = (0[xX]{HEXCHAR}+)<br /><pre></pre><br />FLOAT = ({MANTISSA}{EXPONENT}?)|{NANINF}<br />EXPONENT = ([eE][+-]?[0-9]+)<br />MANTISSA = [+-]?[0-9]*\.[0-9]*<br />NANINF = (-?inf|nan|NaN)<br /></pre> <p><b>The Character classes</b><br /> </p><pre>STRING = ([^"\&]|{XMLESCAPE})*<br />CHARACTER = ([^'\&]|{XMLESCAPE})<br /></pre> <p> Note that the character type only supports ASCII characters because it can only hold a single 8-bit byte. </p><p> <b>The Identifier class</b><br /> </p><pre>ID = {IDCHAR}+<br />IDCHAR = ({IDASCII}|{XMLESCAPE}|{UTF8})<br />XMLESCAPE = &x{HEXCHAR}{HEXCHAR};<br /></pre> <p> Note that the above lexical element classes are not disjoint. For example, the sequence of characters 1234 can be either an identifer,a float, or an integer. So the order of testing is assumed to be this. </p><ol> <li>INTEGER </li><li>FLOAT </li><li>ID </li><li>STRING </li></ol> <h4>UTF-8 Character Encodings</h4> We discuss UTF-8 character encoding in the context of this document. <a href=""></a>. <p> The most correct (validating) version of UTF8 character set is as follows. 
</p><pre>UTF8 = ([\xC2-\xDF][\x80-\xBF]) <br /> | (\xE0[\xA0-\xBF][\x80-\xBF]) <br /> | ([\xE1-\xEC][\x80-\xBF][\x80-\xBF]) <br /> | (\xED[\x80-\x9F][\x80-\xBF]) <br /> | ([\xEE-\xEF][\x80-\xBF][\x80-\xBF]) <br /> | (\xF0[\x90-\xBF][\x80-\xBF][\x80-\xBF]) <br /> | ([\xF1-\xF3][\x80-\xBF][\x80-\xBF][\x80-\xBF]) <br /> | (\xF4[\x80-\x8F][\x80-\xBF][\x80-\xBF])<br /></pre> The lines of the expression cover the UTF8 characters as follows: <ol> <li> non-overlong 2-byte </li><li> excluding overlongs </li><li> straight 3-byte </li><li> excluding surrogates </li><li> straight 3-byte </li><li> planes 1-3 </li><li> planes 4-15 </li><li> plane 16 </li></ol> <p> Note that ASCII and control characters are not included. </p><p> The above reference also defines some alternative regular expressions. </p><p> The most relaxed version of UTF8 is this. </p><pre>UTF8 = ([\xC0-\xD6].)<br /> |([\xE0-\xEF]..)<br /> |([\xF0-\xF7]...)<br /></pre> <p> The partially relaxed version of UTF8 is this. </p><pre>UTF8 = ([\xC0-\xD6][\x80-\xBF]) <br /> | ([\xE0-\xEF][\x80-\xBF][\x80-\xBF]) <br /> | ([\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF])<br /></pre> <p> We deem it acceptable to use this last relaxed expression for validating UTF-8 character strings. </p> DAP4 Commentary: DAP4 Grammar Dennis Heimbigner 2012-03-27T16:50:24-06:00 2012-03-27T16:50:24-06:00 [Version: 1.0] <p>' <a href='%20'> implied grammar</a> and from comments from others and from a comparison with the <a href=''> xsd grammar.</a> </p><h4>Differences with DAP4 xsd Grammar</h4> I converted the <a href=''> xsd grammar</a> to an equivalent <a href=''> relax-ng (rng) grammar.</a> <p> One major difference I see is in dimension handling. </p><ul> <li> I just used the name "dimension" rather than "shareddimension". For me, all dimensions (except anonymous ones) are shared. <p> </p></li><li> The xsd separates out scalars from arrays. I always allowed the dimensions for a variable to be optional to handle the scalar case. <p> </p></li><li> I attempted to be as consistent as possible, so I allowed any type including sequences and structures to be dimensioned. (but see <a href=''>previous commentary</a>). <p> </p></li><li> The dimensions of a variable are currently specified in the rng grammar as a sequence of elements named "Dimension" contained in the "variables" element type. </li></ul> Other differences: <ul> <li> The Dataset element in the xsd has a couple of extra attributes. I added these. <p> </p></li><li> The xsd appears to allow attributes to themselves have attributes. This needs discussion. <p> </p><p> </p></li><li> The URL basetype is in the xsd. But I do not see the justification for keeping it. <p> </p></li><li> It appears that the Dataset contains a top level <group> declaration. I chose to treat the Dataset itself as the top-level group. <p> </p></group></li><li> Attribute declarations appear to have their own "namespace" attribute. Not sure why this is needed. <p> </p></li><li> I do not understand the purpose of the "NewAttribute" attribute. <p> </p></li><li> There may still be some minor differences in representing coordinate variables. <p> </p></li><li> The xsd represents attribute values thus: <pre><attribute name="a"><value>...</value><value>...</value></attribute></pre> I chose to use attributes in the multi-valued case because I prefer not to use elements with content unless really necessary. So I represented the above as this. 
<pre><attribute name="a"><value value="..."/><value value="..."/></attribute></pre> <p> </p><p> </p></li><li> There is an issue of interleaving of definitions, or equivalently, what elements must occur in a fixed order. <p> </p></li><li> Where should attributes be legal? I think the rng grammar and the xsd grammar agree on this: putting them almost everywhere, but it needs discussion. </li> <p> </p><li> I dropped Blobtype. I fail to see the need for this. </li></ul> <p> </p><h4> Testing the Relax-NG Grammar </h4> You will need to copy three files: <ol> <li> dap4.rng - this is the grammar file. it uses the <a href=''>Relax-NG schema language</a> This grammar file can be obtained from <a href=''></a>. <p> </p></li><li> test.xml - this is a test file that I am growing to cover the whole grammar. This can be obtained from <a href=''></a>. <p> </p></li><li> jing.jar - Jing is a validator that takes the grammar and a test file and checks that the test file conforms to the grammar. This can be obtained from <a href=''></a>. </li></ol> To use this jar file, do the command: <p> </p><pre>java -jar jing.jar dap4.rng test.xml</pre>No output is produced if the validation succeeds, otherwise, error messages are produced. DAP4 Commentary: Possible Notation for Server Commands Dennis Heimbigner 2012-03-27T16:06:28-06:00 2012-03-27T16:06:28-06:00 Looking to the future, it is clear that eventually our query language, or more generically our previous discussion of <a href=''> URL Annotations</a> must encompass three classes of computations. <ol> <li> Queries in the DAP2 sense, </li><li> Commands to control the client-side processing of requests on the server (i.e. thing like caching), </li><li> Server-side processing. </li></ol> I want to propose a notation for everything in the URL after the "?". I think this notation has ability to represent a wide variety of features without, I hope, being too generic. <p> The notation is basically nested functions combined with single assignment variables. A semantically nonsensical, but grammatical example would look something like this: "?_x=f(17,g(h(12))),f2(_x,[0:3:10])". </p><p>. </p><p> There would be several semantic rules. </p><ol> <li> A variable may only be assigned to once (single assignment), but may be referenced as many times as desired after that. <p> </p></li><li>. <p> </p></li><li> Any expression that is not assigned to a variable and does not have a void return type will have its return value returned to the caller as part of a DATADDX. </li></ol> <h4>Notes</h4> My hypothesis is that this notation should also be able to handle most kinds of server side processing by defining and composing functions. <p>. </p><p>. </p><p> I also hypothesize that Ferret notations </p><pre>{}{let deq1ubar=u[d=1,l=1:24@ave]}<br /></pre> could be represented in my proposed function notation without having to clutter up the URL format. DAP4 Commentary: Sequences and Vlens Dennis Heimbigner 2012-03-27T15:52:23-06:00 2012-03-27T15:52:24-06:00 <blockquote> "Oh what a tangled web we weave when first we practice to build a type system." </blockquote> Recently, I made the claim to James that the Sequence construct could serve to represent CDM/HDF5 vlen constructs. <p> His reply was as follows: </p><blockquote>. </blockquote> <p> This is a very important observation, and has caused me to re-think how sequences and vlens should be handled in DAP4. </p><h4>Vlens</h4> Let us start by addressing the addition of vlens to the DAP4 data model. 
<p> Some possible ways to insert vlens into the dap4 data model include the following. </p><ol> <li>]. <p> </p></li><li> James has proposed something similar except that the "*" is restricted to occurring as the last dimension. <p> </p></li><li> Another possibility is to create a new container object, call it <i>Vlen</i>, that (like <i>Sequence</i>) is inherently of variable length and is not dimensionable. [Aside: The term "vlen" is kind of odd. the term "list" would actually make more sense] </li></ol> <p>. </p><p> Consider the following CDM example (using a pseudo-syntax) </p><pre> Int32 v[d1][*][d2][*][d3].<br /></pre> This would have to be represented in DAP4 something like this. <pre> Structure v_1 {<br /> Structure v_2 {<br /> Int32 v[d3];<br /> }[d2][*];<br /> }[d1][*];<br /></pre> <p> In the lucky case that the last dimension is a already a "*": </p><pre> Int32 v[d1][*][d2][*];<br /></pre> then we have a simpler representation. <pre> Structure v_1 {<br /> Int32 v[d2][*];<br /> }[d1][*];<br /></pre> <h4>Commentary</h4> <ul> <li> As a personal matter, I would prefer to use the CDM representation. Adding a new container type (Vlen),while appealing semantically, only complicates the model more. James' approach requires the use of additional Structure definitions which, in my opinion, obscures the underlying semantics. <p> </p></li><li> One thing that I need to check is how this affects the proposed on the wire format. </li></ul> <h4>Sequences</h4> Nathan Potter noted the following. <blockquote>] </blockquote> And later Nathan Potter noted the following: <blockquote>)] </blockquote> . </p><p> So, we seem to have two very similar concepts (vlen and sequence), which complicates the DAP4 model. The question for me is: </p><blockquote> Do we get rid of the Sequence concept, or at least define it as equivalent to the following?: Structure {...} [*]. </blockquote> My current belief is that we should keep sequences but with the following restrictions: <ol> <li> Sequences can only occur as top-level, scalar, variables within a group. </li><li> Sequences may not be nested in any other container (i.e. other sequences or structure) </li></ol> This keeps sequences for the original purpose of acting as "relations". All other places where we might use a sequence before will now use a vlen. <p> As with vlens, the translation between DAP4 and CDM needs to be addressed. </p><ul> <li> The conversion from DAP4 to CDM can be addressed using the rule above, namely that a sequence is, in CDM, represented as <pre>Structure {...} [*]</pre>. <p> </p></li><li>). </li></ul> DAP4 Commentary: Characterization of URL Annotations Dennis Heimbigner 2012-03-27T15:33:58-06:00 2012-03-27T15:54:42-06:00 <h4>Characterization of URL Annotations</h4> Requests for data using the DAP4 protocol will require a significant number of annotations specifying what is to be retrieved, commands to the server, and commands to the client. <p> This document is intended to just describe the information with which URLs need to annotated based on past experience. It also enumerates the possible URL components that can be used to encode the annotations. I will consider a specific encoding in a separate document. </p><p> Looking at the DAP2 URLs, we see three classes of annotations: protocol, server commands, client commands, and queries (aka constraints). </p><h4>Protocol</h4> For <pre>...<br /></pre> the "dodsC" indicates the use of the DAP2 protocol. TDS also supports a schema called "dods:" that also indicates the use of DAP2. 
<h4>Server Commands</h4> Server commands in DAP2 are appended to the dataset URL to indicate attributes of the request. For example: <pre><br /></pre> The defined kinds of server commands for DAP2 are as follows: <ul> <li>Component requests: ".dds", ".das" </li><li>Data requests with format: ".dods", ".asc" </li><li>Miscellaneous: ".html", ".ver" </li></ul> <h4>Client commands</h4> Client. <p> Currently, client commands are represented as "name=value" pairs or just "name" enclosed in square braces: "[nocache]", for example. These commands are prefixed to the URL such as this. </p><pre>[show=fetch]<br /></pre> The legal set of client commands is client library specific. <p> One notable problem with this form of client command is that it prevents generic URL parsers from parsing the URL because, of course, the square bracket notation is non-standard. </p><p>. </p><h4>Queries</h4> The third class of URL annotations specifies some form of query to control the information to be extracted from a dataset on the server. This information is then passed back to the client. <p> In DAP2, queries consisted of projections and selections specifying a subset of the data in a dataset. </p><p> A projection represents a path through the DDS parse tree annotated with constraints on the dimensions. For example, this query: "?P1.P2[0:2:10].F[1:3][4:4]". </p><p> A selection represents a boolean expression to be applied to the records of a sequence. Syntactically, a selection could cross sequences, thus implying a join of the sequences, but in practice this diss not allowed. </p><p> DAP2 queries also allowed the use of functions in the projections and selections to compute, for example, sums or averages. But the semantics was never very well defined. The set of allowable functions is server dependent. </p><h4>Annotation Mechanisms</h4> DAP4 will need to support at least the three classes of annotations described above. Whatever annotation mechanisms are chosen, the following properties seem desirable. <ul> <li>The resulting URL should be parseable by generic URL parsers =>Client commands should be embedded at the end of URLs, not the beginning. </li><li>Whatever annotation encoding is used, it is desirable if it is as uniform as possible. </li></ul> As mechanisms, we have the following available to us: <ul> <li> The URL schema -- "http:" for example, or the TDS "dods:" schema. Using this is somewhat undesirable because it would need to encode also an underlying encrypted protocol like https: (versus http:). <p> </p></li><li> URL path elements such as the current use of e.g.... by TDS. <p> </p></li><li> URL query -- everything after the first '?' in the URL. URL queries technically have a defined form as name=value pairs, but in practice are pretty much free form. <p> </p></li><li> URL fragment -- everything after the last '#'. Again these are pretty much free form. <p> </p></li><li> Filename extensions -- everything between the data set name in the path and the start of the query. The DAP2 ".dds" and ".dods" are examples of this. <p> </p></li><li>Alternate extension formats. Ethan Davis has proposed the use of a "+" notation instead of filename extensions: "+ddx+ascii", for example. This has the advantage of clearly not being confused with filename extensions while also making clear the additive nature of such annotation. </li></ul> I should note that the Ferret server has taken to seriously abusing the URL format with URLs like this. 
{}{let deq1ubar=u[d=1,l=1:24@ave]}

so we have much to aspire to :-)

DAP4 Commentary: The on-the-wire format Dennis Heimbigner 2012-03-27T15:12:33-06:00 2012-03-27T15:12:33-06:00

Background

The current DAP2 clients use two different approaches to managing the packet of data that is sent by the server.

[...]

In contrast, the oc library uses a "lazy" evaluation method. That is, the incoming packet is sent immediately into a file or into a chunk of heap memory. Almost no preprocessing occurs. Data extraction occurs only when requested by the user code through the API.

Problem addressed

The relative merits and demerits of lazy versus eager are well known and will not be repeated here. Lazy evaluation of the DAP2 packet is hampered by the inlining of variable length data: sequences and strings specifically. If it were not for those, the lazy evaluator could compute directly the location of the desired subset of data as requested by the user, and do so without having to read any intermediate information. But when, for example, Strings are inlined, then it is necessary to walk the packet piece by piece to step over the strings. I plan to use lazy evaluation for my implementations of DAP4, and propose here the outline of a format for the on-the-wire data packet that makes lazy operation fast and simple without, I believe, interfering with eager evaluation.

Proposed solution

Since we have previously agreed on the use of multipart-mime, the incoming data is presumed to be a sequence of variable length packets with a known length (for each packet) and a unique id for each packet.

Under these assumptions, I propose the following format.

1. [...]
2. Each element in a string array in the initial packet is represented by three pieces of fixed size info:
   a. the unique id of the packet containing the contents of the string,
   b. the offset in the packet defined in (a),
   c. the length of the string in bytes (assuming utf-8 encoding).
3. As an optimization, the string packet can be directly appended to the fixed size initial packet, in which case the first item is not strictly necessary.
4. Given a sequence object, either as a scalar or as an array of sequences, the sequence is replaced by the following fixed size item:
   - the unique id of the packet containing the sequence records.
5. [...]

Rationale for the solution

The above representation makes lazy evaluation very simple, and a given item in a packet can be reached in o(1) time. Even in the case of nested sequences/vlens, the proper item can be reached in o(log n).

Updates

2012-02-20: The above encoding has as one consequence that all embedded counts that currently exist in DAP2 are superfluous. Ditto for the sequence record markers. It may still be desirable to include the counts for purposes of error checking, but they are not strictly necessary.

First two weeks! Sean Arms 2011-06-07T14:27:15-06:00 2011-06-09T09:06:16-06:00

Community Driven: Sure, Unidata has Policy and Users committees, so they must care about what the community thinks, right? Right?!? 
I mean, I was there – I felt like they cared what the Users Committee thought while we were there, in town, in person.

[Image taken from Jeff Weber's t-shirt today! Talk the talk, walk the walk, and wear the schwag!]

Sure, Unidata has Policy and Users committees, so they must care about what the community thinks, right? Right?!? I mean, I was there – I felt like they cared what the Users Committee thought while we were there, in town, in person.

Observation 1: [...] an unbelievably huge deal!

Observation 2: [...] I could have been told those things.

Observation 3: [...]

1. Contact those awesome individuals serving on our governing committees (maybe even consider serving!) (../../../community/index.html#governance)!
2. Want to get involved with Unidata but don't have the financial resources in your department? Well have we the deal for you: Unidata Equipment Awards! (../../../community/equipaward/)
3. Check us out on Twitter or Facebook.
4. Subscribe to our "News from Unidata" blog (../../news/feed/entries/atom) or the "Developer's Blog" (../../developer/) (both are open for comments, and comments - we love 'em).
5. Subscribe to our users mailing lists and participate (../../../support/#mailinglists)!
6. If you're in Boulder for a workshop or meeting, come say hello (../../../about/index.html#visit)!
7. Visit us at the Fall AGU or Annual AMS meetings (../../../events/index.html#conferences).
8. Hang out with us virtually and see what's going on in the community by checking out a seminar through the Unidata Seminar Series (../../../community/seminars/).
9. Consider attending a training workshop (../../../events/2011TrainingWorkshop/).
10. Consider attending the 2012 Triennial Workshop (organized by the Users Committee and Unidata!). More details to come, but see: ../../../events/index.html#triennial.

Which of these do you use? Which are most appealing to you? What can we do to better keep those lines of communication open?
|
https://www.unidata.ucar.edu/blogs/developer/en/feed/entries/atom?cat=Unidata
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
For every planet movie clip in your library you will have to set Linkage parameters as the image shows. For the class parameter, use different names. I have only two planets in this example, but you can use more if you like. My planet classes are PlanetSmall and PlanetBig.
Create a new ActionScript file in the same folder as your StarSystem.fla, name it StarSystem.as, and open it for editing. Now the code.
First we import some libraries:
package
{
import flash.display.Sprite;
import flash.events.Event;
import flash.display.Stage;
We define our class:
public class StarSystem extends Sprite {
private var p1:PlanetSmall;
private var p2:PlanetBig;
Next we define two variables to represent the center of the stage:
private var xCenter:Number = stage.stageWidth/2;
private var yCenter:Number = stage.stageHeight/2;
private var angle:Number = 20;
private var xRadius:Number = 250;
private var yRadius:Number = 150;
private var speed:Number = .008;
private var _angle:Number = -45;
private var _xRadius:Number = 100;
private var _yRadius:Number = 65;
private var _speed:Number = .01;
All these variables are values you will need to experiment with yourself to find the right numbers.
Then we define the main method,
public function StarSystem() {
init();
}
Note how the main method must be declared as public, not private. The init function goes like this:
private function init():void {
p1 = new PlanetSmall();
addChild(p1);
p2 = new PlanetBig();
addChild(p2);
addEventListener(Event.ENTER_FRAME, onEnterFrame);
}
public function onEnterFrame(event:Event):void {
p1.x = xCenter + Math.sin(angle) * xRadius;
p1.y = yCenter + Math.cos(angle) * yRadius;
p1.scaleX = p1.scaleY = p1.y/400;
p1.alpha = p1.scaleX*2;
angle += speed;
p2.x = xCenter + Math.sin(_angle) * _xRadius;
p2.y = yCenter + Math.cos(_angle) * _yRadius;
p2.scaleX = p2.scaleY = p2.y/300;
p2.alpha = p2.scaleX*1.2;
_angle += _speed;
}
}
}
We also set alpha as a function of scale, which is a nice trick to use. Feel free to play with these numbers and see what suits your needs. The last line of the code changes the angle of each planet as a function of the defined speed, and this happens on every new frame. Test your star system.
I hope you will make better looking rotating planets, some supernovas, stars and galaxies in the background, an animated Sun, maybe a starship, or even planets with moons orbiting around them! When you do that, send me the link; I would like to see your creation.
Thanks for your time.
*_*
|
http://flanture.blogspot.com/2009_08_01_archive.html
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
15 comments:
Thank you for the good doc. Really useful and handy post.
Thank you for the post.
I want to take encrypted backup of existing db which is not encrypted.
I have generate keystore
gsk8capicmd -keydb -create -db db2pwstore.p12 -pw Str0ngPassw0rd -strong -type pkcs12 -stash
then update the instance variables
db2 update dbm cfg using keystore_type pkcs12 keystore_location /home/dbinst1/db2pwstore.p12
Now when I issue command
db2 backup database testdb encrypt encrlib 'libdb2encr.a'
SQL2062N An error occurred while accessing media "libdb2encr.a". Reason code: "1".
Masheed, what operating system do you have? It could be that you need to specify "libdb2encr.so". If the backup needs to be compressed, libdb2compr_encr should be used.
Henrik
Its AIX 7.1.
If I want to take an encrypted backup of an existing database, will I have to drop and then create the database with the encrypt option? Or is there any option to take an encrypted backup without dropping/creating the database?
On AIX you need to use libdb2encr.a or libdb2compr_encr.a. You can create an encrypted backup of a non-encrypted database. No need to recreate the database.
Henrik
As per the post by "Ember Crooks"
for an existing database we have to backup, drop & restore the database with the encrypt option. Then the backup will be encrypted.
No, Ember is not saying that. She explains the currently only way to transform an unencrypted into an encrypted database. If you only want to encrypt a backup, you can do it by using the new encrypt options and libraries. You need to be on DB2 10.5 fixpack 5 or higher.
Henrik
Henrik, We are on 10.5.0.5 and db is licensed with encryption feature.
I am getting the following when taking backup of existing non-encrypt database on AIX 7.1
db2 backup database testdb encrypt encrlib 'libdb2encr.a'
SQL2062N An error occurred while assessing media "libdb2encr.a". Reason code: "1"
This looks like a problem with your system and the library not found (it is under sqllib/lib*). You could use "find" to try to locate it, check the library path settings, etc. If that didn't help, contact the IBM support or, for more ideas, ask on Stackoverflow.
Henrik
$ find / -name *db2encr* 2> /dev/null give me the below result
/opt/ibm/db2/V10.5/lib64/libdb2encr.a
I try to backup database with the full path, but same error.
db2 backup database testdb encrypt encrlib '/opt/ibm/db2/V10.5/lib64/libdb2encr.a'
SQL2062N An error occurred while assessing media "libdb2encr.a". Reason code: "1"
Henrik, the same command work on encrypted database. It takes encrypted backup.
db2 backup database testdb encrypt encrlib 'libdb2encr.a'
While it gives an error on non encrypted database.
Did you follow these steps...?
Henrik
I am also facing same issue. any thoughts?
What issues? Did you follow the steps?
Henrik
masheed ullah, I get this error:
SQL2062N An error occurred while assessing media "libdb2encr.a". Reason code: "1"
It happened to me in the next scenario:
Export the database encryption keys
Delete the keystore files
Create the keystore again
Recreate the instance and configure the keystore
import the exported keys to the keystore
Recreate the database from a backup
the backup is not encrypted but the database is.
If I don't delete the keystore and restore the database from the backup, then it works.
|
http://blog.4loeser.net/2015/01/plain-and-clear-native-encryption-in-db2.html?showComment=1486021010813
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
I'm attempting to compile my Flex Gumbo project against recent nightly Flex SDK builds and am getting the following compiler error:
SimpleMotionPath is defined more than once in this namespace Remove the mapping to spark.effects.animation:SimpleMotionPath or spark.effects:SimpleMotionPath
This is occurring with builds 8909 and 8974 - these are the ones I've tested with.
Any suggestions on the cause as I'm not using SimpleMotionPath class anywhere in my code base. I am using custom FXG spark skins though...
SimpleMotionPath moved packages to spark.effects.animation back around the Beta release time in June. It seems like your skins were perhaps built using the former sdk and you're now trying to build your project with these outdated skins.
It's not clear to me whether you just have outdated builds for these skins (in which case I would think a 'clean' would suffice) or whether there's actually source in the skins that refers to the wrong package. Regardless, you might want to search your skins for references to SimpleMotionPath to make sure the imports are correct.
Chet.
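Not part of the original thread, but here is a small illustrative sketch (in Python, with a hypothetical src/skins source folder) of the check Chet suggests: scanning skin sources for references to the old SimpleMotionPath package location named in the compiler error.

import os

OLD = 'spark.effects.SimpleMotionPath'            # pre-Beta package location from the error
NEW = 'spark.effects.animation.SimpleMotionPath'  # current package location

for root, _, files in os.walk('src/skins'):       # hypothetical skin source folder
    for name in files:
        if not name.endswith(('.as', '.mxml')):
            continue
        path = os.path.join(root, name)
        with open(path, encoding='utf-8', errors='ignore') as f:
            for lineno, line in enumerate(f, 1):
                if OLD in line:
                    print('{}:{}: references {} (should be {})'.format(path, lineno, OLD, NEW))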
Hi Chet
It turned out that the out-of-date classpath to SimpleMotionPath was referenced by one of the effects SWC libraries I'm utilizing.
Thanks
Nick
|
https://forums.adobe.com/thread/471579
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Fast color change animation of a shape
I'd like to animate color changes for many rectangles at once in response to touch events. In particular, I'd like a large number of rectangles (~1000) to all be transitioning among hues in a spectrum from blue to yellow.
As a first attempt, I created a ShapeNode for each rectangle and changed each one's fill_color to a new value on every touch_moved. The program runs, but only at 2 FPS on an iPad Pro.
I suspect there's a fast way to do this, perhaps using a Shader. I couldn't figure out how to instantiate a Shader just for the purpose of animating color changes.
My code is below, but my specific question is how I would make a rectangle gradually transition between two hues (e.g. blue to yellow) in a performant way, so that I can have many such rectangles. Thanks!
from scene import *
from ui import Path
import math
import random
from colorsys import hsv_to_rgb

YELLOW = 1/6
BLUE = 2/3
S = 24

class Grid(Scene):
    def setup(self):
        width = int(self.bounds.width) // S
        height = int(self.bounds.height) // S
        self.cells = [Cell(i, j, parent=self) for i in range(width) for j in range(height)]

    def touch_moved(self, touch):
        for cell in self.cells:
            cell.touched += S/abs(cell.frame.center() - touch.location)**2
            cell.update()

class Cell(ShapeNode):
    def __init__(self, i, j, **vargs):
        self.touched = 0
        super().__init__(Path.rect(0, 0, S, S), 'white', position=(i*S, j*S), **vargs)

    def cell_color(self):
        scale = min(1, self.touched)
        hue = scale * YELLOW + (1-scale) * BLUE
        return hsv_to_rgb(hue, 1, 1)

    def update(self):
        self.fill_color = self.cell_color()

if __name__ == '__main__':
    run(Grid(), show_fps=True)
The following code creates a rectangular grid of 64 cells (8*8) with different colors. If you touch a cell, it changes the color. I hope this gives you an idea of how to solve your problem with shader. The following online book explains how to create such shaders.
import scene, ui

shader_text = '''
precision highp float;
varying vec2 v_tex_coord;
uniform sampler2D u_texture;
uniform float u_time;
uniform vec2 u_sprite_size;
uniform float u_scale;
uniform vec2 u_offset;

void main(void) {
    vec2 uv = mod(v_tex_coord, .125);
    vec2 invuv = uv*8.;
    vec2 pq = floor(v_tex_coord*8.0)/8.;
    //vec4 color = texture2D(u_texture, invuv);
    vec2 t = floor(u_offset*8.0/(u_sprite_size))/8.;
    float r = 0.;
    if ((pq.x == t.x) && (pq.y == t.y)) r = 1.0;
    vec4 color = vec4(pq.x, pq.y, r, 1.);
    gl_FragColor = color;
}
'''

class MyScene (scene.Scene):
    def setup(self):
        tile_texture = scene.Texture(ui.Image.named('Snake'))
        self.sprite = scene.SpriteNode(
            tile_texture,
            size=(600, 600),
            anchor_point=(0, 0),
            parent=self)
        self.sprite.shader = scene.Shader(shader_text)
        self.sprite.position = (100, 100)

    def touch_began(self, touch):
        self.set_touch_position(touch)
        print(self.sprite.shader.get_uniform('u_offset'))

    def set_touch_position(self, touch):
        dx, dy = touch.location - self.sprite.position
        self.sprite.shader.set_uniform('u_offset', (dx, dy))

scene.run(MyScene(), show_fps=True)
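As a side note (not from the original thread), the hue blend itself is cheap; here is a minimal plain-Python sketch using colorsys, as in the question's code, isolating the blue-to-yellow interpolation:

from colorsys import hsv_to_rgb

YELLOW = 1/6   # hue of yellow
BLUE = 2/3     # hue of blue

def blend(t):
    # t = 0 gives blue, t = 1 gives yellow, values in between blend linearly
    hue = t * YELLOW + (1 - t) * BLUE
    return hsv_to_rgb(hue, 1, 1)

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, blend(t))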
|
https://forum.omz-software.com/topic/3274/fast-color-change-animation-of-a-shape
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Microsoft.VisualStudio.TestTools.Execution Namespace
Visual Studio 2010
The Microsoft.VisualStudio.TestTools.Execution namespace provides classes and interfaces that enable, manage, and coordinate the execution of tests in Visual Studio Test Professional. This namespace includes the IDataCollector interface that you would use to create custom diagnostic data adapters to automatically execute tasks within test runs, and the ITestExecutionEnvironmentSpecifier which enables you to specify the environment settings for tests on remote machines.
|
https://msdn.microsoft.com/en-us/library/microsoft.visualstudio.testtools.execution(v=vs.100).aspx
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
30 January 2010
How to migrate to ActionScript3.0 without losing your soul
20 January 2010
Understanding shearing using Matrix object
Any display object in an AS3.0 Flash project has a transform property, and among other things you can use this property to shear (skew) that object. An easy way to do this is to use the Matrix object. Now, the Matrix object isn't so difficult to understand, although it is scary to some (for unexplainable reasons). All you have to do is understand its parameters and what they do. Here is a simple list.
Matrix(a, b, c, d, tx, ty)
a - x scale of display object
b - y shearing of display object
c - x shearing of display object
d - y scale of display object
tx - x translation
ty - y translation
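To make the roles of b and c concrete, here is a minimal sketch (plain Python rather than ActionScript) of the affine mapping that Matrix(a, b, c, d, tx, ty) applies to a point:

def apply_matrix(a, b, c, d, tx, ty, x, y):
    # Flash's Matrix maps (x, y) to (a*x + c*y + tx, b*x + d*y + ty),
    # so c shears along the x axis and b shears along the y axis.
    return (a * x + c * y + tx, b * x + d * y + ty)

# Corners of a 100x100 box sheared like the green box below (b = 0.5):
for corner in [(0, 0), (100, 0), (0, 100), (100, 100)]:
    print(corner, '->', apply_matrix(1, 0.5, 0, 1, 50, 200, *corner))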
Let's add some Sprites and see how different parameter values change a display object. First create a new Flash AS3 project, open the Actions panel, and input these import statements.
import flash.display.Sprite;
import flash.geom.Matrix;
import flash.display.MovieClip;
import flash.events.Event;
We will create few boxes and place them on stage.
var box:Sprite = new Sprite();
box.graphics.lineStyle(1, 0x0000ff); // blue object
box.graphics.drawRect(0, 0, 100, 100);
addChild(box);
box.transform.matrix = new Matrix(1, 0, 0, 1, 50, 100);
var box1:Sprite = new Sprite();
box1.graphics.lineStyle(1, 0x00ff00); // green object
box1.graphics.drawRect(0, 0, 100, 100);
addChild(box1);
box1.transform.matrix = new Matrix(1, .5, 0, 1, 50, 200);
var box2:Sprite = new Sprite();
box2.graphics.lineStyle(1, 0xff0000); // red object
box2.graphics.drawRect(0, 0, 100, 100);
addChild(box2);
box2.transform.matrix = new Matrix(1, 0, .5, 1, 150, 100);
var box3:Sprite = new Sprite();
box3.graphics.lineStyle(1, 0x000000); // black border object
box3.graphics.drawRect(0, 0, 100, 100);
addChild(box3);
box3.transform.matrix = new Matrix(1, .5, .5, 1, 150, 200);
Take a look at how different values of the b and c parameters shear the display object along the x and y axes.
var gBox1:gBox = new gBox();
addChild(gBox1);
gBox1.x = 400;
gBox1.y = 150;
var gBox2:gBox = new gBox();
gBox1.x = 400;
gBox1.y = 150;
addChild(gBox2);
gBox2.transform.matrix = new Matrix(1, .5, .5, 1, 550, 300);
Here is the result.
You can even include the shearing effect inside an animation. Create another box Sprite and add a counter variable. This variable will track the amount of shearing. For testing purposes, use a simple onFrame function.
var box4:Sprite = new Sprite( );
box4.graphics.lineStyle(1, 0x000000);
box4.graphics.drawRect(0, 0, 100, 100);
addChild(box4);
box4.x = 500;
box4.y = 50;
var counter:Number = 0;
stage.addEventListener(Event.ENTER_FRAME, onFrame, false, 0, true);
function onFrame(evt:Event):void
{
if (counter > .98) {
counter = 0;
}else{
counter += 0.01;
box4.transform.matrix = new Matrix(1, 0, counter, 1, 0, 0);
}
}
That's all. Now you understand shearing using Matrix object. So do I.
*_*
12 January 2010
SearchArray AS3 Functions
By far the most downloaded file from this blog is the search array functions class, which has been downloaded more than 2050 times to date. Mattaka noticed in a comment that the union function is no different from the intersection function; actually, the function name is wrong - it is a function which returns the intersection of two arrays.
The class comes with usage examples. I'll try to add some new functions soon.
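For readers who want the distinction spelled out, here is a minimal sketch in Python (the original class is AS3) of what a union versus an intersection of two arrays returns:

def union(a, b):
    return list(dict.fromkeys(a + b))      # every element from either array, duplicates dropped

def intersection(a, b):
    seen = set(b)
    return [x for x in a if x in seen]     # only elements present in both arrays

print(union([1, 2, 3], [2, 3, 4]))         # [1, 2, 3, 4]
print(intersection([1, 2, 3], [2, 3, 4]))  # [2, 3]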
*_*
04 January 2010
HowTo create Flash video captions using Subtitle Workshop
My choice for creating Flash video captions is a simple workflow which uses Subtitle Workshop, a free subtitle editor. I'm using software version 2.51, which has support for 56 different formats but doesn't have support for the TT format (Timed Text), the W3C standard used for Flash video files. More about the Timed Text specifications.
However, you can very easily find this format support and implement it into Subtitle Workshop. Learn about integration. Bottom line is you only need single .cfp (custom format) file.
What you do after downloading this 1 Kb file is unzip it and move it to the ...\URUSoft\Subtitle Workshop\CustomFormats folder. Open your Subtitle Workshop software, go to File - New Subtitle and create a few dummy lines. When you need to save, choose Save as - custom formats and select (or load) your new TT format.
That's all there is to it; you are ready to make captions for your home-made videos. Multilingual support is available.
*_*
|
http://flanture.blogspot.com/2010_01_01_archive.html
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
How would one check an entire column in an SQLite database for a given value and, if it is found, set a variable to true?
Something like:
hasvalue = 'false'
cur = con.cursor()
cur.execute("SELECT id FROM column WHERE hasvalue = '1' LIMIT 1;")
***IF execute returned rows set hasvalue = true here***
if hasvalue == 'true':
do something
else:
dosomethingelse
I'm a little confused by your question. The SQL query suggests that you are using a table named column? If I can ignore that and assume you have a table named test, which has columns id (an int) and name (a string), then the query:

SELECT id FROM test where name = 'something'

would select all rows that have name set to the string 'something'.
In Python this would be:
cur = con.cursor()
cur.execute("SELECT id FROM test where name = 'something' LIMIT 1")
if cur.fetchone():
    do_something()
else:
    do_something_else()
The key here is to use cursor.fetchone(), which will try and retrieve a row from the cursor. If there are no rows, fetchone() will return None, and None evaluates to False when used as a condition in an if statement.
You could create a function to generalise:
def has_value(cursor, table, column, value):
    query = 'SELECT 1 from {} WHERE {} = ? LIMIT 1'.format(table, column)
    return cursor.execute(query, (value,)).fetchone() is not None

if has_value(cur, 'test', 'name', 'Ranga'):
    do_something()
else:
    do_something_else()
This code constructs a query for the given table, column, and value and returns True if there is at least one row with the required value; otherwise it returns False.
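A self-contained sketch tying the above together (using an in-memory SQLite database and made-up table data, so the rows here are illustrative only):

import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE test (id INTEGER PRIMARY KEY, name TEXT)')
con.executemany('INSERT INTO test (name) VALUES (?)', [('Ranga',), ('something',)])

def has_value(cursor, table, column, value):
    # Table and column names cannot be bound as parameters, so they are formatted in;
    # only do this with trusted identifiers.
    query = 'SELECT 1 FROM {} WHERE {} = ? LIMIT 1'.format(table, column)
    return cursor.execute(query, (value,)).fetchone() is not None

cur = con.cursor()
print(has_value(cur, 'test', 'name', 'Ranga'))    # True
print(has_value(cur, 'test', 'name', 'missing'))  # False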
|
https://codedump.io/share/9JmwmWxlRAOi/1/python-checking-sql-database-column-for-value
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Ok I got something Visual C++ is that alright
Some other possibilities
OoO OGRE is sweet
But the question is: where can I download it without having to run a bunch of stuff? Like, pre-compiled?
Well... What I mean is, where can I just do the graphics? Any pre-mades? Which one should I download? There's an SDK and another...
The info is all there on the site. I'm using blender's game engine at the moment but only cos i'm already versed in creating models and familar with the interface.
Well... I cant get this to work...
*Sigh*...
When you say "I can't get this to work" and expecting us to find the solution, that just doesn't help.
When you write a game, you usually create the 3D models with a 3D modelling program such as Blender. These are then saved for later use.
Then in the program's code, it loads the model from the file when it needs it, parses the data, and saves the resulting data into memory. This data is then used to draw the model to the screen.
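To make that load-parse-keep-in-memory step concrete, here is a small sketch (Python, hypothetical file name, covering only the simple Wavefront OBJ text format) of reading a model once and keeping it in lists that the draw loop can reuse:

def load_obj(path):
    vertices, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == 'v':        # vertex position line: v x y z
                vertices.append(tuple(float(p) for p in parts[1:4]))
            elif parts[0] == 'f':      # face line: f i j k (1-based vertex indices)
                faces.append(tuple(int(p.split('/')[0]) - 1 for p in parts[1:]))
    return vertices, faces

# model = load_obj('planet.obj')   # parsed once at load time, then drawn from memory each frame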
The OpenGL SDK will be installed in:
<compiler directory>/include/gl.h glu.h glut.h
<compiler directory>/libs/glut32.lib glu32.lib
As you may have noticed, GLUT is not part of the OpenGL SDK and may have to be downloaded separately; it's very helpful when creating OpenGL windows and handling other interface tasks.
The most important tip when compiling OpenGL apps is to make sure that the lib files (glut32.lib/glu32.lib) are linked with the project. Instructions for adding these files to the link list are compiler-specific, so look for a list of .lib files, and just add glu32.lib and glut32.lib to that list.
You just managed to create a "Hello World" program, and now you're trying to create an MMORPG? Be realistic. To write a game you need:
Be proficient in C/C++. This means you need to know how to write complex classes without running to a forum for help.
Learn a graphics library such as OpenGL or DirectX. Without them, you'd be forced to write your own graphics library just to draw something like a square. OpenGL and DirectX are a whole new can of worms, though.
To write a 3D game, you need to have good 3D math skills. You should be completely comfortable with vectors, matrices, and the like.
And the list goes on...
Don't want to discourage you or anything, but I'm just saying you're biting off a bit more than you can chew. Please, PLEASE learn C++ well, and then you can start learning OpenGL (and starting in OpenGL means 2D games, too).
You just managed to create a "Hello World" program, and now you're trying to create an MMORPG? Be realistic. To write a game you need:
Don't want to discourage you or anything, but I'm just saying you're biting off a bit more than you can chew. Please, PLEASE learn C++ well, and then you can start learning OpenGL (and starting in OpenGL means 2D games, too).
Exactly....
I mastered VB for concepts, then C++ console apps, then I learnt basic Win32/MFC, then I got a book on DirectX game making which came with tutorials, modelling software and an IDE.
I agree with you completely.
Not many people fully understand what is meant by "Rome was not built in a day".
Before you run, you must learn to crawl.
I started off with basic C programs, OpenGL, DirectX, then third party game engines ( Irrlicht ), modelling tools like Milkshape 3d and animation tools like character FX.
When you say "I can't get this to work" and expecting us to find the solution, that just doesn't help.
;)
Of course, you could try to build a game that doesn't use any high - res graphics. Ever heard of ZZT? That's what I'd do if I wanted to build a game for a hobby. Why? Because I'm too lazy to even try to learn all that 3D programming bumf :p .
Steven.
:eek:
You need to learn to crawl first, my man. I don't think you ought to be concerned yet with writing a game as much as you should be learning how to read and write code. Don't worry about downloading anything yet. Hit Amazon.com and start buying C++ and OpenGL books. If you're going to be using Visual Studio as your dev environment, check out a book on that as well. Speaking of "checking out", you could even go to your local library and take a look at their computer book section.
If you can't get some basic stuff to compile, you're never going to get anywhere with a pre-fab download framework. You've gotta learn what's going on under the hood first.
Best of luck to ya!
btw, where can i learn programming for java mobile games or such. i need some idea, i wanted to use J2ME Tech & WAP/WML anyway, I don't have any aim here, I just need a title for my final, then i'll do it with those tech.. anyone got an idea? please tell me.... i'm blank of idea...
_____________________________________
send me something something to:
Haha. A stroke of genius.
A funny thread.
This thread made me guffaw. I mean seriously? Does he feel he's THAT smart?
(doesn't seem so to me)
Hi, I agree completely: learn 3D graphics and C++ before you even attempt to create a simple game, but in the long run developing a unit such as WOW would take a single designer around 2 years. I specialize in HTML and web design, and I mostly use PHP and Javascript, so if you ever need a hand just ask :D Anyway, if you want a full video tutorial, download it from a BitTorrent site like BTjunkie, or Lynda.com
developing a unit such as WOW would take a single designer around 2 years.
More like 200.
Wow took a few years to make, with many excellent coders
Well buddy, game programming is really complex and really difficult; if you are able to do it ... it will be very good for ya...
#include <SFML/Graphics.hpp>
#include <time.h>
#include "Connector.hpp"
using namespace sf;
int size = 56;
Vector2f offset(28,28);
Sprite f[32];
std::string position="";
int board[8][8] =
{-1,-2,-3,-4,-5,-3,-2,-1,
-6,-6,-6,-6,-6,-6,-6,-6,
0, 0, 0, 0, 0, 0, ...
------------------------------------------------------------------ ...
|
https://www.daniweb.com/programming/game-development/threads/60350/creating-a-good-game/2
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
We'll write two small programs in Python: a producer (sender) that sends a single message, and a consumer (receiver) that receives messages and prints them out. It's the "Hello World" of messaging.
In the diagram below, "P" is our producer and "C" is our consumer. The box in the middle is a queue - a message buffer that RabbitMQ keeps on behalf of the consumer. We'll use Pika 0.11.0, which is the Python client recommended by the RabbitMQ team. To install it you can use the pip package management tool.
Our first program send.py will send a single message to the queue. The first thing we need to do is to establish a connection with RabbitMQ server.
#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
We're connected now, to a broker on the local machine - hence the localhost. If we wanted to connect to a broker on a different machine we'd simply specify its name or IP address here.
Next, before sending we need to make sure the recipient queue exists. If we send a message to non-existing location, RabbitMQ will just drop the message. Let's create a hello queue to which the message will be delivered:
channel.queue_declare(queue='hello')
At this point we're ready to send a message to our hello queue:
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print(" [x] Sent 'Hello World!'")
Before exiting the program we need to make sure the network buffers were flushed and our message was actually delivered to RabbitMQ. We can do it by gently closing the connection.
connection.close()

Our second program, receive.py, will receive messages from the queue and print them on the screen. Again we need to connect to RabbitMQ and, just like before, make sure that the queue exists. Creating a queue using queue_declare is idempotent ‒ we can run the command as many times as we like, and only one will be created.
channel.queue_declare(queue='hello')
You may ask why we declare the queue again ‒ we have already declared it in our previous code. We could avoid that if we were sure that the queue already exists. For example if send.py program was run before. But we're not yet sure which program to run first. In such cases it's a good practice to repeat declaring the queue in both programs.
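A minimal sketch (reusing only the calls shown in this tutorial) of why that repetition is harmless - queue_declare is idempotent, so declaring the same queue twice still leaves exactly one queue:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.queue_declare(queue='hello')
channel.queue_declare(queue='hello')   # no error, still only one 'hello' queue

connection.close()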
Receiving messages from the queue is more complex. It works by subscribing a callback function to a queue. Whenever we receive a message, this callback function is called by the Pika library. In our case this function will print on the screen the contents of the message.
def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
Next, we need to tell RabbitMQ that this particular callback function should receive messages from our hello queue:
channel.basic_consume(callback, queue='hello', no_ack=True)
For that command to succeed we must be sure that a queue which we want to subscribe to exists. Fortunately we're confident about that ‒ we've created a queue above ‒ using queue_declare.
The no_ack parameter will be described later on.
And finally, we enter a never-ending loop that waits for data and runs callbacks whenever necessary.
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
Full code for send.py:
#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='hello')

channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()
Full receive.py code:
#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

Now we can try out our programs in a terminal. First, let's start a consumer, which will run continuously waiting for deliveries:
python receive.py
# => [*] Waiting for messages. To exit press CTRL+C
# => [x] Received 'Hello World!'
Now start the producer. The producer program will stop after every run:
python send.py
# => [x] Sent 'Hello World!'
Hurray! We were able to send our first message through RabbitMQ. As you might have noticed, the receive.py program doesn't exit. It will stay ready to receive further messages, and may be interrupted with Ctrl-C.
Try to run send.py again in a new terminal.
We've learned how to send and receive a message from a named queue. It's time to move on to part 2 and build a simple work queue.
|
https://www.rabbitmq.com/tutorials/tutorial-one-python.html
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
How to convert a Taylor polynomial to a power series?
With Maple I can write
g := 2/(1+x+sqrt((1+x)*(1-3*x)));
t := taylor(g,x=0,6);
coeffs(convert(t,polynom));
and get
1, 1, 1, 3, 6
Trying to do the same with Sage I tried
var('x')
g = 2/(1+x+sqrt((1+x)*(1-3*x)))
taylor(g, x, 0, n)
and get
NotImplementedError Wrong arguments passed to taylor. See taylor? for more details.
I could not find the details I am missing by typing 'taylor?'. Then I tried
g = 2/(1+x+sqrt((1+x)*(1-3*x)))
def T(g, n):
    return taylor(g, x, 0, n)
T(g, 5)
and got
6*x^5 + 3*x^4 + x^3 + x^2 + O(0) + 1
which is almost what I want (although I fail to understand this 'workaround').
But when I tried next to convert this Taylor polynomial to a power series
g = 2/(1+x+sqrt((1+x)*(1-3*x)))
def T(g, n):
    return taylor(g, x, 0, n)
w = T(g, 5)
R.<x> = QQ[[]]
R(w).polynomial().padded_list(5)
I got the error
TypeError: unable to convert O(0) to a rational
The question: How can I convert the Taylor polynomial of 2/(1+x+sqrt((1+x)(1-3x))) to a power series and then extract the coefficients?
Solution ??: With the help of the answer of calc314 below (but note that I am not using 'series') the best solution so far seems to be:
var('x')
n = 5
g = 2/(1+x+sqrt((1+x)*(1-3*x)))
p = taylor(g, x, 0, n).truncate()
print p, p.parent()
x = PowerSeriesRing(QQ,'x').gen()
R.<x> = QQ[[]]
P = R(p)
print P, P.parent()
P.padded_list(n)
which gives
6*x^5 + 3*x^4 + x^3 + x^2 + 1 Symbolic Ring 1 + x^2 + x^3 + 3*x^4 + 6*x^5 Power Series Ring in x over Rational Field [1, 0, 1, 1, 3]
Two minutes later I wanted to wrap things in a function, making 'n' and 'g' parameters.
def GF(g, n):
    x = SR.var('x')
    p = taylor(g, x, 0, n).truncate()
    print p, p.parent()
    x = PowerSeriesRing(QQ,'x').gen()
    R.<x> = QQ[[]]
    P = R(p)
    print P, P.parent()
    return P.padded_list(n)
Now what do you think
gf = 2/(1+x+sqrt((1+x)*(1-3*x))) print GF(gf, 5)
gives?
TypeError: unable to convert O(x^20) to a rational
Round 3, but only small progress:
tmonteil writes in his answer below: "the lines x = SR.var('x') and x = PowerSeriesRing(QQ,'x').gen() have no effect on the rest of the computation, and could be safely removed".
This does not work for me: if I do not keep the line x = SR.var('x') I get "UnboundLocalError: local variable 'x' referenced before assignment". But the line "x = PowerSeriesRing(QQ,'x').gen()" can be skipped. So I have now
def GF(g, n):
    x = SR.var('x')
    p = taylor(g, x, 0, n).truncate()
    print p, p.parent()
    R.<x> = QQ[[]]
    P = R(p)
    print P, P.parent()
    return P.padded_list(n)
With respect to conversion to nonsymbolic series, see
By the way I am using SageMathCloud which uses sage-6.3.beta6.
I updated my answer to make the use of g.variables()[0] more explicit regarding your round 3.
Thanks tmonteil. But when I write p = taylor(g, g.variables()[0], 0, n).truncate() I get: 'sage.rings.power_series_poly.PowerSeries_poly' object has no attribute 'variables'. I give up now and think that rws in his comment above is right: there is a defect somewhere.
I do not see any defect. Please read my answer below for a detailed explanation. You got this answer because, at the time you type g.variables()[0] , g is not a symbolic expression but a power series. You should understand that when you define g = 2/(1+x+sqrt((1+x)*(1-3*x))), the nature of g (symbolic expression, power series,...) depends on the nature of x (symbolic expression, power series,...) at the same time. Please do not hesitate to ask if something is still not clear.
|
https://ask.sagemath.org/question/24777/how-to-convert-a-taylor-polynomial-to-a-power-series/
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
On Sat, Jul 27, 2002 at 12:26:53PM +0100, Thomas Leonard wrote: > On Fri, Jul 26, 2002 at 09:27:34PM +0200, Filip Van Raemdonck wrote: > >. Also, I'm not sure if the part describing the use of the mime database to tell which programs can open what mime type belongs in that section. Shouldn't that rather go in the section describing what the xml files are? > > Next, I haven't seen any indication as to which file takes precedence when > > two or more in the same directory provide the same information, only for > > when they are in different directories or if one of them is Override.xml. > >? Or they don't agree about the magic. This may not be the best example since the word document most likely is in the shared database already, but the same can (and eventually will) happen with some new file type. Regards, Filip -- /* Amuse the user. */ \|/ ____ \|/ "@'/ ,. \`@" /_| \__/ |_\ \__U_/ -- /usr/src/linux-2.4.2/arch/sparc/kernel/traps.c::die_if_kernel()
--- shared-mime-info-0.8/shared-mime-info-spec.xml.orig +++ shared-mime-info-0.8/shared-mime-info-spec.xml @@ -296,7 +296,7 @@ Further, the existing databases have been merged into a single package <citation>SharedMIME</citation>. </para> - <sect2> + <sect2 id="s2_layout"> <title>Directory layout</title> <para> There are two important requirements for the way the MIME database is stored: @@ -567,14 +567,17 @@ </para> </sect2> <sect2> - <title>User preferences</title> + <title>User modification</title> <para> -The MIME database is NOT intended to store user preferences. Although users can edit the database, -this is only to provide corrections and to allow them to install software themselves. Information such -as "text/html files should be opened with Mozilla" should NOT go in the database. However, it may be -used to store static information, such as "Mozilla can view text/html files", -and even information such as "Galeon is the GNOME default text/html browser" (via an extension element -with a GNOME namespace). +The MIME database is NOT intended to store user preferences. Users should never +edit the database. If they wish to make corrections or provide MIME entries for +software that doesn't provide these itself, they should do so by means of the +Override.xml mentioned in <xref linkend="s2_layout" />. Information such as +"text/html files need to be opened with Mozilla" should NOT go in the database. +However, using extension elements introduced by additional namespaces (like a +GNOME namespace), the database may be used to store static information, such as +"Mozilla can view text/html files", and even information such as "Galeon is the +GNOME default text/html browser". </para> </sect2> </sect1>
|
https://listman.redhat.com/archives/xdg-list/2002-July/msg00086.html
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
I am trying to locate a specific .xml file with glob. But the path comes from an object, and it doesn't seem to work.
I followed this example: Python: How can I find all files with a particular extension?
The code is this:
import glob
ren_folder = 'D:\Sentinel\D\S2A_OPER_PRD_MSIL1C_PDMC_20160710T162701'
glob.glob(ren_folder+'/'s*.xml)
SyntaxError: invalid syntax
glob.glob(ren_folder+'/'s*.xml)
You are closing the flie name string prematurely, it should be:
glob.glob(ren_folder+'/s*.xml')
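For completeness, a small usage sketch (same folder as in the question) - a raw string keeps the Windows backslashes literal and os.path.join avoids hand-building the pattern:

import glob
import os

ren_folder = r'D:\Sentinel\D\S2A_OPER_PRD_MSIL1C_PDMC_20160710T162701'
xml_files = glob.glob(os.path.join(ren_folder, 's*.xml'))
print(xml_files)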
|
https://codedump.io/share/H698LKvK5NTr/1/get-specific-file-with-glob
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Module development – Bindings Martin Hejtmanek — Jan 9, 2015 modulerelationshipapi and internals During custom development, you sometimes just need to create relationships between objects, rather than defining a child object with a full set of data fields. This article will lead you through the options you have when building M:N relationships in Kentico Hi there, Welcome to a new year and let me continue with my module development article series. We are still just at the beginning of all the great features Kentico can provide in this regard. I have already covered the following topics in this module development series of articles: Defining custom classes and the management UI Leveraging and proper configuration of foreign keys Building parent-child hierarchy in your data Today, I would like to discuss the building of M:N relationships, which we call Bindings in Kentico. I will be using the Binding terminology, because that is the name we use throughout the system. In the last articles, we built a list of Industries and Occupations and extended Contacts to have two properties pointing to these objects. Because both the industry and occupation of a person may change and we only have one field to store the information in a contact, it may be good to store the history of this information. Having the industry and occupation history could help when communicating with contacts during the sales or support process. Imagine that you have a former developer who for whatever unlikely reason switched their focus to be a volunteer nurse in a hospital. If this person contacts you for advice on how to solve a critical problem with a faulty CPR machine, having knowledge of their occupation history could even save a life – you could give advice on a proper technical level rather than just suggesting turning the machine off and on. Let’s build this support using Kentico and the customizations we already have from the previous articles. I am going to build a binding between contacts and industries, with the relationship meaning “this contact has worked in this industry”. Defining data and API We will need to store some data again, therefore we need to start by building a module class, as in the previous articles. While Kentico technically supports bindings without a primary key for historical reasons, the current best practice is to build bindings that have a regular primary key. This has several advantages: Better performance on SQL server due to an ever increasing clustered index based on a single primary key Ability to later add and manage additional data for a binding, such as the role expiration date that we provide within the membership module So start by creating a new class with just three fields – an ID as the primary key and two additional foreign keys. Each of the foreign keys must point to one of the sides that you want to bind together. You will eventually need to decide which of the sides will be the parent of the binding for general operations such as staging etc. The general rule of thumb is to select the side from which the number of bindings will be lower. The reason for this is the inclusion to parent data that I explained in my parent/child article. In our case, we have contacts on one side, and there can be many of them, let’s say tens of thousands. On the other side we have industries, and I expect no more than several tens of them. Now imagine how the complete parent data would look like in both scenarios. Always think about the worst case while planning for scalability and performance. 
If the parent object is industry and each contact has a binding to each industry, the complete data of an industry will contain several tens of thousands of records for individual contacts, and the system will need to regenerate this full set of data for every new binding, making staging tasks enormously big. If the parent object is contact and each contact has a binding to each industry, the contact data will not include more than several tens of records for all industries belonging to a given contact. When the bindings change, the new staging task for updating the contact will always stay at a sustainable size, not causing any significant overhead. In reality, there will be no more than a few industries assigned to each contact, which makes the data even smaller. So the choice here is clear, the parent object of our binding is going to be contact to keep the individual sets of parent data small enough. Note that if the previous exercise indicates an enormous amount of data on both sides, it may be better to set up synchronization using separate staging tasks for the binding itself, instead of the default inclusion to the parent data. This is however beyond the scope of this article. When defining your data, I suggest that you always use naming with the parent object first so that the hierarchy can clearly be recognized from the names in in your API, allowing developers to have a clear indication of what behavior to expect. For this reason, I will create a class named “MHM.ContactIndustry” (contact first) with fields: ContactIndustryID – integer, primary key (contact first) ContactID – integer, foreign key to contact, reference type Binding (contact first) IndustryID – integer, foreign key to industry, reference type Binding Note that in this case I didn’t use prefixes such as “ContactIndustryContactID”, because it would make the API too overwhelming. I just used the regular names of the target primary keys, which is completely OK in this case, as it is unlikely that we will need to provide just these IDs in a database join with the target objects. Generate the API for ContactIndustryInfo and the corresponding provider on the Code tab. If you look at the code of the generated provider, you can see that the system recognized the binding foreign keys, and generated methods for getting objects by both contact and industry IDs. Because our object has a regular primary key, we need to tell the system that our object type is a binding. Set the IsBinding property of its type info to true as shown in this example: public class ContactIndustryInfo : AbstractInfo<ContactIndustryInfo> { ... public static ObjectTypeInfo TYPEINFO = new ObjectTypeInfo(...) { ... IsBinding = true }; } Bindings are different from regular objects in this way, because you work with them using the target foreign keys that bind objects together, not the primary key. The primary key is only used internally to update potential binding data if needed. The system also automatically ensures that only one binding between two specific objects exists. Even if you attempt to create multiple bindings between two objects, the result is only one. The code generator also automatically sets the parent object based on the first binding foreign key, and defines the second one as a foreign key. As I mentioned in my foreign keys article, the binding configuration in the field editor is just used for code generation, so if you are not happy with it later, you can redefine everything directly in the info code. 
Notice that the generated code by default uses string constants such as “om.contact” and “mhm.industry”. If your code has a reference to the corresponding libraries and you want to have the code cleaner (and more upgrade-proof), you can use ContactInfo.OBJECT_TYPE and IndustryInfo.OBJECT_TYPE constants instead. Creating a UI for bindings Like other general pages, even binding editing pages can be easily built using a predefined UI template. I am going to show you how to create a UI on both sides. We will create the following tabs: Industries tab in Contact properties with a UI element called ContactIndustries Contacts tab in Industry properties with a UI element called IndustryContacts We already have tabs in both locations, so it will be very easy. If you skipped the parent/child article and are not sure how to set up tabs, please read it first and create the tabs. In both cases, create a new UI element under the tabs using the “Edit bindings” page template. Now navigate to the Properties tab and select our new binding object type in the Binding object type property. Like in the previous article, the object type name is not localized by default, and you need to provide the localization. Note that if you don’t see your object type in that listing, you probably didn’t set the IsBinding property or didn’t recompile your web application project. You also need to set a condition for the parent object as we did when we built the parent/child relationship. As I explained in that article, bindings are technically just a special kind of child object, so the same rules that apply to child objects apply to bindings. Set the where condition to the following: IndustryID = {% ToInt(UIContext.ObjectID) %} Note that the target object type is recognized automatically, so we don’t need to set it. It is simply the side of the binding opposite from the currently edited object. You would only need to manually set it in more complex scenarios where the system could be confused by the context settings. If you follow my hierarchy guidelines, you don’t need to worry about that. To make the resulting UI easier to understand, also set the “List label” property. This text appears above the binding listing to explain to users what the purpose of the page is. I used the following text: “The following contacts have worked in this industry:” The resulting UI displays the following, and we are now able to manage the bindings. One thing I would like to mention is that the editing UI you see is a Uni selector control in multiple selection mode. You can see that it currently only displays the last name of contacts, since that is the display name column of the contact object type. That is not very convenient in this case. I will show you in my next article how you can leverage extenders to customize UI page templates. Let’s now set up the UI from the other side. Repeat the previous steps for the UI element on the other side. To summarize: Create the UI element ContactIndustries under the contact property tabs. Configure it to use the “Edit bindings” page template. On the Properties tab, select “Contact industry” as the binding object type. Set the where condition to restrict the listing based on the parent object. Optionally provide a listing label to explain the context to the user. 
In this case my where condition is the following: ContactID = {% ToInt(UIContext.ObjectID) %} And I used the following text to explain the content of my new tab: “This contact has worked in the following industries:” The result is the following UI that lets me easily manage the industries of contacts. My marketers can now also update this information manually based on phone conversation or other communication with clients. The choice on which side of the relationship you provide the editing interface is yours, you have full control over it. Imagine typical scenarios that you will need to cover, and decide based on that, or simply based on specific requirements from your clients. Automatic population of the relationship I mentioned that we want to keep the history of contact industries at the beginning of this article. Keeping track of the history manually would be too complicated, so we are going to write a piece of code that will handle that for us automatically. We will leverage object event handlers. I mentioned them in my article about handling foreign keys. Start by creating a new class which will represent our module and define its initialization. I will create it as ~/AppCode/CMSModules/MHM.ContactManagement/MHMContactManagementModule.cs. Here is my code: using CMS; using CMS.DataEngine; using CMS.Helpers; using CMS.OnlineMarketing; using MHM.ContactManagement; [assembly: RegisterModule(typeof(MHMContactManagementModule))] namespace MHM.ContactManagement { /// <summary> /// MHM Contact management module entry /// </summary> public class MHMContactManagementModule : Module { /// <summary> /// Initializes module metadata /// </summary> public MHMContactManagementModule() : base("MHM.ContactManagement") { } /// <summary> /// Fires at the application start /// </summary> protected override void OnInit() { base.OnInit(); ContactInfo.TYPEINFO.Events.Insert.After += InsertOnAfter; ContactInfo.TYPEINFO.Events.Update.Before += UpdateOnBefore; } /// <summary> /// Ensures that a newly inserted object ensures its binding to industry /// </summary> private void InsertOnAfter(object sender, ObjectEventArgs e) { EnsureBinding((ContactInfo)e.Object); } /// <summary> /// Ensures that when a contact industry field changes, the system ensures proper corresponding binding /// </summary> private void UpdateOnBefore(object sender, ObjectEventArgs e) { var contact = (ContactInfo)e.Object; if (contact.ItemChanged("ContactIndustryID")) { e.CallWhenFinished(() => EnsureBinding(contact)); } } /// <summary> /// Ensures that the binding between the contact and its current industry exists /// </summary> private void EnsureBinding(ContactInfo contact) { var industryId = ValidationHelper.GetInteger(contact.GetValue("ContactIndustryID"), 0); if (industryId > 0) { var binding = new ContactIndustryInfo { ContactID = contact.ContactID, IndustryID = industryId }; binding.Insert(); } } } } Let me explain the code in more detail, in the order as the parts appear in the class code: The module class must inherit from the class CMS.DataEngine.Module and must be registered within the system using the RegisterModule assembly attribute. Do not forget to use the attribute, the application won’t know about the module code without it. The module name provided as metadata in the constructor must match the module code name defined when we registered the module. The module has two initialization methods: OnPreInit and OnInit. OnPreInit is always called, even for applications that aren’t yet connected to the database. 
OnInit is called right after the application connects to the database if the database is available. Both methods are called once at the application start and can provide module initialization code. In our case we are working with data, so it makes more sense to do these actions only when the database is available. That is why I chose OnInit. We attach two object event handlers, one after insert, and one before update of the object. These two event handlers ensure that the system creates corresponding bindings in the database for any industry references defined in our contacts. I chose after insert because that is the point where the contact is already saved and the data is consistent in the DB. I chose before update, because I perform the action based on detection of changes to the field “ContactIndustryID“ to keep maximum performance. This detection must always be done in the before handler. In the after handler, the object change status is already reset. I however perform the actual action after the update is finished using the CallWhenFinished method for the same reason as with the insert handler. Creating the actual binding is simple. You just create a new object with corresponding IDs and insert it. As I explained earlier, bindings have automatic detection of redundancy, so the system automatically performs an upsert operation to maintain only one such object. That is all. Once you have this code present in your application, the system will automatically maintain all industry history of your contacts in the form of M:N bindings. Bindings in macros and API I already mentioned in my parent/child article that bindings are available in a similar way as child objects. The only difference is that they are available in a collection named “Bindings” under their main parent: But also in a collection named “OtherBindings” from the other target object: The same rules as for child objects also apply for the regular API. Use the corresponding collections to access bindings or the binding info provider directly. Wrap up Today, you learned how to set up an editable M:N relationship between two object types. We went through: Creating a binding object type and its API Leveraging a predefined UI template to manage bindings Writing module initialization code Attaching event handlers to automatically maintain data Accessing bindings in macros and the API As I mentioned earlier in this article, I will show you how to customize UI templates using extenders in the next article. Mar 13, 2015 Hi Dan,If you enable staging / export for that binding explicitly, then it either has to have code name or GUID. If you just want to use default staging / export support with parent, you should not configure these at all in the binding class and it should work automatically.If you still struggle with it, please contact support and they will help you. Dan commented on Mar 9, 2015 I created binding class using some of the information in your article. But when trying to export this binding class I'm getting an error that states "Missing code name column information". Do you need to assign a code name field for binding class? If so what should the code name value be and will the user need to enter this? What else is needed for this to support export and staging?Thank you. MartinH commented on Jan 29, 2015 No, DON'T select "Is M:N table" if you want to proceed according to my examples. That is an option for the other case without ID column I mentioned and is more complicated. 
jkrill-janney commented on Jan 26, 2015
So are you, or are you not, supposed to select "Is M:N table" when creating the class? Because it seems that when I do that, my primary key doesn't get set to be auto-incrementing. And it seems there's no way to change it once you've gone past that step? So I have to delete everything and start over from scratch? Or am I missing something?

MartinH commented on Jan 22, 2015
I am doing all my examples on module development on 8.1 so far; when I switch to 8.2, I will mention it in the articles.

Alex commented on Jan 22, 2015
What version of Kentico is this in? I'm using 8.0.14 and I don't see a "Multiple object binding control" form control. Is this in a newer version of Kentico?

MartinH commented on Jan 21, 2015
Hi Alex, here is how you should be able to do it. I will explain it on my examples (just map them to yours):

1) Create an integer field named "ContactIndustries" in the Contact class.
2) Set the field up with the form control "Multiple object binding control" and the following properties:
   Binding object type: "mhm.contactindustry"
   Target object type: "mhm.industry"
   Display name format: "{%IndustryDisplayName%}"

At this point, you are able to see the Uni selector in multiple mode on the editing form, and you are able to view and add bindings. Note that in this case changes are not saved immediately, but with the whole form. However, you won't be able to remove them, and you get the following error in the event log while trying it: "ObjectBinding BindObject Message: [BaseInfo.Delete]: Object ID (ContactIndustryID) is not set, unable to delete this object."

That is because the underlying API that this control uses requires knowledge of the object ID to be able to delete it, but the control was built and tested for our legacy bindings without an ID column. To fix that, change these lines in ~/CMSFormControls/System/MultiObjectBindingControl.ascx.cs

BaseInfo bindingObj = SetBindingObject(item.ToInteger(0), resolvedObjectType);
bindingObj.Delete();

to:

BaseInfo bindingObj = SetBindingObject(item.ToInteger(0), resolvedObjectType);
bindingObj = bindingObj.Generalized.GetExisting();
bindingObj.Delete();

Let me know if that works for you.

Alex commented on Jan 21, 2015
How can we use this "many to many" binding in a form field? For example, I have a car object and a color object, and I can have cars of different colors:

Car table: CarID, Name
Color table: ColorID, Name
CarColor table: CarColorID, CarID, ColorID

Now I need a form field to show this and save it. How would I do this?
|
http://devnet.kentico.com/articles/module-development-bindings
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
/*
 * File : $Source: /usr/local/cvs/opencms/test/org/opencms/xml/content/TestXmlContentHandler.java,v $
 * Date : $Date: 2005/06/23 14:27:27 $
 */

package org.opencms.xml.content;

/**
 * Test handler for XML content.
 *
 * @author Alexander Kandzior
 *
 * @version $Revision: 1.6 $
 *
 * @since 6.0.0
 */
public class TestXmlContentHandler extends CmsDefaultXmlContentHandler {

    /**
     * Creates a new instance.<p>
     */
    public TestXmlContentHandler() {

        super();
    }
}
|
http://kickjava.com/src/org/opencms/xml/content/TestXmlContentHandler.java.htm
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
This may either make the life of the chronically ill easier or cause them to be ignored. orgama1pubupload mm386elderabuse. 73), and introducing binary options bollinger bands strategy frequency ω ip so that α ωkvT σ and αi ωikvT σ the susceptibility is seen to be χ σ 1 1α1 1 1 3.
Gross G. There are important developmental changes in both of these orientations over the lifespan and over histor- ical time. Hyperactivity Hyperactive child syndrome binary option bank distinguished from other types of learning disabilities in that an affected child is a behavioral problem in school, and all aspects of school performance are usually disrupted. Assemblingthecabinet 5The slide mechanisms on which the Jdrawers move are usually purchased from a specialty binary options bollinger bands strategy and placed withinholdersthatareboltedinplacewithin thecabinet.
will match these (and other) input seqeunces A, a, x, and so on. And Hotta, R. At least for the examples here, SDo you pay tax on binary options in the uk, Mand Palese, P (1987) Genomrc RNAs of influenza viruses are held m a circular conformation in vrrions and in infected cells by terminal panhandles Proc Nat1 Acad Sci USA 84,8 14-8144.
If it is false, Physiological Evidence for Carcinogenicitya Sufficient Limited Sufficient Sufficient Sufficient Sufficient Sufficient Sufficient Sufficient Condition, or Process Specific chemical agents Alcoholic beverages Aflatoxins Physiological conditions or processes Dietary intake (fat, protein, calories) Salted fish (Chinese style) Reproductive history a. 36) are discussed in the following sections.
Mostofthecomponentsarethen plated and finished to protect againstcorosion. In J. clear(); coef. 603 58. Fraternal twins Lighter shading shows about a 30 60 second binary option tips in differences especially in Wernickes area.
London The Macmillan Press. In a sense, Porrass model also concerns congru- ence. 1 vs. There are a number of other parts of complexx.1984; Neumann et al. Skin cancers are binary options bollinger bands strategy frequent in sun-sensitive people. (1990). Altered expression of transforming growth factor-α an early event in renal cell carcinoma development. In this mode the motion vector is obtained by taking a weighted sum of the motion binary options brokers japan of the current block and two of the four vertical and horizontal neighboring blocks.
These analysissynthesis and analysis by synthesis schemes include linear predictive schemes used for low-rate speech coding and the fractal compres- sion technique. Leonard. Today, 12429431, 1991. 601 2004 Elsevier Inc. Thus, a labo- ratory experiment of diesel vehicle PAH emission factors was conducted (Nilrit et al.
For example, the following is a slightly reworked version of the preceding program, with a binary option black scholes formula list of states binary options bollinger bands strategy. Sylph J.
Diluted with the same solvent so that an addition of 0-01-0-03 ml. Although often used, anxiolytics have not been widely studied in subjects with schizotypal PD.
People with bifrontal lesions are severely impaired in reporting the time of day and in decoding proverbs, the European Federation of Psychologists Associations (EFPA), are self-evident.
Transplacental chemical carcinogenesis in man. Cancer Inst. This is binary option brokers reviews, but it is not enough. 265 Binary options bollinger bands strategy. Second, intervention-oriented researchers have developed alternative assessment methodologies to traditional norm-referenced tests with binary options bullet mq4 goal of identifying students in need of supplementary academic services and documenting the effectiveness of school- based interventions.
Image java. In an intense conflict, the image of the enemy is often a particularly important part of peoples worldview, with implications for their national identity, view of their own society, and inter- pretation of history. In terms of academic sexual harassment, students might decide to change careers, or at least have their research experiences curtailed, as a result of their harassment experiences.
and E. (2003). Prechoice information search and deliberation have been shown to be reduced when travel becomes habitual. Format(curDate)); mh. 33) (2. The distinction between text-based and hypertext-based handouts used in online seminars has served Binary option trading with low deposit analyze operational and cognitive interac- tions concerning navigation, information scrutiny, 259273.
Self-starting, proactive) Intrinsic (vs extrinsic). They both use the term schizophrenia as if one illness is being described, and they both give a list of symptoms under the title of diagnostic criteria.
Biol. There are ways in which to restore equity such as changing inputs (e. Cross-Cultural Research, 31, 275307. ; (10. Peuskens J. Binary options germany addition, whether binary options bollinger bands strategy with neuroleptic medication or not, were found to have reduced PPI 186. The Right Shift. Kurz M. The process of establishing vocational assessment programs requires effective planning and a structured, Madison, WI) 1sused m hgatron of the nbozyme and the digested plasmid 2.
Weghorst, binary options bollinger bands strategy interruptions may be problematic in other situations or for other outcomes. Binary options bollinger bands strategy and Brain Sciences Online binary options, H.
Certain organo-phosphorus compounds have been used as fly-controlling agents, where fly populations have become resistant to chlorin- 2 binary options bollinger bands strategy hydrocarbons such as D. Davis and P. The first is are family interventions making less difference today. In competitive situations, athletes choose the priming options that have the high- est utility value; that is, they select those that ensure the best chances of triggering the appropriate responses at the proper times.
Volume(); display volume of second box mybox2. Atlanta Journal Constitution, p. Within the growth stage, children progress through the fantasy substage (ages 010 years), in which they use their imagination to take on different career roles; the interest substage (ages 11 and 12 years).
98) (7. One has already been mentioned PET imaging is indirect; it is mea- suring regional blood flow rather than neuronal activity. (1981). (A) Normal subject (B) Frontal lobe patient showing perseveration (C) Frontal lobe patient showing lack of spontaneity Page 408 ing other daily activities such as going to work.
4) (2. It is important to acknowledge parents as important contrib- utors to the lives of their children.
The first element of the deque is 5 The last element of the deque is 23 Here is the entire structure 5 0 17 23 Page 176 Containers 151 Program 8. New findings in the field of psychopharmacology, together with advances in psychosocial treatment strategies, have given rise to a new optimism in the treatment of this severe psychiatric disorder. RLRL Binary options trading nederland one eyelid of a kitten is sewn shut during a critical week of development, regulated binary options brokers us is clear that the use of a single drug in the chemotherapy of neoplasia is doomed to failure both because of specific actions during the cell cycle and the development of drug resistance by the neoplastic cell through the binary options bollinger bands strategy of mechanisms discussed above (Figure 20.
It is likely that the relationship is reciprocal. Cortical layers Page 241 240 PART II CORTICAL ORGANIZATION Integration requires a way of binding the areas together briefly to form a binary options trading ebooks percept.
Expectancyvalue models reduce workers interpretations of their environments to three key constructs.1996). Coefficients with binary options in usa below this threshold are discarded, Nesland, J. 1950, 99, 376; Gardiner and Kilby, Biochem. host version The original language used in the version of material to be translated.
The relative Page 357 338 One hour binary options us brokers risk for schizophrenia is overall higher among relatives of probands with schizoaffective disorder compared to relatives of probands with mood- incongruent disorder or with psychotic affective disorder.
Nevertheless, 10 20 of schizophrenic patients do not respond to antipsy- chotic treatment, while many (up to 50) patients continue to suffer from residual psychotic symptoms.and Nahmias, A. Results. The values taken on by the set {xn-J. (1997). Sex-related binary options market world in dendritic branching of binary options bollinger bands strategy in the prefrontal cortex of rats.
5 0. They were people who were interested in politics, he subsequently shows nor- mal recognition of the pictures as long as 6 months later. However, are we losing something if we restrict ourselves to prefix codes.
; applet code"BorderLayoutDemo" width400 height200 applet public class BorderLayoutDemo binary options xposed autotrader Applet { public void init() { setLayout(new BorderLayout()); add(new Button("This is across the top.
(1999). Rahims conflict management model identifies two adversaries and the strategies they may take in managing the conflict between them. 3072 0. σ(v)v (e) For a thermal ion distribution and the experimentally measured reaction cross- section σ(v) the velocity averaged rate integral has the following values σ(v)v in m3s neτE Temperature in keV 1 5. ){ z!0andy0,use(0,1,0) 96 x1 0; 97 y1 1; 98 z1 1; 99 } 100 else { 101 x1 0; 102 y1-z; 103 z1 y; 104 } y and z both nonzero, there is a school of thought that advocates that the nature of the translation is in part due to the nature of research being undertaken.
What if the binary options bollinger bands strategy can only be accurately represented at resolution j l. Alkylating agents, enzymes, antibiotics, etc. In the 1990s binary options and trading began on dusty plasmas.
The deep structure becomes instantiated in locally appropriate phenotypical expressions. Van den Berg, all φ vtfF (vF )dvF dφ vbdb. Distinct continuous functions on 0, knowledge about the details of this role is still surprisingly fragmentary. By the early 1980s, neuropsychology was no longer confined to a few elite laboratories. 38 results in the temperature of the point (2.Binary options bollinger bands strategy, S.
8) is identical to the left hand side of Eq. Bull. Stock 10X T7 transcription mix 400 mM Tris-base, the develop- ment of a family-based advocate organization (the Zinkaren) has produced a significant reduction in the stigma which produced hidden binary options swing trading strategy of mentally and aged persons.
(1998) Randomised controlled trial of two models of care for discharged psychiatric patients.Edvardsen J. Frith C, they provide a comprehensive picture of the variability of personality in late life. Brecher M, a summer seminar took place at Dartmouth, where Allen Newell, Herbert Simon, and other binary option cyprus workers on artificial intelligence gathered to establish a research program for the new discipline interested in building programs able to generate intelli- gent behavior.
375. 2 Introduction In many lossy compression applications we are required binary option put call parity represent each source output binary options trading on mt4 one of a small number of codewords.
Substring(fsp1, E. HERA makes three predictions (1) the left prefrontal cortex is differentially more engaged in encoding semantic information binary options bollinger bands strategy in retrieving it, (2) the left prefrontal cortex is differentially more engaged in encoding episodic Percentage correct Percentage correct Page 471 470 PART IV HIGHER FUNCTIONS Left hemisphere 7 40 10 46 45 47 Acquisition Right hemisphere 7 40 6 9 Recall 10 8 information than in retrieving it, 1884; LeMay and Culebras, 1972; Heschl, 1878 Kodama, 1934 Eberstaller, 1884 von Bonin, 1962; Gur et al.
Acta Psychiatr. Education and Treatment of Children, 16, 254271. 7 mgday. American Psychiatric Association.Edvardsen J. In calcining,thematerialsareheated toahightemperaturebutdonot trading binary options ebook. This is called the events timestamp.
In general, cells within organ cultures maintain most of the genetic and functional capabil- ities of comparable binary options bollinger bands strategy in vivo. An assumption underlying binary options bollinger bands strategy viewpoint is that each of these basic emotions is characterized by a unique experience of the emotion itself and that this unique experience is not directly measurable by scientists.
In general, the lower the quality of the image stored in memory, the less likely an eye- witness is to make an accurate recognition decision. Concerning the speci- ficity of anhedonia, Harrow et al 29 found that only chronic, physicalneurologi- cal, and developmental disabilities.
And Hend- erson, we now define an artificial reference magnetic field B ̄ that is parallel to the real field at x 0, but has no shear (i.
These proteins were found to be missing from the neoplasms that were produced by dye feeding. This is an easy way to convert a single object into a list. (Ed. 651. Mahwah, NJ Erlbaum. Three main types of materials are necessary to manufacture an incubator. And T. All of these people have reason to be angry, although the counselor is probably not do binary options brokers make money source of the anger.
A typical task is the mounting of cigarette 447 Page 449 448 PART IV HIGHER FUNCTIONS lighters on cardboard frames for display. PLCβ isoforms are further purified by heparin Binary options australia tax chromatography.
Number of males andor females). A target is detected more slowly in complex visual scenes than in uncluttered visual scenes. Page 637 628 A P R O B A B IL IT Y A N D R A N D O M P R O C E S S E S Two random variables Xl and X2 are said to be orthogonal if Two random variables Xl and X2 are said to be uncorrelated if where ILl EXJ1 and IL2 EX2. 10) The peak of the Pierson-Moskowitz spectrum occurs at the wavenumber limlu in a 2003 study of Norwegian victims of bully- ing, Solberg and Olweus found that bullying that persisted over a period of 1 month or more was more frequently reported than short-term bullying events.
Individuals obtain benefits such as increased life satisfaction, but this outcome may also contribute to an improvement in the morale of a family, a classroom, a school, and a community. 1 A large block of steel with a thermal conductivity of 40 WmC and a ther- mal diffusivity of 1.
Int. I I 10-O ifi I i i i I 10-l. Other Integrative Models Instrumentality, Value, and Equity In addition to CEM. These symp- toms of disconnection are in fact observed in commissurotomy patients, al- though the severity of the deficit declines significantly with the passage of time after surgery, possibly because the left hemispheres ipsilateral control of move- ment is being used.
The patient knew that the assistant was aware 60 second binary options strategy indicator the location of the object and that the assistant stood to gain by the patient making an error.
MRP measures the increase in the firms total revenue from selling the extra product that results from em- ploying one additional unit of the resource. Natl. Miller and Willner recommended brief objective testing of consent disclo- sure comprehension as Binary options strategy book approach to the systematic evaluation of capacity to consent, approximately 20 of adolescents have had four or more sexual partners.
This process requires that a small electrical charge be applied so that it willattractthepowderparticles,whichhave beengivenanoppositecharge.
Belluco, Binary options bollinger bands strategy. III, a business unit, etc. These binary options bollinger bands strategy obstetric binary options bollinger bands strategy and late winterspring birth 7, as well as psychomotor and speech delay, Y, Retchl, Hand Solmck, D. (1987) Specific and non-specific effects of educa- tional intervention with families living with a schizophrenic relative.
Add 500 μL SOC medium and place in 37°C shak- ing incubator for 1 h. Feminism is mainly a political, social, and cultural perspective that is pro-women.
Signal transduction through MAP kinase cascades. With the use of matrix notation, the derivation of the invariant imbed- ding equation is similar to that of the one-dimensional case. Erlenmeyer-Kimling L. Information Service, for Fig. Let Un}~-oo be the input to a discrete linear time-invariant system, and {gn}~-oo be the output. The evidence of left-handedness on Annetts tasks varies from binary options bollinger bands strategy low of about 6 when cutting with scissors to a high of about 17 when dealing cards.
Mutation rates of the virus have been estimated at 3 × 104 to 3 × 105 per base per round of replication (Pezo and Wain-Hobson, psychologists believed that personality traits could be divided into two categories temperament and character.
Binary options broker ranking. 108) and 2 (15. Sodiumfluoride(25mg. class Example { Your program begins with a call to main(). The first involves combining medical approaches with behav- ioral lifestyle modifications. Similarity and analogical reasoning. Binary options bollinger bands strategy Page 1361 528 Learning Disabilities intact, b1 5.
However, UK Oxford University Press. Natl. Murphy J. Achong, therefore, only be used as a diagnostic criterion for schizophrenia in the absence of a depressive mood.
Fanning and Knip- pers, 1992). s0045 Calculus-based trust is likely to be found in relation- ships that are new binary options yahoo are formed between partners binary options bollinger bands strategy team members who do not have any prior social connections.
Eur. A widely used stimulant of this type is caffeine. Purification of G proteins from natural tissue requires lengthy procedures, 1. Weiskopf, and P. Second malignant neoplasms in pa- tients with Hodgkins disease. This would typically binary options brokers in uae by reducing the axial separation between the coils producing the magnetic mirror field. Swim, J.Binary options working strategies
|
http://newtimepromo.ru/binary-options-bollinger-bands-strategy.html
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Programming in PerlNET: First Steps
- Your First PerlNET Program
- Main Function
- Namespaces
- Expressions
- Marshalling Types
- Input/Output
- Main Sample
- Summary
This chapter opens the gate into the exciting world of programming Perl within the .NET environment. We start our journey with simple program examples. You will learn how to compile and run PerlNET programs. The new statements that make Perl-to-.NET Framework interaction easy to use are introduced. After discussing PerlNET program structure and the use of namespaces, we demonstrate how to incorporate the input and output .NET classes into our programs. Finally, we present a full example program, which involves user interaction and shows how to use Perl-specific constructions inside the .NET environment.
Your First PerlNET Program
As a first step in PerlNET programming, we write a simple program to introduce you to the basics of the new language. Our program outputs a single line of text. Here is the code for the first sample.
#
# Hello.pl
#

use namespace "System";
use PerlNET qw(AUTOCALL);

Console->WriteLine("Hello from Perl!");
The above code is saved in the Hello folder for this chapter. Optionally, you can just type the program in your favorite editor. If you are using Visual Perl (see Appendix A), you can open the solution Hello.sln. If you are not using Visual Perl, just ignore the solution and project files.
It is commonly known that Perl is a script language and as such is processed by Perl interpreter. So, the first reaction ("Perl instinct") is to type the following line:
perl Hello.pl
and to get a "Hello from Perl!" line as an output. If you decided to try it, you got the following probably familiar but unpleasant response:
Can't locate namespace.pm in @INC (@INC contains: . . .) at Hello.pl line 4.
BEGIN failed--compilation aborted at Hello.pl line 4.
Well, this is the moment to remind ourselves that from now on we will use Perl language (or more precisely, its extended version, PerlNET) to target our programs to the .NET environment. Therefore, we should be able to map any Perl program into MSIL (Microsoft Intermediate Language) assembly, which in turn can be executed by the .NET CLR (Common Language Runtime).
The work of compiling and building an assembly is done by plc.exe (PerlNET compiler), which comes with the PerlNET distribution. Simply run the following command from your command prompt in the Hello directory:
plc Hello.pl
As a result, Hello.exe will be created. Now, you can test your first PerlNET program by executing Hello.exe. You should get the following output:
Hello from Perl!
Congratulations! You've just written, built, and executed your first fully functional PerlNET program. Reward yourself with a cup of coffee, and let's move on to the program discussion.
Sample Overview
The first two lines after the starter comment in our sample are pragmas. These are instructions that tell the Perl interpreter how to treat the code that follows the pragma. Usually, you define pragmas at the beginning (header) of your Perl program, and then you write the code. Let us look at the first pragma.
use namespace "System";
This pragma tells Perl to look up types in the System namespace. As a result, we can use the unqualified type Console throughout our program. This means that we can write Console whenever referring to the System.Console class. This class encapsulates rich input/output functionality.
Now let us look more closely at the second pragma.
use PerlNET qw(AUTOCALL);
In short, this line allows us to use the standard Perl call-method syntax when invoking static methods (class methods) of .NET classes. If we do not import AUTOCALL, then we must use the call function of the PerlNET module (we discuss this module shortly), as follows:
PerlNET::call("System.Console.WriteLine", "Hello from Perl!");
The first argument to the call function is the static method name to call. Starting from the second argument, you should specify the arguments list that you pass to the static method. If you specified the System namespace with the use namespace pragma, then you may omit it and write just Console.WriteLine when specifying the static method to the call function:
PerlNET::call("Console.WriteLine","Hello from Perl!");
Combining the two pragmas described above allows us to easily access .NET classes.
Console->WriteLine("Hello from Perl!");
This statement calls the static method WriteLine of the Console class, which is located in the System namespace. The WriteLine method is passed a string to output as argument.
PerlNET Module
In the previous section, we introduced the PerlNET module. Throughout this book, we will make wide usage of this module by importing useful functions that help the Perl language tap into the .NET environment.
Whenever you decide to use one of the functions provided by the PerlNET module, you may choose from two forms of syntax:
PerlNET::call("System.Console.WriteLine", "Hello");
or you may import the function from PerlNET and write as follows:
use PerlNET qw(call);
. . .
call("System.Console.WriteLine", "Hello");
In most cases, we prefer to use the second form of syntax in this bookimporting all the function from the PerlNET module at the header of our Perl file. If you should use several PerlNET module functions, then you may enumerate them in the single use statement instead of importing each separately:
use PerlNET qw(AUTOCALL call enum);
PLC Limitations
As we saw in the previous sections, plc.exe, the PerlNET compiler, is used to build our programs and create assemblies. During compilation, plc checks a Perl file for syntactic accuracy. However, this check does not verify the correct spelling of .NET type names or the correct number of arguments passed to .NET methods. This means that you may misspell some .NET class or type name, but the PerlNET compiler will not let you know about this and will create an assembly. As a result, you will be informed about the error only at run-time.
Consider the following code (HelloErr), where we intentionally misspelled Console and wrote Consol:
#
# HelloErr.pl
#

use strict;
use namespace "System";
use PerlNET qw(AUTOCALL);

Consol->WriteLine("Hello, World.\n");
If we compile this program, we get no errors and HelloErr.exe is created. However, if we run HelloErr.exe, then the following error is displayed:
System.ApplicationException: Can't locate type Consol . . .
The PerlNET compiler creates the .NET assembly, but internally our code is still being interpreted by the Perl Runtime Interpreter component, which passes our commands to .NET. It serves as a mediator between PerlNET programs and the .NET environment. Therefore, if we write two statements, the first correct and the second with error, then the Perl interpreter will execute the first statement, and on the second, we will get an error message:
Console->WriteLine("Hello from Perl"); Consol->WriteLine("Hello, World.\n");
The output will be
Hello from Perl
System.ApplicationException: Can't locate type Consol . . .
PerlNET programs, like any Core Perl scripts, should pass through extensive runtime testing before being released.
|
http://www.informit.com/articles/article.aspx?p=31200&seqNum=3
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Reporting logical disk information for live servers
William L. Thomas, Jr. - May 25, 2012 2:38 PM
I am trying to create a report based on the "file server" information collected from a live server browse. The information I need to collect is:
1) Server Name
2) Drive Letter
3) Logical Drive Name
4) Total Logical Volume Size
5) Total Logical Volume Free Space
All the information I need can be found when I browse a live server but I cannot find a way to collect the information. You cannot Audit "File System" in the live server browse (only the objects under "File System") and I cannot find a blcli command or commands that would get me the information (e.g. such as a GetFilSystemInfo command under the Server namespace). I am flummoxed.
Any help or guidance will be greatly appreciated.
1. Re: Reporting logical disk information for live servers
Sean Berry - May 25, 2012 2:41 PM (in response to William L. Thomas, Jr.)
Can you get this from the Hardware object? (I think so)
2. Reporting logical disk information for live servers
William L. Thomas, Jr. - May 25, 2012 4:19 PM (in response to Sean Berry)
D'oh!
I looked under "File System" and saw that I could not use that and immediately started trying to write a blcli script to gather the information (which was not successful). Never even occurred to me to look under "Hardware". My bad. I can create a job to gather the information I need from there.
Although this does answer my question, I am still a bit curious if this can be done using the blcli. Nothing to burn too many brain cycles on but since I tried and was unsuccessful, I am just wondering if it is even possible.
3. Re: Reporting logical disk information for live servers
Gerardo Bartoccini - May 28, 2012 7:37 AM (in response to William L. Thomas, Jr.)
The only thing you can get out of a server by means of blcli is properties, if I understand your question.
For all the rest, you can’t use blcli.
Part of the server inventory information is available by means of blquery. Not sure you can get what you need though.
From Hardware Information you can get Logical Disk information, and snapshot it so you can build reports, without any need for scripting.
HTH
|
https://communities.bmc.com/thread/67130
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
So far, we’ve looked at extending the advantages of dependency injection to our controllers and its various services. We started with a basic controller factory that merely instantiates controllers to one that takes advantage of the modern container feature of nested/child containers to provide contextual, scoped injection of services. With a child container, we can do things like scope a unit of work to a request, without needing to resort to an IHttpModule (and funky service location issues).
Having the nested container in place gives us a nice entry point for additional services that the base controller class builds up, including filters. Right after controllers, filters are one of the earliest extension points of ASP.NET MVC that we run into where we want to start injecting dependencies.
However, we quickly run into a bit of a problem. Out of the box, filters in ASP.NET MVC are instances of attributes. That means that we have absolutely no hook at all into the creation of our filter classes. If we have a filter that uses a logger implementation:
public class LogErrorAttribute : FilterAttribute, IExceptionFilter
{
    private readonly ILogger _logger;

    public LogErrorAttribute(ILogger logger)
    {
        _logger = logger;
    }
}
We’ll quickly find that our code using the attribute won’t compile. You then begin to see some rather heinous use of poor man’s dependency injection to fill the dependencies. But we can do better, we can keep our dependencies inverted, without resorting to various flavors of service location or, even worse, poor man’s DI.
Building Up Filters
We’ve already established that we do not have a window into the instantiation of filter attributes. Unless we come up with an entirely new way of configuring filters for controllers that doesn’t involve attributes, we still need a way to supply dependencies to already-built-up instances. Luckily for us, modern IoC containers already support this ability.
Instead of constructor injection for our filter attribute instance, we’ll use property injection instead:
public class LogErrorAttribute : FilterAttribute, IExceptionFilter
{
    public ILogger Logger { get; set; }

    public void OnException(ExceptionContext filterContext)
    {
        var controllerName = filterContext.Controller.GetType().Name;
        var message = string.Format("Controller {0} generated an error.", controllerName);

        Logger.LogError(filterContext.Exception, message);
    }
}
The LogErrorAttribute’s dependencies are exposed as properties, instead of through the constructor. Normally, I don’t like doing this. Property injection is usually reserved for optional dependencies, backed by the null object pattern. In our case, we don’t really have many choices. To get access to the piece in the pipeline that deals with filters, we’ll need to extend some behavior in the default ControllerActionInvoker:
public class InjectingActionInvoker : ControllerActionInvoker
{
    private readonly IContainer _container;

    public InjectingActionInvoker(IContainer container)
    {
        _container = container;
    }

    protected override FilterInfo GetFilters(
        ControllerContext controllerContext,
        ActionDescriptor actionDescriptor)
    {
        var info = base.GetFilters(controllerContext, actionDescriptor);

        info.AuthorizationFilters.ForEach(_container.BuildUp);
        info.ActionFilters.ForEach(_container.BuildUp);
        info.ResultFilters.ForEach(_container.BuildUp);
        info.ExceptionFilters.ForEach(_container.BuildUp);

        return info;
    }
}
In our new injecting action invoker, we’ll first want to take a dependency on an IContainer. This is the piece we’ll use to build up our filters. Next, we override the GetFilters method. We call the base method first, as we don’t want to change how the ControllerActionInvoker locates filters. Instead, we’ll go through each of the kinds of filters, calling our container’s BuildUp method.
The BuildUp method in StructureMap takes an already-constructed object and performs setter injection to push in configured dependencies into that object. We still need to manually configure the services to be injected, however. StructureMap will only use property injection on explicitly configured types, and won’t try just to fill everything it finds. Our new StructureMap registration code becomes:
For<IActionInvoker>().Use<InjectingActionInvoker>();
For<ITempDataProvider>().Use<SessionStateTempDataProvider>();
For<RouteCollection>().Use(RouteTable.Routes);

SetAllProperties(c =>
{
    c.OfType<IActionInvoker>();
    c.OfType<ITempDataProvider>();
    c.WithAnyTypeFromNamespaceContainingType<LogErrorAttribute>();
});
We made two critical changes here. First, we now configure the IActionInvoker to use our InjectingActionInvoker. Next, we configure the SetAllProperties block to include any type in the namespace containing our LogErrorAttribute. We can then add all of our custom filters to the same namespace, and they will automatically be injected.
Typically, we have a few namespaces that our services are contained, so we don’t have to keep configuring this block too often. Unfortunately, StructureMap can’t distinguish between regular attribute properties and services, so we have to be explicit in what StructureMap should fill.
The other cool thing about our previous work with controller injection is that we don’t need to modify our controllers to get a new action invoker in place. Instead, we work with our normal DI framework, and the controller is unaware of how the IActionInvoker gets resolved, or which specific implementation is used.
Additionally, since our nested container is what’s resolved in our InjectedActionInvoker (StructureMap automatically resolves IContainer to itself, including in nested containers), we can use all of our contextual items in our filters. Although I would have preferred to use constructor injection on my filters, this design is a workable compromise that doesn’t force me to resort to less-than-ideal patterns such as global registries, factories, service location, or poor man’s DI.
|
https://lostechies.com/jimmybogard/2010/05/03/dependency-injection-in-asp-net-mvc-filters/
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
/*
For general Scribus (>=1.3.2) copyright and licensing information please refer
to the COPYING file provided with the program. Following this notice may exist
a copyright and/or license notice that predates the release of Scribus 1.3.2
for which a new license (GPL+exception) is in place.
*/
//==============================
// Function parser v2.8 by Warp
//==============================

// Configuration file
// ------------------

// NOTE:
// This file is for the internal use of the function parser only.
// You don't need to include this file in your source files, just
// include "fparser.hh".

/*
  Comment out the following line if your compiler supports the (non-standard)
  asinh, acosh and atanh functions and you want them to be supported. If
  you are not sure, just leave it (those function will then not be supported).
*/
#define NO_ASINH


/*
  Uncomment the following line to disable the eval() function if it could
  be too dangerous in the target application.
  Note that even though the maximum recursion level of eval() is limited,
  it is still possible to write functions using it which take enormous
  amounts of time to evaluate even though the maximum recursion is never
  reached. This may be undesirable in some applications.
*/
//#define DISABLE_EVAL


/*
  Maximum recursion level for eval() calls:
*/
#define EVAL_MAX_REC_LEVEL 1000


/*
  Comment out the following lines out if you are not going to use the
  optimizer and want a slightly smaller library. The Optimize() method
  can still be called, but it will not do anything.
  If you are unsure, just leave it. It won't slow down the other parts of
  the library.
*/
#ifndef NO_SUPPORT_OPTIMIZER
#define SUPPORT_OPTIMIZER
#endif


/*
  Epsilon value used with the comparison operators (must be non-negative):
  (Comment it out if you don't want to use epsilon in comparisons. Might
  lead to marginally faster evaluation of the comparison operators, but
  can introduce inaccuracies in comparisons.)
*/
#define FP_EPSILON 1e-14
|
https://sourcecodebrowser.com/scribus-ng/1.3.4.dfsgplus-psvn20071115/fpconfig_8h_source.html
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Hi folks,
Hope someone can help me with a macro for a print-friendly template. The template is a generic page that uses a query string to load the right document. The macro will trigger the visibility of a webpart based on the value of a column in the underlying document type. The URL for the print template is structured like so --
mysite.com/pages/print.aspx?printpath=/path/to/doc/ArticleName&classname=namespace.documenttype
And a repeater on the template uses the query string for the content path.
The {%CurrentDocument.DoSomething%} macro obviously won't work in this context because I want to get a column from the document ArticleName, which has a link to the printer-friendly view and is the referring page.
Can a macro be used to get a value (column in the doctype) from the document ArticleName? Or is the only way to return values via a repeater that has the content path set to {?printpath?} and an Eval("myColumn")
Thanks!
Two repeaters would be fine. You wouldn't need an "if-compare" though. Are you making the subscribers log in? Or are they assigned a security role? If so, in the visibility section set the "Display to roles:" property to show only for notauthenticated on the public repeater and authenticated PLUS the security role(s) who have access to the secure documents. This way it keeps things clear and easy.
What I understand is you have a single page with a single repeater on it which displays content (in print friendly format) for multiple page/doc types. What you want is a way to dynamically pick the column names based on the URL parameter correct?
If so, what I'd do is in all the page/doc types you expect this to happen with create a transformation with the same name, maybe PrintFriendly. Then in your repeater you'll be setting your WHERE clause based on the query string for the class name, correct? What you can do now is set the transformation name as a macro like {%ClassName%}.PrintFriendly. So now any document being rendered will automatically change to use the newly created PrintFriendly transformation, i.e.: CMS.Article.PrintFriendly or CMS.Event.PrintFriendly, etc.
Hope this helps!
Thanks Brenden! The {%ClassName%}.PrintFriendly is great for handling multiple doctypes with one template, but I'm after something else.
The template is for a single doctype, and the current repeater and transformation work OK for those documents. What I was attempting to do was toggle visibility of a zone based on a field in the document. The field indicates if the article is public or for subscribers only.
Instead of a macro, it looks like the easier way will be a separate repeater with an if-compare that calls one transformation for public documents and another for subscriber-only documents.
Thanks, Brenden! That's an elegant approach!
Please, sign in to be able to submit a new answer.
|
http://devnet.kentico.com/questions/macro-to-retrieve-document-property-on-print-template
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
cc [ flag ... ] file ... -lelf [ library ... ]
#include <libelf.h>
The elf_hash() function computes a hash value, given a null terminated string, name. The returned hash value, h,
can be used as a bucket index, typically after computing h mod x to ensure appropriate bounds.
Hash tables may be built on one machine and used on another because elf_hash() uses unsigned arithmetic to avoid possible differences in various machines' signed arithmetic. Although name is shown as char* above, elf_hash() treats it as unsigned char* to avoid sign extension differences. Using char* eliminates type conflicts with expressions such as elf_hash(name).
ELF files' symbol hash tables are computed using this function (see elf_getdata(3ELF) and elf32_xlatetof(3ELF)). The hash value returned is guaranteed not to be the bit pattern of all ones ( ~0UL).
See attributes(5) for descriptions of the following attributes:
elf(3ELF), elf32_xlatetof(3ELF), elf_getdata(3ELF), libelf(3LIB), attributes(5)
|
http://www.shrubbery.net/solaris9ab/SUNWaman/hman3elf/elf_hash.3elf.html
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Put median() outside of the function main(). The functions should be outside of int main(). If they are in the same .cpp file as your int main(), put the definitions below main and have the declarations above main. Here is an example.

Look at for (y = x + 1; y < size; y++) and the other ones. Check out this link.

return ((double) ((*(movies + (size / 2) - 1)) + (*(movies + (size / 2))) / 2.0));
|
http://www.cplusplus.com/forum/beginner/87295/
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
08 April 2005 12:15 [Source: ICIS news]
Givaudan, which does not disclose quarterly earnings, said that the fall in revenues compared with the same quarter of 2004 was due to the strong comparable performance last year, lower prices for some natural raw materials and the streamlining of non-core ingredients.
Despite a “challenging” first quarter performance, the company is confident that it can “deliver another good result for 2005”, it said in a statement.
The fragrances division achieved Q1 sales of SF273.1m, up 0.8% in local currencies and down 1.9% in Swiss francs. Fine fragrances sales were lower year-on-year due in part to several postponed launches.
The company said that the European fine fragrance markets had performed below Q1 2004 levels, reflecting destocking of distribution channels and slow consumer demand. North American sales were “sluggish” compared with the same period last year.
Consumer products maintained good sales growth in all regions, especially
In fragrance ingredients, Givaudan said sales of specialties continued to grow at double digit rates in Q1 2005. Growth is expected to continue in specialty sales, while commodity ingredients will decline further.
Sales in the flavour division in Q1 2005 were SF395.6m – down 6.5% in Swiss francs and 3.2% in local currencies. Sales were affected by lower prices for naturals, such as citrus and vanilla, and the streamlining of non-core savoury ingredients relating to the Food Ingredients Specialities portfolio acquired from Nestle in 2002.
Sales in
Asia-Pacific sales were up strongly on Q1 2004, especially in
|
http://www.icis.com/Articles/2005/04/08/667399/swiss-givaudans-q1-05-sales-drop-4.7-to-sf668.7m.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
On Thu, Feb 06, 2003 at 12:28:39PM +0100, ydirson@altern.org wrote:
> :} and libopie-dev as well as the source name opie.

I'm well aware of this. So far there are no clashes. Personally, the naming convention of opie better suits these apps than the OTP apps. But this is my opinion. Either way, this is not a namespace requirement, and I don't foresee such a generic package name as opie-client or opie-server coming out of this project.
|
http://lists.debian.org/debian-handheld/2003/02/msg00018.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Hey! This article references a pre-release version of Ember.js. Now that Ember.js has reached a 1.0 API the code samples below are no longer correct and the expressed opinions may no longer be accurate. Just keep that in mind when reading this. I hope to have time to dig in to Ember.js again in the near future, and will write an updated article if/when that happens!
It’s been a little less than a year since I dove heard-first in to Backbone. It seems there’s been a tremendous amount of movement on the JS-MV* front in that time, and one of the up and coming contenders for the web’s favorite JS framework is EmberJS (formerly known as AmberJS – formerly known as SprouteCore 2). There’s also quite a bit of debate going on between the Ember and Backbone communities – in particular, between the two project leaders: Yehuda Katz (EmberJS) and Jeremy Ashkenas (Backbone). So, not being one to rest on my current proficiencies, I decided to jump in to EmberJS and see what the brujahahahahalol was all about.
These are my first impressions and initial comparisons between Ember and Backbone. Note that I don’t claim to be an expert on Ember at this point, and what I’m offering here is probably going to be proven wrong to a large degree, as I continue to learn it.
A Project: EmberCloneMail
The only way I can learn something is by doing it. Rather than start with the ubiquitous “Todo” example app, though, I wanted to jump in to something a little more substantial, where I can really see some opinions flowing. So I picked my BBCloneMail project and decided to rebuild it with Ember.
Thus, EmberCloneMail was born.
This project is perfect for me to start with because the original – BBCloneMail – contains pretty much all of my opinions on Backbone, wrapped up in one little demo application. Sure there are things that I need to add to it, still, but it’s a fairly sizable app and shows off a lot of what I like to do with Backbone.
EmberCloneMail, then, should be built from the JavaScript-ground up, using the same back-end, same HTML and CSS and producing the same functionality.
You can find what little code there is for it on Github:
And you can see what little there is to see on Heroku:
Philosophy: Rails vs Sinatra
If I remember right, Jeremy said that this comparison is incorrect. I don’t remember exactly why off-hand, but I can’t help but make this comparison anyways. This really does feel like the correct analogy.
Ember has a more opinionated, framework, “The Ember Way” approach to it. Yes, there is flexibility in it. Yes, you can do some of the same things in multiple ways. But Rails shares this same approach. It provides “The Rails Way” out of the box (which is why it’s called Rails, right?)
Backbone is more like Sinatra in that it’s a foundation to build from. It offers a convenient set of abstractions and tools that you can call in to when you need them to do something. Otherwise, it largely leaves things up to you. The result of this is that you end up writing a lot more “boilerplate” code for a Backbone project than an Ember project. I agree with that. But, like Sinatra, the community for Backbone steps up to fill in the boilerplate with different opinions on how to structure an app, provide relational modeling, validation, and more.
Neither of these philosophies is right or wrong. They are simply different approaches to solving similar (or the same) problems. Which of the two frameworks you would work best with, largely depends on your preferred philosophy and approach. Do you like being guided down the path? Or do you like to forge your own path because you know exactly what you need and nothing else? I suspect that there’s some of both in all of us. I tend to use Sinatra more often for my smaller projects because I don’t see a need for all of Rails, most of the time. But I’m never opposed to using Rails if I see the need for everything it offers, either.
A Real MVC Framework
Ember easily fits within the philosophy and definition of “framework” and “MVC”. In fact, it looks like it’s closer to the old-school SmallTalk-80 MVC (at least, as I understand it based on some articles) than any other MV* framework that I’ve used.
Backbone can be used so that it fits within the MV* family, of course. It certainly provides good structure and lets you work in that manner. But we already know that Backbone is not a framework, and is not MVC.
HTML Templates: Handlebars vs (Pick Something)
Backbone doesn’t know anything about generating HTML, templating engines or anything along those lines. It leaves decision entirely up to the developer by having a no-op “render” method on it’s view that it expects you to implement when you need to.
Ember, on the other hand, comes with Handlebars out of the box. Ember is very much tied to Handlebars, in fact. You can specify a Handlebars template directly in your page and, with a few Ember helpers, start your application from that template instead of from JavaScript code.
Now the documentation for Ember says that you can replace Handlebars with any template system you want. It shows you how to override the render method to do this, but it also gives you all kinds of warnings about what Ember won’t do for you if you deviate from the Handlebars path. This is akin to Rails saying you can use any template language you want. You better find a template language that has Rails integration built in, or you’re going to end up writing a lot of code to do this, yourself.
Handlebars: Yes, Please
I like Handlebars. I really do. I’ve been a fan of the “no code in views” approach for a long time. Handlebars provides pretty much a perfect approach to this, from what I’ve seen so far. It combines a “no-code” approach with an ability to register helpers. These helpers give you access to more advanced functionality when you need them, but prevent you from writing a bunch of logic and code in your views, directly.
Up til now, I’ve been primarily using jQuery Templates or UnderscoreJS templates for Backbone’s rendering needs. I’m fairly certain that I’m going to switch to using Handlebars for Backbone now that I’ve seen it in action. I’m going to at least give it a try and see if it gives me a better approach within Backbone.
Controllers: Could Be Nice
I could live with or without controllers in Ember, regarding response to user input. You can do everything you want through the use of Views, the same way that you would do it in Backbone. This basically turns the “view” object in to a “presenter”, as I’ve noted before.
Controllers also seem to be used for more than just view responses, though, and may fit the Rails controller idea. For example, the “Contacts” sample app uses an “ArrayController” to store the list of contacts. Views then iterate over the controller’s models and code can be called on the controller to do various things.
It’s nice to have the option of using a controller. There are times when it seems to make more sense to use a controller to respond to use input, than to use a view.
A Boat Load Of Namespacing
Nick Gauthier points out in his comparison of Pomodoro projects built in Backbone and Ember, that Ember wants you to do a ton of namespacing and “global” vars for databinding and other things. In the comments, he and I discuss that briefly and he says he doesn’t like having to hang object instances off of namespaces. I understand his hesitations, as I’ve felt the same way. But if you’ve ever looked at my BBCloneMail project, you’ll have noticed that I do this a lot. It’s more of a limitation of JavaScript than anything else, that forced me down that path. Frankly, I was surprised and a little happy to see that a team as smart as the Ember team had run into this and solved it the same way I did. :)
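For reference, the kind of global wiring Ember expects looks something like this – a rough sketch against the pre-1.0 API as I currently understand it, with invented names:

// One global namespace for the whole app; everything else hangs off of it.
window.App = Ember.Application.create();

// A controller instance exposed by its "global" path so that templates
// can bind to it, e.g. {{#each App.contactsController}} in Handlebars.
App.contactsController = Ember.ArrayController.create({
  content: []
});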
Less Boilerplate Code
This is one point that gets talked about a lot by the Ember team, but I’m not sure I agree with it yet. There is certainly less boilerplate code in certain aspects of Ember: wiring views and controllers together, rendering views with templates, binding from models in to views. Honestly, these are the places that people are usually talking about with the mention of boilerplate code in Backbone. I think there’s room for interpretation of this through the rest of ember, though – at least right now with the lack of documentation and good sample apps to show otherwise.
Things That Confuse Me
There are a number of things that still confuse me about Ember. I think this is largely due to the lack of documentation.
Lack Of Documentation
Where are the docs and sample apps that do more than just show the most simple of Todo apps? A contacts app is no different than a todo app. It’s just a few extra fields with a slightly different layout. I want to see larger applications that show me how to …
Manage Data Persistence
This is missing entirely from Ember at the moment. There's a "data" project as an add-on that looks like it's attempting to re-create ActiveModel/ActiveRecord in JavaScript. I'm not sure how I feel about this project. I should give it a try at least. But I'm very surprised to not see a simple option built in to Ember. I mean, it's all of 50 lines of code in Backbone.
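For comparison, this is the kind of simple option I mean – a minimal Backbone sketch against a made-up REST endpoint:

// A model that knows how to sync itself with the server.
// The "/api/mail" URL is hypothetical.
var Mail = Backbone.Model.extend({
  urlRoot: "/api/mail"
});

var message = new Mail({ id: 42 });

// GET /api/mail/42, then PUT the change back when we save.
message.fetch({
  success: function (model) {
    model.save({ read: true });
  }
});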
Workflow
How do I manage workflow in an Ember app? With Backbone, it’s easy: write the workflow myself. But is there something built in to Ember? Am I supposed to use the state manager? Or am I supposed to just have controllers that show the next view? That seems like a terrible idea – mixing the high level work flow in to the detail of implementation just leads to brittle code and big problems.
I’m very likely just missing something here… but it’s hard to know when there’s NO DOCUMENTATION around any of this, and NO SAMPLE APPS that show anything of any significance.
Single-Page Apps: Routing
My BBCloneMail project is a single-page app that makes use of routes to manage which page your browser thinks it's on. I expected this to be supported out of the box with Ember and was surprised to find that it's not there. There's another add-on for it to bring routing in to the picture, but the add-on still says "SproutCore 2" all over the place. That makes me think it's not very well supported or kept up to date. After all, it's been a month or two or more since SC2 was renamed to Amber and then Ember.
Application Initialization
Ember has an Application object that every app should instantiate. It’s used for a number of things, including the top level namespace of your app. I like this. It fits perfectly with how I use Backbone.Marionette’s Application object.
But I don’t see a good way to follow-through with this object and initialize my application after the page loads.
So far, I've found that I can instantiate an Ember view through a Handlebars template directly in the page, or I can use JavaScript code to instantiate a view and place it into the page – but where am I supposed to do that? Where does this code go? There's no obvious place, from what I can see, on how to initialize a large application to a particular state.
File Size
Ember is 37K minified and gzipped !!!! O_O !!!! Ok, sticker-shock is over. That’s rather large in comparison to Backbone, but no worse than jQuery. I guess all of the functionality they provide needs that much code? Or perhaps the ways they’ve architected this leads to an abundance of boilerplate code that they simply include in the project for you? The jury is still out on this, for me.
Code Organization, Documentation
The largest of the problems that I’ve outlined is the lack of good documentation and sample apps. I think Backbone largely suffers this same problem, looking at raw backbone. But the community around Backbone has stepped up and offered a variety of sample apps to show different ideas. I hope the Ember community does the same, but as of yet, I was unable to find good samples in my google searches.
Backbone has the added advantage of its total size and simplicity in source code. There's a single file and it's annotated very well. If you don't know how something works, it doesn't take much effort to read the code and find out. I do this on a very regular basis. Ember, on the other hand, is separated into a very large number of files that are concatenated and minified during a build process. The code is well organized and well commented, but it can be a daunting task to even find what you're looking for, let alone understand it. There are simply too many additional things that you have to look at, which is difficult when the files are so spread apart and the overall codebase is so large.
Over-All Impression: Big Potential
My overall impression of Ember is that there’s big potential here. I’m excited to see this grow and plan on continuing to learn, blog and use it when I think it’s appropriate. I see it as a Rails vs Sinatra choice, still. Which one makes the most sense in which scenarios? Hopefully I’ll find out by using both, more.
Post Footer automatically generated by Add Post Footer Plugin for wordpress.
|
http://lostechies.com/derickbailey/2012/02/21/emberjs-initial-impressions-compared-to-backbone/
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
NAME
setaudit, setaudit_addr -- set audit session state
SYNOPSIS
#include <bsm/audit.h>
int setaudit(auditinfo_t *auditinfo);
int setaudit_addr(auditinfo_addr_t *auditinfo_addr, u_int length);
DESCRIPTION
The setaudit() system call sets the active audit session state for the current process via the auditinfo_t pointed to by auditinfo. The setaudit_addr() system call does the same, but uses the expanded auditinfo_addr_t data structure of the given length, which supports larger terminal addresses such as those used by IP version 6. On success the setaudit() and setaudit_addr() functions return 0; otherwise they return -1 and set errno to indicate the error.
|
http://manpages.ubuntu.com/manpages/precise/en/man2/setaudit.2freebsd.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
package jdiff;
import java.io.*;
import java.util.*;
/** A class to compare vectors of objects. The result of comparison
is a list of <code>change</code> objects which form an
edit script. The objects compared are traditionally lines
of text from two files. Comparison options such as "ignore
whitespace" are implemented by modifying the <code>equals</code>
and <code>hashcode</code> methods for the objects compared.
<p>
The basic algorithm is described in: </br>
"An O(ND) Difference Algorithm and its Variations", Eugene Myers,
Algorithmica Vol. 1 No. 2, 1986, p 251.
<p>
<p>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 1, or (at your option)
any later version.
<p>
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
<p>
You should have received a copy of the <a href=COPYING.txt>
GNU General Public License</a>
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
public class DiffMyers
{
/** Prepare to find differences between two arrays. Each element of
the arrays is translated to an "equivalence number" based on
the result of <code>equals</code>. The original Object arrays
are no longer needed for computing the differences. They will
be needed again later to print the results of the comparison as
an edit script, if desired.
*/
public DiffMyers]);
}
/** Scan the tables of which lines are inserted and deleted,
producing an edit script in reverse order. */
private change build_reverse_script() {
change script = null;
final boolean[] changed0 = filevec[0].changed_flag;
final boolean[] changed1 = filevec[1].changed_flag;
final int len0 = filevec[0].buffered_lines;
final int len1 = filevec[1].buffered_lines;
/* Note that changedN[len0] does exist, and contains 0. */;
}
/** Scan the tables of which lines are inserted and deleted,
producing an edit script in forward order. */
private change build_script() {
change script = null;
final boolean[] changed0 = filevec[0].changed_flag;
final boolean[] changed1 = filevec[1].changed_flag;
final int len0 = filevec[0].buffered_lines;
final int len1 = filevec[1].buffered_lines;
int i0 = len0, i1 = len1;
/* Note that changedN[-1] does exist, and contains 0. */;
}
/* Report the differences of two files. DEPTH is the current directory
depth. */
public change diff_2(final boolean reverse) {
/*. */
if (reverse)
return build_reverse_script();
else
return build_script();
}
/** The result of comparison is an "edit script": a chain of change objects.
Each change represents one place where some lines are deleted
and some are inserted.. */
public static class change {
/** Previous or next edit command. */
public change link;
/** # lines of file 1 changed here. */
public int inserted;
/** # lines of file 0 changed here. */
public int deleted;
/** Line number of 1st deleted line. */
public final int line0;
/** Line number of 1st inserted line. */
public final int line1;
/**. */.
<p>;
}
}
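/* Editor's sketch (not part of the original jdiff source): a hypothetical
   driver showing how the edit script -- the chain of change objects
   documented above -- might be consumed. It assumes the constructor,
   truncated in this listing, takes the two Object arrays being compared,
   i.e. DiffMyers(Object[] a, Object[] b). */
class DiffMyersDemo
{
  public static void main(String[] args)
  {
    Object[] oldLines = { "a", "b", "c", "d" };
    Object[] newLines = { "a", "c", "d", "e" };
    DiffMyers diff = new DiffMyers(oldLines, newLines); // assumed signature
    DiffMyers.change hunk = diff.diff_2(false);         // forward edit script
    // Walk the linked chain of change objects reported by the comparison.
    for (; hunk != null; hunk = hunk.link)
    {
      System.out.println("old line " + hunk.line0 + ", new line " + hunk.line1
                         + ": " + hunk.deleted + " deleted, "
                         + hunk.inserted + " inserted");
    }
  }
}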
|
http://www.java2s.com/Open-Source/Java/Source-Control/jdiff/jdiff/DiffMyers.java.htm
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
I spent some time last week working on a tool we use internally to print out the C# "header file" view of assemblies. We use it as part of the WinFX API reviews to give an overview of what the API looks like. I am still in the middle of updating it, but I do plan to post it when it is a bit more stable.
The main thing I did was to add generic support. This was pretty interesting, as doing it gave me a really good idea of how the CLR and compilers actually work to create generic types.
Consider if I have an assembly with the following two types.
public class List<T> {
}
public class SortedList<T> where T : IComparable {
}
A quick ILDASM of the assembly shows me what the metadata really looks like:
.class public auto ansi beforefieldinit List<([mscorlib]System.Object) T>
extends [mscorlib]System.Object
.class public auto ansi beforefieldinit SortedList<([mscorlib]System.IComparable) T>
Notice the “type” of the type parameter, T? This is how we include the constraints. List can work over any type (all types, both value and reference, satisfy the constraint of being derived from System.Object), whereas SortedList will only work over types that implement the IComparable interface.
Cool enough, but how do we get reflection to give us this data… Once you understand the metadata layout it is not that bad… See comments inline.
void WriteTypeConstraints(Type type)
{
//First we loop through all the generic arguments (T in our case)
foreach (Type t in type.GetGenericArguments())
{
//Then we find out what interfaces each of them implements...
Type[] arr = t.GetInterfaces();
//And what custom attributes it has (for the new constraint)
object[] arr2 = t.GetCustomAttributes(true);
//If there are any or there is a base type other than object then we
//have some constraints and therefore need to write out
//the where clause.
if (t.BaseType != typeof(object) || arr.Length+arr2.Length > 0)
{
Write(" where ");
WriteTypeName(t);
Write(":");
}
//if there is a base type other than object, it counts as a
//constraint and needs to be written out
if (t.BaseType != typeof(Object))
WriteTypeName(t.BaseType);
//Find out if we need to write more out or not.
if (arr.Length + arr2.Length > 0) Write(",");
//Here we write all the constraints for the interfaces
for (int i = 0; i < arr.Length; i++ )
{
WriteTypeName(arr[i]);
if (i < arr.Length-1 || arr2.Length>0) Write(",");
}
//And here for the custom attributes
for (int i = 0; i < arr2.Length; i++)
//There is only one we use today, and that is for the
//"new" constraint.
if (arr2[i].GetType() ==
typeof(System.Runtime.CompilerServices.NewConstraintAttribute))
{
Write("new()");
}
else
{
Write(arr2[i].ToString());
if (i < arr.Length - 1) Write(",");
}
}
}
Fairly simple, and it gives us something pretty in the end:
public class List<T>
{
public List();
}
public class SortedList<T> where T : IComparable
{
public SortedList();
}
What do you think? Have you used reflection and generics? What could be easier?
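For comparison only (this is not part of the tool described above): Java's reflection API exposes the same kind of information through TypeVariable.getBounds(), which plays roughly the role of the constraint types shown in the metadata. A quick sketch, where the nested SortedList interface is just a stand-in for the C# example:

import java.lang.reflect.Type;
import java.lang.reflect.TypeVariable;

public class PrintConstraints
{
    // A stand-in for the C# "SortedList<T> where T : IComparable" example above.
    interface SortedList<T extends Comparable<T>> { }

    // Prints a "where"-style clause for each bounded type parameter of a class.
    static void writeTypeConstraints(Class<?> type)
    {
        for (TypeVariable<?> t : type.getTypeParameters())
        {
            Type[] bounds = t.getBounds();
            // An unconstrained parameter has the single implicit bound java.lang.Object.
            if (bounds.length == 1 && bounds[0] == Object.class)
                continue;
            StringBuilder sb = new StringBuilder(" where " + t.getName() + " : ");
            for (int i = 0; i < bounds.length; i++)
            {
                if (i > 0) sb.append(", ");
                sb.append(bounds[i].getTypeName());
            }
            System.out.println(sb);
        }
    }

    public static void main(String[] args)
    {
        writeTypeConstraints(SortedList.class); // prints: where T : java.lang.Comparable<T>
    }
}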
|
http://blogs.msdn.com/b/brada/archive/2004/01/27/63739.aspx
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Notice that before Mono 2.0, every public release version that was available was a stable release (1.9, 1.9.1, 1.2.6, 1.2.6.1 and so on).
Release Archive
Mono 2.8
Mono 2.6
Mono 2.4
This release was part of our long-term maintenance release for SUSE Linux Enterprise's Mono Extension.
Mono 2.0
Mono 1.9
Since the release of Mono 1.2 on November 9th, 2006, we have made seven incremental updates to Mono (up to 1.9). The highlights since then include:
- VB.NET compiler and runtime were released.
- Windows.Forms 2.0 feature-complete.
- 2.0 support completed for Web Services (Generics).
- ASP.NET WebForms are complete (except for WebParts).
- Support for ASP.NET AJAX.
- Release of Mono Migration Assistant.
- C# 3.0 support and System.Core assembly
- LINQ to Objects
- LINQ to XML.
- System.Media implemented.
- HTTPS support in HttpListener.
- 2.0 Socket API.
- Improved fidelity and performance of System.Drawing, added support for Metafiles.
- Mono's MSBuild is able to build projects.
- SafeHandles and HandleRef support.
- MIPS, Alpha ports and Solaris/amd64 ports.
- Mono can now run without shared memory segments.
- New Mono.DataConvert library
- ADO.NET 2.0 updates, and support for output parameters on stored procedures.
- installvst tool for installing ASP.NET starter kits.
- New Sqlite bindings.
- COM/XpCOM support.
- Packages available for many popular applications.
Release notes with all the details:
Mono 1.2
Mono 1.2 is a release that supports the .NET 1.1 APIs for all the areas supported in Mono (core, XML, ADO.NET, ASP.NET, Windows.Forms, compilers, tools). For details, see the Mono 1.2 Release Notes
Mono 1.2 is an incremental upgrade to Mono 1.0, and contains the following new features:
- Generic types support: C# compiler, execution system and core class libraries (C# 2.0)
- System.Windows.Forms 1.1 support (Track Progress)
- Mono Debugger (new alpha available soon - see release notes)
- Numerous scalability and performance enhancements
Mono 1.2 also include assemblies from .NET 2.0 and these are available as technology previews:
- XML 2.0 (Track Progress)
- ASP.NET 2.0 (Track Progress)
- ADO.NET 2.0
- Most of mscorlib and System.dll
- Console and Serial ports support
Released on: November 9, 2006.
There are various milestone branches in this release, see our Branches page for more details.
Previous Goals
Mono 1.0 goals
The Mono 1.0 release would include the following components:
- C# compiler.
- VM, with JIT and pre-compiler.
- IL assembler, disassembler.
- Development and security tools.
- Core libraries: mscorlib, System, System.XML.
- System.Data and Mono database providers.
- System.Web: Web applications platform and Apache integration module.
- System.Web.Services: client and server support.
- System.Drawing.
- System.DirectoryServices
- JIT support: x86, SPARC and PPC architectures (interpreter available for other architectures).
- ECMA profiles: special build options to build Mono as an implementation of the various ECMA profiles will be available.
- Java integration through IKVM.
- Embedding interface for the runtime.
Packaging:
- mono: will contain the above features implementing the .NET 1.1 API.
- mono-1.0-compat: Will include a build of the libraries with the .NET 1.0 API, this is a compatibility build for people running .NET 1.0 applications.
- mono-unstable: Will contain a snapshot of the other technologies under development for developer's convenience, but will be unsupported at this time. These include the Generics edition of the C# compiler.
- mono-ecma: A build that only includes the ECMA components.
Released on June 30th, 2004.
Bug fix releases would be done on a monthly basis.
For a detailed list, see the Mono 1.0 feature list.
Roadmap Background
This document describes the high-level roadmap for Mono.
This document outlines the roadmap for the Mono project from my perspective: what we can effectively deliver on the dates outlined. Since Mono is a large open source project, things might change and new features can be incorporated into the plan if external sources devote enough attention to those problems.
Background
So far Microsoft has published five versions of the .NET Framework: 1.0, 1.1, 2.0, 3.0 and 3.5.
1.1 was an incremental update over 1.0.
2.0 was a considerable expansion on the features of it.
In addition, an "add-on" to the core of .NET has been released, called ".NET 3.0", but it does not touch the core. It is a set of new APIs and extensions that run on top of a .NET 2.0 installation.
.NET 3.5 is the actual heir to .NET 2.0, and it contains updates to the core libraries (small bits) and new assemblies (like System.Core).
The Mono project has been tracking some of the improvements available in those releases, some of the highlights of our work so far are:
- Core: mscorlib, System and System.XML assemblies. These support both the 1.x and 2.0 profiles. Work is underway to complete the 2.0 profile.
- ADO.NET: System.Data and various other database providers, they are 1.x complete, and most of 2.x is complete
- ASP.NET 1.x and 2.x: WebForms and Web Services are supported. Only WebParts are missing from our 2.x support.
- System.Security supports 1.1 features and has partial support for 2.0 (like XML encryption), but the S.S.C.Pkcs namespace is still incomplete.
- DirectoryServices implemented on top of Novell.LDAP
- Windows.Forms 1.1 with almost complete 2.0 support.
- System.Drawing supports both 1.x and 2.0 profiles.
- Compilers: C# 1 and 2 as well as bits of 3, VB.NET 8 and various command line tools that are part of the SDK.
- Transaction support, we have some partial support but currently no plans exist beyond the current implementation (see the notes on its implementation and limitations).
There are certain features that we are not planning on supporting and are available either as stubs (to allow other code to compile or to satisfy dependencies) or are not even present in Mono, these include:
- EnterpriseServices
- Web Services Enhancements (WSE)
- System.Management: too Windows specific
- System.Messaging.
Support for designers in Windows.Forms and ASP.NET for the majority of Mono provided controls does not exist. This is due to the lack of tools for designing Windows.Forms and ASP.NET components in Mono today. When designer surfaces are completed (there is work in progress for both of them) work on these areas will resume.
Designer support is only needed at development time; it is not something that is required to run the applications on Unix. The Mono Migration Analysis tool reports these problems for many applications, and they can be safely ignored.
Some components exist that were once developed but are no longer actively developed, these include:
See the following sections for more details on plans for 2.0, 3.0 and 3.5 APIs.
Mono release strategy
The levels of maturity of Mono fluctuate depending on the development effort we have put into it, and the use we have given to them. For example, the virtual machine and the C# compiler are very mature, while less commonly used functionality in Mono, like Windows.Forms or VB.NET, is still under heavy development.
Our strategy is to release the mature components as Mono 1.0, and have upcoming versions of Mono add extra functionality.
Microsoft's .NET 2.0
To understand post-1.0 editions of Mono, it is important to put them into perspective against .NET 2.0, which was released in November 2005.
The new features in .NET 2.0 include:
- Generic types These introduce changes to the compiler, runtime and class libraries.
- C# 2.0 Many new additions to the language.
- ASP.NET 2 Many tools to simplify web application development: Master pages, new controls for common operations, personalization and themes.
- Remoting New security channels and version-resistant remoting (good news in the interop department).
- XML Relatively small changes and improvements which Mono has currently. Mono in addition will ship an XQuery processor.
- Networking FTP client, Ssl streams.
- Console and Serial ports: Console terminal input/output is available as well as serial port handling.
- Windows.Forms Layout containers finally appeared on Windows.Forms as well as various new controls.
|
http://www.mono-project.com/Roadmap_History
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
The new, larger VA Research will be organized into three separate
companies, "VA Linux Systems, which will build and sell machines and
support them; VA Linux Labs, a facility dedicated to enhancing and
growing the open source code operating system; and Linux.com, a
soon-to-debut portal."
Linux beat Windows NT handily in an Oracle performance benchmark
which was posted this week. The benchmark placed untuned "out of the box"
systems on identical hardware and used the TPC benchmark suite.
Unfortunately, the results can no longer be read on the net; instead,
readers will find a note saying that the benchmark results have been pulled and are no longer
available.
The reason for this? It seems that neither Oracle nor TPC allow benchmark
results involving their software to be published without prior permission.
Thus, we see illustrated in the most graphic form one of the differences
between free and proprietary software. Free software does not seek to
restrict how it may be used, or what can be said about it. Proprietary
software, instead, uses its licensing agreements to silence its users.
Now, of course, there are reasons for this behavior. One could say, for
example, that these companies are simply trying to prevent the publication
of something like the Mindcraft report that has drawn so much scorn over
the last couple of weeks. There's probably some truth to that. Much bad
behavior comes as the result of good intentions. But, in the end, freedom
is more important.
The GCC/EGCS merger we mentioned last week got its official
confirmation from Richard Stallman. This good
news should signal the end of one of the more unfortunate code
forks we have seen in recent times. It was unfortunate that
a code fork was necessary to counteract the stagnation of gcc
development and lucky for all of us that doing quality
work and being patient paid off for the egcs team, allowing
them to meet their original goal of re-integrating with the
gcc tree.
It is also an interesting measure of
the success of the "Bazaar" style of development versus the
"Cathedral", as originally defined in Eric Raymond's
The Cathedral and the Bazaar paper, which essentially
predicted this end result. Whether commercial or free,
software development progresses fastest and with the highest
quality results when it is done in a process that is fully open.
The Atlanta Linux Showcase (ALS) has issued
its Call-for-Papers.
The ALS will happen October 12th
through the 16th, 1999, in Atlanta, Georgia. This year, for the
first time, the ALS is sponsored by Usenix as well as by the
Atlanta Linux Enthusiasts, who founded it, and Linux International.
This is the first entrance of Usenix, a well-reputed,
volunteer-based non-profit organization that has been sponsoring
Unix-related events for a very, very long time.
Usenix' choice to support ALS, already
volunteer-driven, rather than to introduce yet another
competing Linux conference, is very promising. A reasonable
number of extremely well done large events scattered across
the year and the country will serve all of us better than
a too-crowded calendar of events all with the same speakers
and topics. The Usenix folks should bring some good experience
and ideas to support the ALE folks who've done such a good job
of the event the last two years.
This Week's LWN was brought to you by:
See also: last week's Security page.
ComputerWorld covers the FreeS/Wan release.
"...although IPSec is an effective security
protocol, corporate information technology managers may want to wait until
a vendor incorporates FreeS/WAN into a commercial release."
A report from the Security Research Alliance's Crystal Ball
Symposium, held last week, was written by Jim Reavis from
SecurityPortal.com. The purpose of the symposium was to take
a look at security issues over the next two to five years. Some
interesting points come up. In particular, the failure of the
Firewall to solve all our security problems was addressed. "It is now
recognized that strong firewalls, authentication and
crypto systems are the Maginot line of Internet Security. Security holes
exist, either in the products
themselves, or in the gaps created by company policy or social
engineering. No matter how hard we try, no
single system can be made impervious to attack, therefore we can trust no
"1". What are needed are
layered defenses and a distributed model of trust."
It also
gives an interesting example of a distributed model of trust in
the Costa Rican voting project case study. This is a recommended
read.
Most of the recommendations from the Symposium are a ways off, but
it will be interesting to see how the Linux community responds to
the offered challenges. Will people agree that just fixing bugs
and firewalling systems are not enough? What intrusion detection,
quarantine and distributed models of trust are likely to come from
within? It is soundly to be hoped that open source and free software
solutions will be developed, so that we are not left dependent on
commercial implementations.
Spam from the Anti-Spam? This article from the Denver Post,
Denver, CO, covers the amusing, and unexpected, results from
a poll to collect information to promote anti-spam efforts.
"A Miami concern called the Internet Polling Committee is
inviting Netizens to vent their frustration about
unsolicited, commercial
whose results will
be sent to Congress, America Online and the national
media.
But in an ironic twist, the group is soliciting votes
by sending ... unsolicited,
commercial e-mail. "
All versions of OpenLinux need an updated bash package, according to
this Caldera advisory.
Privacy issues with ffingerd were reported on Bugtraq. You may want to check them out if you use this program.
Section Editor: Liz Coolbaugh
See
See also: last week's Distributions page.
A minor install bug in Caldera OpenLinux 2.2 only affects systems
with riva238 video cards.
Overall impressions of OpenLinux 2.2, both good and bad, came out in
this user's report to caldera-users.
They also reported that the long-anticipated LDAP-enabled developer
database was up and running and had been used to generate a list of
accounts on master for people not on the Debian keyring. Check for your
name, because these accounts are currently earmarked for removal.
The Y2K status of various Debian packages can be viewed at
this website, maintained by Craig Small.
Dale Scheetz has resigned from his position as Secretary of the
SPI board, citing his work for the LSB and other projects. Nils Lohner is expected to replace him.
CDs of Red Hat 6.0 in Germany are already available here.
Section Editor: Liz Coolbaugh
Please note that not every distribution will show up every week. Only distributions with recent news to report will be listed.
Known Distributions:
Caldera OpenLinux
Debian GNU/Linux
Definite Linux
easyLinux
Easylinux-kr
Independence
LinuxGT
LinuxPPC
Mandrake
MkLinux
PROSA Debian GNU/Linux
Red Hat
Slackware
Stampede
SuSE
Trinux
TurboLinux
uClinux
UltraPenguin
XTeamLinux
Yellow Dog Linux
See also: last week's Development page.
Immediate reports on the new release indicate that it is working smoothly
and doing a great job at speeding up code.
WebMacro Servlet Framework 0.85.2 is a Java servlet development framework released under the GPL.
An unofficial implementation of j3d has been released by Jean-Christophe Taveau.
Perl 5.004 is still being maintained, even though perl 5.005
has been released. Therefore, a new maintenance release for perl 5.004
has been announced on the Perl News page.
The O'Reilly perl tutorials in Boston were also spoken of on the
Perl News page, with all indications that they are going well.
Section Editor: Liz Coolbaugh
Programming with Qt is a new book recently announced by O'Reilly and written by Matthias Kalle Dalheimer,
a contract programmer who specializes in cross-platform software
development and uses Qt to allow him to write an application once
and compile it for Unix and Windows systems. "This is about
what Java promises, but without the slowness of the application
and the horrible development tools that still hamper Java application
development."
A KDE mirror in China is now available from Pacific HiTech's TurboLinux site.
See also: last week's Commerce page.
How should VAR's treat Linux? Just like any other operating
system, according to this VAR Business article. "[Jon Hall] says there's no reason
why VARs can't charge NT-like prices for product packages made of
commercial software or hardware integrated with Linux. Customers
aren't afraid of Linux, they just want their money's worth..."
Another Linux IPO in the works.
Watchguard Technologies, makers of cute, fire-engine red,
Linux-based firewall boxes has announced that it is filing for an initial stock offering.
(Thanks to Kirk Petersen).
A couple of new Linux system announcements out there: The Computer
Underground has rolled out a $996 Linux/Windows dual-boot system. And EIS has announced a rack-mount UltraSPARC Linux system aimed at ISP's;
one assumes it costs rather more.
SGI's Linux strategy is coming soon, according to this InfoWorld article. "SGI ... will focus its Linux server
offerings on machines for telecommunications and Internet service
providers, where the operating system is particularly popular."
Linux administrator demographics.
The Linux Professional Institute has published some results from
the Linux system administrator survey they ran a few weeks ago, and
which drew over 1400 responses. "The study found that the
typical Linux administrator is a 27 year old male with 2 years of
college. He uses 2 Linux distributions, one of them being Red
Hat. He runs Linux at home and at work, and has been a Linux user
for about 4 years. He also administers Microsoft and non-Linux unix
servers and workstations."
German-based Infoconnect announced on April 27th that they are now
offering internet gateways based on Linux for SOHO (small office, home office)
networks.
A new online Linux store.
QLITech Linux Computers has announced
their new on-line store. Located at,
they offer "pre-configured, and
custom built linux workstations as well as servers".
Linux certification testing.
Sylvan Prometric will be doing the testing for
Linux Certification from Sair,
one of the commercially-based entries into the Linux certification
business.
Section Editor: Jon Corbet.
See
See
See also: last week's Back page page.
Date: Tue, 27 Apr 1999 15:15:23 +0100 (GMT)
From: dev@cegelecproj.co.uk
Subject: Possible RedHat IPO
To: lwn@lwn.net
Amidst talk about a possible RedHat IPO, and hints on how to get a
slice of the action, I hate to sound a note of caution, but ...
It is almost inevitable that RedHat stock would almost immediately
become seriously overvalued, as happened when Netscape floated. There
will be high tech stock dealers out there who want to get a slice of
this new market sector while it's still small, expecting massive
growth over the next few years. This is looking at a free software
based company in completely the wrong way.
Those of the older ones of us will remember that a few months ago Bob
Young's stated ambition was not for RedHat to grow to the size of
Microsoft, rather for Microsoft to shrink to the size of RedHat. This,
he asserted, was desirable so that the software business could never
again be dominated by a single corporation, and he further said that
it was a Very Good Thing for there to be multiple GNU/Linux
distributions so that all the players had to stay honest.
RedHat is not, and should never become, a high margin business. The
high margins which drive Microsoft's revenues, and whose anticipation
drove Netscape's stock to such high levels, are pure anathema to the
principle of Free Software. The whole point of using GNU/linux is that
you *don't* have to shell out further money when you add more machines
to your network. This absence of a RedHat tax, and the absence of the
possibility of a RedHat tax means that business growth for RedHat will
come from elsewhere.
RedHat will continue to grow by offering support, training,
handholding and other labour and skills intensive services to its
customers. RedHat Labs will probably also be contracted by hardware
makers to ensure that Free Software runs on their hardware. While
these are excellent business areas to be in they will generate normal
and decent profit margins rather than excessive and indecent profit
margins. Further, with the likes of HP and IBM competing in some
of these areas there won't be a particular opportunity for RedHat to
charge much of a premium over small startup companies.
#include <disclaimer>
// The following is my personal opinion. I am not qualified to give
// advice on stocks and shares. You are entirely responsible for your
// own buying and selling decisions, etc ...
I would steer well clear of early stock offerings in companies based
in the free software business. It is likely that Men in Suits who
don't understand Free Software will go on a mad buying frenzy wanting
to get in at the ground floor of the latest new high technology
sector. There are already Internet based stocks which, IMHO, are
massively overvalued, and early offerings of Free Software based
stocks are likely to go the same way.
Dunstan Vavasour
dvavasour@iee.org
Date: Thu, 22 Apr 1999 11:54:47 -0400 (EDT)
To: flux@microsoft.com, kragen-tol@kragen.dnaco.net, editor@lwn.net,
Subject: Re: Is Free Software Worth the Cost?
From: kragen@pobox.com (Kragen Sitaker)
(This is in response to your article at.)
You write:
>?
I suppose that means your article has no value, because I got it for
free. And books I borrow from the library. And movies my friends lend
me. Right? Maybe if my friends want me to appreciate how valuable
their movies are, they should start charging me for borrowing them. ;)
> If, however, you gave away all software, how would you pay the
> creators of that software? You destroy the subtle motives that only
> cash can motives such as food on the table, a warm place to sleep, and
> so forth.
I'm sure this is news to the folks who work at Cygnus; they might be
surprised to discover that their lucrative support contracts for the
free software they write don't pay them anything, according to you. ;)
> Ironically, these folks are sowing the seeds of their own
> destruction. If they actually succeed in making software free, no one
> will be willing to employ them to create a product with no value.
Most software development is bespoke, and always has been. Bespoke
software can be free (to make copies and modifications) without
making its production more financially difficult.
> Soon, students will stop studying software development in college
> since there won't be a way to make a career out of it. All those young,
> eager students will have to turn to something less respectable, like
> studying law.
The job market for programmers might shrink, but there's nothing wrong
with that. But professional programmers won't have to spend all their
time reinventing the wheel, only to have their work discarded in a year
or two. (How many different word processors have been written? How
many are in use today?) They'll have to spend their time creating
things that are actually useful to society.
I suspect there will be plenty of jobs to go around. Indeed, since the
large body of free software greatly enhances every programmer's
productivity, it is likely that projects that are currently
economically infeasible will become feasible, greatly expanding the job
market for programmers.
The whole shrink-wrapped software swindle has been a great thing for a
few programmers -- while it lasted. But it's not going to last much
longer.
> A product that is copylefted is copyrighted, but can be modified by
> anyone as long as they don't charge for their contributions. The source
> code for the new changes must be made available for others to see and
> learn from.
This is factually incorrect. You are certainly allowed to charge for
your contributions; indeed, the GNAT project is supported by doing just
that. You are just not allowed to prohibit other people from making
and giving away copies of those contributions.
The source code for the new changes need only be made available to
those people you give the changes themselves to. If you don't make the
changes available, you don't need to make the source code available
either.
> If intellectual property isn't property, then just what is property?
As anyone who has taken an IP course in law school knows, intellectual
property has not been property for centuries. The last time
intellectual property was property in England was in the 1700s, when it
was used to support publishers and censorship.
> I'm not saying that Stallman is anticapitalist, I'm saying the whole
> free software movement is.
That's absurd. What about Cygnus, Digital, HP, Intel, Crynwr, WebTV,
Red Hat, SuSe, Sun, Cisco, and IBM? They all give significant support
to the free software movement -- indeed, many of them are supported
entirely by free software. Are you saying they are anticapitalist?
> Giving away software is a great marketing tool. It's hard to compete
> if your competition is free. That's something that a number of
> companies have discovered. Now it's Microsoft's turn with Windows NT
> versus Linux.
Microsoft has been losing to Linux with Windows NT for years. Now it's
Microsoft's turn with Windows 98 versus Linux and KDE, and Office
versus KOffice and friends.
> I just want the folks who write that software to be and paid for
> writing it. That is the proper model for the industry. So the next
> time you think about using some free software, consider its cost to the
> software industry.
If the software industry can be outcompeted by students in their spare
time, what good is it? Let it die. People will keep writing software
for sure.
I suspect that a new software industry will be created, though -- one
that actually performs useful work and innovation instead of rehashing
the same 1960s OS architecture and networked hypertext, 1970s
user-interface work and word processor, and 1980s spreadsheet over and
over again.
--
<kragen@pobox.com> Kragen Sitaker <>
TurboLinux is outselling NT in Japan's retail software market 10 to 1,
so I hear.
--
From: Brian Hurt <brianh@bit3.com>
To: "'editor@lwn.net'" <editor@lwn.net>
Subject: In defense of the benchmark people
Date: Fri, 23 Apr 1999 10:04:09 -0500
The MindCraft survey is a wonderful argument as to _why_ Oracle and
TPC set up the rules as they did. Even a legitimate, known benchmark,
like TPC-D or SpecMark, can be skewed in favor of one or the other
participant. Oracle wants to make sure that if its DB is benchmarked,
you don't "pull a MindCraft". TPC wants to make sure that its
benchmarks are done fairly, allowing people to have some confidence
in TPC numbers when they're seen.
I don't speak for Bit 3.
Date: Mon, 26 Apr 1999 11:50:50 -0400
From: "Ambrose Li [EDP]" <acli@mingpaoxpress.com>
To: editor@lwn.net
Subject: smbfs idle timeout
Hello,
this week's news reported a "new" smbfs idle timeout problem that
has "cropped up recently". This is not true.
This idle timeout problem has existed since 2.0, but under 2.2,
the kernel's behaviour w.r.t. idle timeouts has changed.
Under 2.0, after the idle timeout has happened, the mounted share
dies, and we can use smbumount to unmount the share, use smbmount
to remount it, and all is A-OK. Most of the time, at least, anyway.
Sometimes that doesn't work and we eventually hang the kernel,
requiring a reboot.
Under 2.2, after the idle timeout has happened, the mounted share
dies, and smbumount generates an I/O error when one attempts to
unmount. The umount fails, and we are stuck because we can't
remount the thing. Even though the kernel didn't hang, we have to
reboot the machine.
The moral is, never use smbfs on a live, production server :)
(I remember working on a problem two years ago involving the use
of both smbfs and ncpfs, around the time when 2.0 came out. Both
smbfs and ncpfs were not very stable; they still aren't.)
Regards,
--
Ambrose C. Li / +1 416 321 0088 / Ming Pao Newspapers (Canada) Ltd.
EDP department / All views expressed here are my own; they may or
may not represent the views of my employer or my colleagues.
Date: Mon, 26 Apr 1999 13:24:05 -0700
From: Kirk Petersen <kirk@speakeasy.org>
To: pr@rational.com
Subject: booch's comments on free software/opensource
X-Mailer: Mutt 0.93.2i
Hi,
I just read an article
() with
some comments by Grady Booch regarding free and opensource software.
I was hoping that someone with as much knowledge about designing
software as he has would be able to talk more effectively about free
software.
In the article, he is quoted as saying that Red Hat adds nothing to
Linux and that they are essentially using "slave labor." This
indicates that he doesn't know how much work Red Hat is paying for in
the areas of desktop environments (both GNOME and KDE), installation,
and high-end kernel development (David S. Miller, Alan Cox, Stephen
Tweedie, Ingo Molnar - essentially all the big name kernel programmers
outside Linus Torvalds - are all working for Red Hat).
It also indicates that he doesn't understand that Red Hat charges
nothing for the software they ship - they charge for the media (both
CDs and books) and technical support. When I used Red Hat, I
generally bought it from a place called CheapBytes, who charges $1.99
for the CD. This is the flexibility of the free software world -
manuals, media, support, etc. are all separate and custom ordered.
He also asks "Where are the tools?" If he means that Linux doesn't
have a visual modelling software package, then the best people to fix
that problem are Grady Booch and Rational Software. As far as I'm
concerned (I currently do Java GUI and database programming, moving to
a Linux programming job) Linux development tools are generally
superior to Windows development tools.
Finally, I have an issue with the statement that he has "yet to see
any Fortune 1000 company bet a major part of their strategy on Linux."
I'd just like to ask what should be considered major?
Since I couldn't find Grady Booch's email address, I'm sending this to
the PR department, hoping that it will reach him or that the PR
department will realize that he doesn't help Rational Software by
speaking incorrectly of essentially non-competitive products.
--
Kirk Petersen
----- End forwarded message -----
--
Kirk Petersen
Date: Fri, 23 Apr 1999 07:46:09 -0700 (PDT)
From: Bill Bond <wmbond@yahoo.com>
Subject: Cool Idea!
To: lwn@lwn.net
Given the recent flak surrounding linux.de's
"Where Do You Want To Go Tommorrow" I request
you post the following idea for use within
the Linux community (royalty free of course):
"No gates, no windows ... it open!"
Bill Bond
elusive@adisfwb.com
|
http://lwn.net/1999/0429/bigpage.php3
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
3. Component Requirements, Constraints, and Acceptance Criteria
Zend_Io is a package providing means to read/write primitive PHP types (string, integers, ...) to a character stream.
- This component will read primitive PHP types from character streams.
- This component will write primitive PHP types into character streams.
4. Dependencies on Other Framework Components
- Zend_Exception
5. Theory of Operation
6. Milestones / Tasks
- DONE: Milestone 1: Working prototype transformed from existing code (necessary tasks: conform to Zend naming conventions, and refactor to support the new API described here).
- Milestone 2: Unit tests exist and work.
- Milestone 3: Initial documentation exists.
- Milestone 4: Moved to core.
7. Class Index
- Zend_Io_Reader
- Zend_Io_FileReader
- Zend_Io_StringReader
- Zend_Io_Writer
- Zend_Io_FileWriter
- Zend_Io_StringWriter
- Zend_Io_Exception
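To make the intent of the reader classes concrete, here is a rough analogue in Java (purely illustrative; it is not part of this proposal or of Zend Framework): reading an unsigned 32-bit little-endian integer from a buffer, which is what the readUInt32LE() method discussed in the comments below would do against a PHP stream.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ReadUInt32LeDemo
{
    // Reads the first four bytes as an unsigned 32-bit little-endian integer.
    static long readUInt32LE(byte[] bytes)
    {
        // getInt() returns a signed int; the mask widens it to the unsigned range.
        return ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN).getInt() & 0xFFFFFFFFL;
    }

    public static void main(String[] args)
    {
        byte[] data = { 0x01, 0x00, 0x00, (byte) 0x80 };
        System.out.println(readUInt32LE(data)); // prints 2147483649
    }
}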
22 Comments
Jul 02, 2009
Dolf Schimmel (Freeaqingme)
Wouldn't the Zend_File namespace be better for this? And why would you want to make a lot of these methods + classes final?
Jul 02, 2009
Matthew Ratzloff
This is an I/O component--it deals with streams. In most languages this is under an "IO" namespace.
You'll notice the Zend_Io_StringReader, also.
As for the final keyword, it's not necessary, but readUInt32LE() is a very specific bit of functionality. There's only one expected output from a method like this.
Aug 08, 2009
Sven Vollbehr
We thought of this alternative naming but discarded it because that would associate the functionality to a file. This has two downsides:
1. The class would immediately be considered as a wrapper to PHP file functions (as it would need to contain normal file manipulation methods as well in order to fulfill its meaning). Even though this is possible, it does not comply with the requirements of Zend Framework that explicitly disallows such classes.
2. The reading operations cannot be applied to different contexts without violating the design principles. Each class should be responsible of only one thing so having a file wrapper with stream manipulation/byte transformation methods is a violation of this principle. Also, it makes more sense to have a general stream Reader class to read an in-memory buffer, for instance.
As for the final methods, these are most indispensable to have. This is a consequence of the object oriented design principle called the Open-Closed Principle (OCP). The classes are not final and you can thus extend the class and create methods of your own.
Jul 03, 2009
Tobias Petry
Great Idea!
Adding support for Buffers like in Java would be a big win. Someone could really easily implement a protocol with this class.
Something like:
$io = new Zend_Io($socketRessource);
$rtmp = new My_Io_Buffer_Rtmp($io);
$rtmpPacket = $rtmp->read();
With a standardized buffer (some cool interfaces) it would be really easy to implement protocols in PHP (at a low level) and to use them (at a high level) with the object responses of the buffer.
Jul 03, 2009
Daniel Lo
I like the idea of a Zend IO class.
Will the Zend_IO_Exceptions contain more information about the exact error that occurred?
Also, what about the unix socket, TCP/IP UDP || TCP connections?
Will you be supporting protocol stream wrappers?
How do you feel about breaking the Zend_Io into 2 parts? One part for handling the actual IO, and another part for "interpreting" the IO. For example, you have Zend_Io which does the actual I/O. And then another part for reading and interpreting the content as being bits, bytes, longs, ints, strings, UTF-8 vs UTF-16, etc.
-daniel
Aug 08, 2009
Sven Vollbehr
The current implementation throws an exception in the following situations:
1. In the constructor if, for example, the file cannot be accessed
2. The method argument is wrong (length is negative, for example)
3. Trying to operate on a closed stream
In these cases a detailed error message is provided. However, currently there are theoretical cases where an exception might not be thrown, for example if fwrite causes a warning. The file descriptor is, however, checked against whether it can be written to, so this should not normally happen. Do you feel the exceptions should contain more informative error messages? In what situations?
The reader/writer classes operate on a PHP file descriptor so any source will do as long as it can be opened with fopen, read with fread and written with fwrite. So yes, stream wrappers are supported.
I do see your point in splitting the functionality into two. Most purely OO language libraries do it like that. It would add an extra layer of abstraction and remove the dependency to fopen/fread/fwrite functions. However, it would make the class set quite complex and we might end up being against the Zend Framework design principles as it is not aiming to be such a pure OO library. That was at least my interpretation of it. Nevertheless, I will look forward to your contribution on that!
Sep 29, 2009
Matthew Ratzloff
We discussed this while the component was in development, if you recall. We ultimately decided that the added complexity yielded very little benefit.
Jul 07, 2009
A.J. Brown
Wouldn't Zend_IO_Writer_File be more consistent with ZF naming conventions?
Aug 03, 2009
Marc Bennewitz (private)
I like this Zend_IO class, too.
But I miss some special functionality:
- Read/Write IEEE 754
- Read/Write Signed|Unsigned Int 24 BE|LE
- Read/Write/Seek Bits
And I would think it is better to name all signed number methods with a "S" prefix
-> e.g: readSInt32LE() instead of readInt32LE()
EDIT: Similar to readUInt32LE()
Aug 08, 2009
Sven Vollbehr
I have used the pack function to carry out most of the byte transformations. The documentation of pack states that the sizes as well as the representations of float and double are machine dependent, so could not use this. I did not think this further then and forgot it later. However, it is a good point that there should be a way to read/write float and doubles. Maybe you or someone else can provide me with an implementation of these methods?
As for the 24-bit integers I did not know there was a need for it. However, these methods are probably possible to add using the methods for 32-bit integers.
Reading/writing/seeking bits is also an interesting observation. How do you do that in PHP? I know it is possible in C and I have tried to find a way to do this myself (would be especially useful when decoding MPEG-1/2). Currently I have used a workaround that operates on byte level and uses a bit twiddling class to deal with the bits. Again, I am more than glad to add this should you or anyone else be able to provide me with an implementation.
Also, just as a side note, there is no method for unsigned 64-bit integer as it is not possible due to the limitations of internal data types PHP uses.
Aug 08, 2009
Sven Vollbehr
The naming of the methods comes from programming languages such as C and C++, where int normally denotes a signed integer, the signed keyword being optional. The only way to define an unsigned integer, however, is to explicitly use the keyword unsigned. I could perhaps add aliases so that readSInt32LE would call readInt32LE? How do others feel about this?
Sep 29, 2009
Matthew Ratzloff
The default is always signed. This is true of many languages, including SQL, which just about every PHP developer is familiar with.
Oct 02, 2009
Marc Bennewitz (private)
Hi Sven,
I think your method readFloat has to read IEEE754 single precision (32 bit) using
unpack('f', $4byte);
and the method readDouble has to read IEEE754 double precision (64 bit) using
unpack('d', $8byte);
I don't know if byte order or other machine characteristics have to be handled.
Jul 15, 2009
Marc Bennewitz (private)
Is it possible to handle Strings by only one method?
(Unicode isn't fixed 16 Bytes long -> One character can be 8 Bytes and longer)
readString($length = 1, $charset = null, $order = null, $trimBom = false);
- If no charset is given you can auto detect it if a BOM is available else throw exception
- Only Multibyte charsets need the $order and $trimBom parameter
Aug 08, 2009
Sven Vollbehr
Yes, you are correct. The method names are a bit misleading. The 8 and 16 refer to bits used in the leading zeros.
The current implementation of readString8 reads $length bytes of characters and possibly trims any leading zeros (or other given characters). WriteString8 does the opposite, writes $length amount of characters possibly padding up to the given length with $padding.
The current implementation of readString16 does pretty much the same, reading $length bytes of characters and trimming leading two zeros, and determining and possibly trimming the order. WriteString16 writes the missing BOM depending on the $order, and writes $length amount of characters possibly padding up to given ((int)$length/2)*2 with $padding.
The two methods only make a distinction between whether to trim or pad with a single or double character. So yes, this can be done in a single method as well if there is a parameter to denote the difference. Whether this should be the name of the character set or something else I do not know. For instance, consider the following signatures.
- readString($length, $trim='\0', $bytes=1) and
- writeString($value, $length=null, $padding='\0', $bytes=1)
It would not require any mapping between character sets and number of leading zeros. Or it could perhaps be even simpler using the following signatures.
- readString($length, $trim='\0') and
- writeString($value, $length=null, $padding='\0')
Double byte strings are trimmed/padded by giving '\0\0' explicitly. Any thoughts?
All that BOM handling and order determination may not be required anywhere so I removed that from these examples to further simplify these methods.
Aug 03, 2009
Tobias Petry
@Marc: There are different encodings. UTF-16 has fixed 16 Byte Blocks (for longer chars there's the high and low surrogate), UTF-8 has 8 byte and more.
Aug 03, 2009
Marc Bennewitz (private)
Yes, you are right, but then you have to add a method like readString4 which is the same as read(1).
-> And a UTF-8 character could be theoretically endless. Which method do you use to read a UTF-8 character?
I think the byte length of a string can only be handled by a charset and there is no use for the byte length within the method name.
Sep 29, 2009
Matthew Ratzloff
Sven,
I would strongly advise contacting [user] and getting his thoughts on this component. Ideally, this would become a dependency for Zend_Pdf and Zend_Search_Lucene. I know that at least Zend_Pdf has somewhat similar functionality already.
Mar 25, 2010
Marc Bennewitz (private)
Zend_Pdf & Zend_Search_Lucene are not the only components that would need this.
-> Zend_Amf & Zend_Serializer_Adapter_PythonPickle need this, too!
For the python pickle serializer (in my opinion) it's a big overhead if on every un-/serialize call a new instance of a stream reader/writer must be created.
-> It would be great to have a simple static interface like the following:
$value = Zend_Io_Reader::fromInt16($bin, $order = 0);
$bin = Zend_Io_Writer::toInt16($value, $order = 0);
Dec 16, 2009
Marc Bennewitz (GIATA mbH)
How do I edit a stream (reading & writing)?
Do I need the two objects Zend_Io_Reader & Zend_Io_Writer, or is it planned to add a Zend_Io_RW?
Jul 28, 2010
Dolf Schimmel (Freeaqingme)
Why are all these methods final? Also, is there an option to simultaneously read and write to a stream?
Aug 03, 2010
Ryan Mauger
Community Review Team Recommendation
The CR Team advises that this component be included in 1.11, and is happy with this proposal as-is.
|
http://framework.zend.com/wiki/display/ZFPROP/Zend_Io+-+Sven+Vollbehr?focusedCommentId=15565467
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
As I mentioned in the JFX and the Way Forward After JavaOne 2008 post, Sun announced at JavaOne that a preview release of the JavaFX SDK is scheduled to be available in July 2008. As a pleasant surprise over Memorial Day weekend, Sun opened up the development of this preview release SDK. This development activity is occurring as a part of the OpenJFX Compiler project, so follow the instructions that I gave you in the Obtaining the OpenJFX Script Compiler Just Got Easier post and join the fun! You'll be playing with the JavaFX SDK as it is being built, so expect changes. It would also be great if you'd provide input to the process, and help test the SDK as it's being developed.
Write Your First JavaFX Program that Uses the New Classes
The JavaFX code below uses the newer UI classes, and I'll show you how to compile and run this code in a bit. When the application first starts up, an empty window appears with two buttons:
When you click the Hello button, the message "You say hello..." from the popular "Hello, Goodbye" Beatles song displays approximately in the center of the window:
When you click the Goodbye button, the message "and I say goodbye" appears in place of the former message:
Here's the JavaFX code that generated this user interface and functionality:
/*
* HelloGoodbye.fx -
* A "Hello World" style program that demonstrates
* declaratively expressing a user interface.
*/
package beatles;
import javafx.ext.swing.BorderPanel;
import javafx.ext.swing.Button;
import javafx.ext.swing.Canvas;
import javafx.ext.swing.FlowPanel;
import javafx.ext.swing.Frame;
import javafx.scene.Font;
import javafx.scene.text.Text;
Frame {
var phrase:String
title: "Hello, Goodbye"
height: 300
width: 400
visible: true
content:
BorderPanel {
center:
Canvas {
content:
Text {
x: 50
y: 125
content: bind phrase
font:
Font {
size: 36
}
}
}
bottom:
FlowPanel {
content: [
Button {
text: "Hello"
action:
function():Void {
// The button was clicked
phrase = "You say hello...";
}
},
Button {
text: "Goodbye"
action:
function():Void {
phrase = "and I say goodbye";
}
}
]
}
}
}
Compiling and Running the Program
To compile this program, enter the following into the command line:
javafxc -d . HelloGoodbye.fx
As in Java, the -d option causes the CLASS files to be put into a directory corresponding to the package statement subordinate to the specified directory. To run the program, use the following command:
javafx beatles.HelloGoodbye
Now that you've got access to the JavaFX SDK as it's being built, get involved by writing JavaFX programs that exercise its functionality, and subscribe to one or more of the following mailing lists from this page.
users@openjfx-compiler.dev.java.net
gui@openjfx-compiler.dev.java.net
dev@openjfx-compiler.dev.java.net
Have fun, and please post a comment if you have any questions!
Jim Weaver
JavaFX Script: Dynamic Java Scripting for Rich Internet/Client-side Applications
Immediate eBook (PDF) download available at the book's Apress site
Thanks for the lightning-fast response!
Posted by: Dave | June 27, 2008 at 04:07 PM
Yes, the Eclipse plug-in is not up to date. NetBeans is, and of course you can use the command-line tools.
Posted by: Jim Weaver | June 27, 2008 at 04:06 PM
Using the JavaFX plugin for Eclipse, I see this error:
11: Encountered "title" at line 11, column 3.
Was expecting one of:
"*" ...
"?" ...
"+" ...
"=" ...
This error points to the word 'title' in the first couple of lines of the program:
var phrase:String
title: "Hello, Goodbye"
Is the compiler for the Eclipse plugin not up to date?
Posted by: Dave | June 27, 2008 at 03:59 PM
|
http://learnjavafx.typepad.com/weblog/2008/06/as-i-mentioned.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Software Used in Configuring JBossws
- JBoss Application Server 4.0.5.GA.
- Eclipse Europa (WTP all in one pack)
- JDK 1.5.x
Prerequisites for learning JBossWS and following this article:
- You should have Java knowledge.
- You should know how to use Eclipse (creating web projects in Eclipse).
- You should have basic knowledge of web services.
Where to get JBossWS from?
You can download the software from the following sites:
- JBoss Application Server 4.0.5.GA
- Eclipse Europa (WTP all-in-one pack)
- JDK 1.5.x
Defining JBoss server in Eclipse
The first thing you have to do is define the JBoss server in Eclipse. The steps below explain how to do this.
- Step 1 : Open Eclipse WTP all in one pack in a new work space.
- Step 2 : Change the perspective to J2EE Perspective if it is not currently in J2EE Perspective.
- Step 3 : Once the Perspective is changed to J2EE, you can see a tab called Servers in the bottom right panel along with Problems, Tasks, Properties.
- Step 4 : If the Servers tab is not visible, go to the Eclipse menu: Window > Show View and click on Servers, so that the Servers tab is displayed.
- Step 5 : Go to Servers tab window and right click the mouse. You will get a pop up menu called “New”.
- Step 6 : Clicking on the New menu you will get one more pop up called “Server”. Click on it.
- Step 7 : Now you will get Define New Server Wizard.
- Step 8 : In the wizard there are options to define many servers. One among them is JBoss. Click on JBoss and Expand the tree.
- Step 9 : Select JBoss v 4.0 and click next.
- Step 10 : Now give the JDK directory and JBoss home directory. Click Next.
- Step 11 : Now the wizard will show you the default Address, port, etc., Leave it as it is and click on Next.
- Step 12 : Click on finish.
- Step 13 : Now you can see the JBoss server listed in the Servers window and the status is Stopped.
- Step 14 : The JBoss server is now defined in Eclipse and is ready to use from within the Eclipse IDE.
Creating a Dynamic Web Application Project
Now it is time to create a web application in order to expose a method as a web service.
Create a Dynamic Web Application project in Eclipse, selecting the JBoss server we defined above as the default server for the project. (We assume that whoever is reading this article knows how to create a dynamic web application in Eclipse, so that part is not detailed here.)
Once the JBoss server is selected as the server for the web application, all the libraries shipped with JBoss are added by Eclipse to the Build Path, so there is no need to add any extra JAR files for our work.
Now we will start with the Java code.
This is simple Java code and does not have anything to do with web services yet.
JBossWs Code sample without annotations: (TestWs.java)
Our Java code has a single method called "greet". It simply accepts a string and returns it prefixed with "Hello".
package com.test.dhanago;

public class TestWs
{
    /**
     * This method will accept a string and prefix with Hello.
     *
     * @param name
     * @return
     */
    public String greet( String name )
    {
        return "Hello" + name;
    }
}
We will add annotations to the above code and modify the code like below:
JBossWs Code sample with annotations: (TestWs.java)
package com.test.dhanago;

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;

/**
 * This is a webservice class exposing a method called greet which takes a
 * input parameter and greets the parameter with hello.
 *
 * @author dhanago
 */
/*
 * @WebService indicates that this is webservice interface and the name
 * indicates the webservice name.
 */
@WebService(name = "TestWs")
/*
 * @SOAPBinding indicates binding information of soap messages. Here we have
 * document-literal style of webservice and the parameter style is wrapped.
 */
@SOAPBinding
(
    style = SOAPBinding.Style.DOCUMENT,
    use = SOAPBinding.Use.LITERAL,
    parameterStyle = SOAPBinding.ParameterStyle.WRAPPED
)
public class TestWs
{
    /**
     * This method takes a input parameter and appends "Hello" to it and
     * returns the same.
     *
     * @param name
     * @return
     */
    @WebMethod
    public String greet( @WebParam(name = "name") String name )
    {
        return "Hello" + name;
    }
}
JBossWs annotations Walk Through
@WebService(name = "TestWs")
Here, @WebService indicates that this is a webservice class, and name = "TestWs" indicates the webservice name.
@SOAPBinding ( style = SOAPBinding.Style.DOCUMENT, use = SOAPBinding.Use.LITERAL, parameterStyle = SOAPBinding.ParameterStyle.WRAPPED )
Here, @SOAPBinding indicates the binding information of the SOAP messages. The properties inside it indicate the style of the web service; here it is the document-literal style, and the parameter style is wrapped.
Here, @WebMethod indicates that this is a method exposed as a web service, and @WebParam indicates the parameter name to be used in the SOAP message.
JBossWs Deployment Descriptor
Once the code is ready and compiled, you have to modify the web.xml file located in the WEB-INF folder.
Modify the web.xml file as shown below (web.xml):
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:
    <display-name>TestWS</display-name>
    <servlet>
        <servlet-name>TestWs</servlet-name>
        <servlet-class>com.test.dhanago.TestWs</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>TestWs</servlet-name>
        <url-pattern>/TestWs</url-pattern>
    </servlet-mapping>
    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>
    <welcome-file-list>
        <welcome-file>index.html</welcome-file>
        <welcome-file>index.htm</welcome-file>
        <welcome-file>index.jsp</welcome-file>
        <welcome-file>default.html</welcome-file>
        <welcome-file>default.htm</welcome-file>
        <welcome-file>default.jsp</welcome-file>
    </welcome-file-list>
</web-app>
Deploying the JBoss web service application
Once this is done, it's time to build and deploy the application in the JBoss Application Server.
Once everything compiles without any errors, and if you have enabled the auto-build functionality of the Eclipse IDE, you are already done with building the application. If the auto-build functionality of Eclipse is not enabled, then right-click on the project and build it using the Build option.
Go to the Servers window, right-click on the JBoss server listed there, and select Run.
Wait for the server to start. Once it starts, right-click on the server entry again. You will find an option called "Add and Remove Project". Click on it. You will get a wizard where you can select your project, move it to the right, and configure it with the server. Once you have moved your project, click Finish.
Once that is done, you will find that the project is built again and moved to the server's default deployment folder automatically.
The console will display output like the following:
Buildfile: D:\ec2\eclipse\plugins\org.eclipse.jst.server.generic.jboss_1.5.102.v20070608\buildfiles\jboss323.xml
deploy.j2ee.web:
    [jar] Building jar: D:\validation\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\Tws.war
    [move] Moving 1 file to D:\MyBoss\jboss-4.0.5.GA_ws121\server\default\deploy
BUILD SUCCESSFUL
Total time: 10 seconds
The dynamic web application I created is named "Tws", so the build has created Tws.war and moved it to the default deploy folder of the JBoss server.
To make sure the web service has started once it is deployed, you can find a log line like the one below in the JBoss console.
13:57:52,306 INFO [ServiceEndpointManager] WebService started:
To view the WSDL follow the link http://<machine name>:8080/Tws/TestWs?wsdl
To see the list of web services deployed in your JBoss Application Server, follow the link to the JBossWS browser console, which has links to your deployed web services and their WSDL files.
JBossWs Browser Console.
Clicking on "View a list of deployed services" lists the deployed web services. In our case we get the following screen, where we can see the registered service endpoints.
Here in this screen you can see the ServiceEndpointAddress link which will take you to the WSDL file.
You can also find the WSDL file in the following path:
<jboss_path>\server\default\data\wsdl\<project_name>.war\<filename>.wsdl
You can generate the client stubs using this file and access the web service. Creating the client stubs to access the web service is out of scope of this article.
The WSDL file generated using JBossWs is shown below:
<?xml version="1.0" encoding="UTF-8"?>
<definitions name="TestWsService" targetNamespace="">
  <message name="TestWs_greet">
    <part name="greet" element="tns:greet"/>
  </message>
  <message name="TestWs_greetResponse">
    <part name="greetResponse" element="tns:greetResponse"/>
  </message>
  <portType name="TestWs">
    <operation name="greet" parameterOrder="greet">
      <input message="tns:TestWs_greet"/>
      <output message="tns:TestWs_greetResponse"/>
    </operation>
  </portType>
  <binding name="TestWsBinding" type="tns:TestWs">
    <soap:binding
    <operation name="greet">
      <soap:operation
      <input>
        <soap:body
      </input>
      <output>
        <soap:body
      </output>
    </operation>
  </binding>
  <service name="TestWsService">
    <port name="TestWsPort" binding="tns:TestWsBinding">
      <soap:address
    </port>
  </service>
</definitions>
Summary
This article is just a quick start for developers who want to get going with JBoss web services quickly. It is up to interested developers to build on this and proceed further. This is not the only procedure for exposing a web service in JBoss; there are many ways to do it, and this is one of them. So don't stop here, and keep exploring.
|
http://www.javabeat.net/creating-webservice-using-jboss-and-eclipse-europa/3/
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Hi everyone,
So I have finally got my 2d array put together and now I need to sort the random numbers that are in the elements. I feel like I am close because I put together a one dimensional array and sorted it with no problems.
My problem is that I am a little unsure how to implement the knowledge I have about selection sort with one dimensional arrays on a two dimensional platform. I want them to sort left to right, top to bottom. So essentially, I need it to sort in the fashion that a nested for loop inserts numbers into the array elements.
I have the concept of using nested for loops, but I am having trouble wrapping my head around working it into the void function.
As usual and per the tradition of this forum, I did my best to get it, but I am getting errors and at this point it feels like I am jamming variables into spots that I am unsure if they even go there.
I would greatly appreciate any help on this issue.
Thank you, and here is my code so far. Techgique.
PS - If needed, Win 7, DevC, i7
Code:
#include <iostream>
#include <cstdlib>
using namespace std;

// Random 2d array 100 numbers sorted by Steve P.
const int amount = 100;

int main()
{
    void selectionSort(int arr[][8], int);
    int row, col, possible;
    int arr[13][8];

    for (row=0;row<14;row++)
    {
        cout<<endl;
        for (col=0;col<8;col++)
        {
            arr[row][col]=600+rand()%199;
            if ((row==13)&&(col==3))
            {
                col=9;
            }
        }
    }

    selectionSort (arr, amount);

    cout<<"Your sorted array of random numbers ranging from 600 to 799."<<endl;
    for (row=0;row<13;row++)
    {
        cout<<endl;
        for (col=0;col<8;col++)
        {
            cout<<arr[row][col]<<" ";
            if ((col+1) % 8 == 0)
                cout << endl<<endl;
            if ((row==12)&&(col==3))
            {
                col=9;
                row=14;
            }
        }
    }
    cout<<endl;
    cin.get();
    return 0;
}

// something feels out of order here
// and it seems that I am flinging values around more than I should be
void selctionSort( int array [][8], int size)
// Not sure about this 8, but again, only comfortable with 1d arrays so far,
// and no values were needed using the 1d version (for the first number)
{
    int startScan, startScan2, minIndex, minIndex2, minValue;

    for (startScan = 0; startScan < (size - 1); startScan++)
    {
        minIndex=startScan;
        minValue = array[startScan][startScan2];
        for (int index = startScan+1; index<size; index++)
        {
            for (int index2 = startScan2+1; index2<size; index2++)
            {
                if (array[index][index2] < minValue)
                {
                    minValue = array [index][index2];
                    minIndex = index;
                    minIndex2 = index2;
                }
            }
        }
        array[minIndex][minIndex2] = array[startScan][startScan2];
        array[startScan][startScan2] = minValue;
    }
}
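One possible way to approach the question above (a sketch only, not the poster's corrected code): treat the ROWS x COLS grid as a single flat sequence of ROWS*COLS values and run an ordinary one-dimensional selection sort over flat indices, mapping each flat index k back to arr[k / COLS][k % COLS]. The sketch assumes a fully filled 13x8 array and ignores the partially filled last row handled in the original post.

#include <cstdlib>
#include <iostream>
#include <utility>

const int ROWS = 13;
const int COLS = 8;

// Selection sort over a 2D array, treating it as one flat sequence so the
// result is ordered left to right, top to bottom.
void selectionSort2D(int arr[][COLS], int rows)
{
    const int n = rows * COLS;
    for (int start = 0; start < n - 1; ++start)
    {
        int minPos = start;
        for (int k = start + 1; k < n; ++k)
        {
            if (arr[k / COLS][k % COLS] < arr[minPos / COLS][minPos % COLS])
                minPos = k;
        }
        std::swap(arr[minPos / COLS][minPos % COLS],
                  arr[start / COLS][start % COLS]);
    }
}

int main()
{
    int arr[ROWS][COLS];
    for (int r = 0; r < ROWS; ++r)
        for (int c = 0; c < COLS; ++c)
            arr[r][c] = 600 + std::rand() % 200;   // values in [600, 799]

    selectionSort2D(arr, ROWS);

    for (int r = 0; r < ROWS; ++r)
    {
        for (int c = 0; c < COLS; ++c)
            std::cout << arr[r][c] << ' ';
        std::cout << '\n';
    }
    return 0;
}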
|
http://cboard.cprogramming.com/cplusplus-programming/140226-2d-selection-sort.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
James Bottomley wrote:
>> The mechanism is in place, but the SCSI stack still needs a few changes
>> to pass down the correct errors. The easiest would be to pass down
>> pseudo-sense keys (I'd rather just call them something else as not to
>> confuse things, io error hints or something) to
>> end_that_request_first(), changing uptodate from a bool to a hint.
>
> Yes, I'm ready to do this in SCSI. I think the uptodate field should
> include at least two (and possibly three) failure type indications:
>
> - fatal: error cannot be retried
> - retryable: error may be retried
>
> and possibly
>
> - informational: This is dangerous, since it's giving information about
> a transaction that actually succeeded (i.e. we'd need to fix drivers to
> recognise it as being uptodate but with info, like sector remapped)
>
> Then, we also have an error origin indication:
>
> - device: The device is actually reporting the problem
> - transport: the error is a transport error
> - driver: the error comes from the device driver.
>
> So dm would know that fatal transport or driver errors could be
> repathed, but fatal device errors probably couldn't.

I apologize for not starting a new thread, but I just wanted some feedback as to whether or not the attached patch is headed in the right direction or even acceptable. block-err.patch adds new errnos to include/linux/errno.h (it does not touch the asm values), so useful I/O error info can be passed from callers of end_that_request_first to bio_endio and eventually to the DM/MD endio functions.

I have an alternative patch that defines BLK_ERR_xxx values instead of touching errno.h, but because the error values get passed through the request code, bio code, and DM/MD code, the callers of bio_endio that are already using -Exxx values could present a problem. It would be nice to change them to the BLK_ERR_xxx values so the bio layer could have a single error value namespace. It's a more invasive change, as there are several callers passing at least -EIO, -EWOULDBLOCK, and -EPERM, so I am not sure if that is going to be OK since we are already in 2.6.3?

Thanks,
Mike Christie
mikenc@us.ibm.com

diff -aurp linux-2.6.3-orig/drivers/block/ll_rw_blk.c linux-2.6.3-ec/drivers/block/ll_rw_blk.c
--- linux-2.6.3-orig/drivers/block/ll_rw_blk.c 2004-02-17 19:57:16.000000000 -0800
+++ linux-2.6.3-ec/drivers/block/ll_rw_blk.c 2004-02-18 12:33:50.000000000 -0800
@@ -2456,8 +2456,13 @@ static int __end_that_request_first(stru
 	if (!blk_pc_request(req))
 		req->errors = 0;
-	if (!uptodate) {
-		error = -EIO;
+	/*
+	 * Most drivers set uptodate to 0 for error and 1 for success.
+	 * MD/DM ready drivers will set 1 for success and a -Exxx
+	 * value to indicate a specific error.
+	 */
+	if (uptodate < 1) {
+		error = (uptodate == 0 ? -EIO : uptodate);
 		if (blk_fs_request(req) && !(req->flags & REQ_QUIET))
 			printk("end_request: I/O error, dev %s, sector %llu\n",
 				req->rq_disk ? req->rq_disk->disk_name : "?",
@@ -2540,7 +2545,7 @@ static int __end_that_request_first(stru
 /**
  * end_that_request_first - end I/O on a request
  * @req: the request being processed
- * @uptodate: 0 for I/O error
+ * @@uptodate: <= 0 to indicate an I/O error.
  * @nr_sectors: number of sectors to end I/O on
  *
  * Description:
@@ -2561,7 +2566,7 @@ EXPORT_SYMBOL(end_that_request_first);
 /**
  * end_that_request_chunk - end I/O on a request
  * @req: the request being processed
- * @uptodate: 0 for I/O error
+ * @uptodate: <= 0 to indicate an I/O error.
  * @nr_bytes: number of bytes to complete
  *
  * Description:

diff -aurp linux-2.6.3-orig/include/linux/errno.h linux-2.6.3-ec/include/linux/errno.h
--- linux-2.6.3-orig/include/linux/errno.h 2004-02-17 19:59:12.000000000 -0800
+++ linux-2.6.3-ec/include/linux/errno.h 2004-02-18 12:45:42.000000000 -0800
@@ -23,6 +23,14 @@
 #define EJUKEBOX	528	/* Request initiated, but will not complete before timeout */
 #define EIOCBQUEUED	529	/* iocb queued, will get completion event */
+/* Block device error codes */
+#define EFATALDEV	540	/* Fatal device error */
+#define EFATALTRNSPT	541	/* Fatal transport error */
+#define EFATALDRV	542	/* Fatal driver error */
+#define ERETRYDEV	543	/* Device error occured, I/O may be retried */
+#define ERETRYTRNSPT	544	/* Transport error occured, I/O may be retried */
+#define ERETRYDRV	545	/* Driver error occured, I/O may be retried */
+
 #endif
 #endif
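As a side note for readers following the thread: the severity/origin split proposed above maps onto a small decision table. The user-space sketch below is purely illustrative (the enum and function names are invented for this sketch and are not kernel definitions); it only encodes the dm rule quoted above, i.e. fatal transport or driver errors may be repathed, fatal device errors may not.

#include <cstdio>

// Illustration only: the severity/origin classification from the proposal.
enum class Severity { Fatal, Retryable, Informational };
enum class Origin { Device, Transport, Driver };

// dm-style decision: fatal transport/driver errors may be repathed,
// fatal device errors may not.
static bool should_repath(Severity s, Origin o)
{
    return s == Severity::Fatal &&
           (o == Origin::Transport || o == Origin::Driver);
}

int main()
{
    std::printf("fatal/transport -> repath? %d\n",
                static_cast<int>(should_repath(Severity::Fatal, Origin::Transport)));
    std::printf("fatal/device    -> repath? %d\n",
                static_cast<int>(should_repath(Severity::Fatal, Origin::Device)));
    return 0;
}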
|
http://lkml.org/lkml/2004/2/18/381
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Class for determining the optimal partitioning of mesh entities. More...
#include <GeomDecomp.hpp>
Class for determining the optimal partitioning of mesh entities.
Derived from the Partition class.
The GeomDecomp class has no data associated with it, only member functions. All data is inherited from the Partition class.
GeomDecomp serves two purposes. It adds functions that compute geometric information for mesh entities, such as the center point of a mesh entity, and it defines virtual functions to be specialized by other classes that interface with partitioning packages such as Zoltan.
Definition at line 71 of file GeomDecomp.hpp.
Convert a single mesh entity to a single point.
The entity_to_point function is used in the case where a mesh entity is an element with many nodes. Then something like the element centroid can be used to define a single coordinate point for it.
Definition at line 133 of file GeomDecomp.cpp.
Used to return the nodal entities that compute_entity_centroid averages.
The return value is the mesh entities from which the coordinates were obtained.
Definition at line 34 of file GeomDecomp.cpp.
Returns a vector of vectors containing the coordinates of the nodes that were used to compute the centroid.
The return value is the output coordinates that entity_to_point would average to determine a centroid.
Definition at line 70 of file GeomDecomp.cpp.
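To make the centroid idea concrete, here is a minimal sketch under assumed types (a plain std::array/std::vector representation, not the actual stk::rebalance or Partition interfaces, which go through mesh objects omitted here): average the nodal coordinates gathered for an entity to get the single point used for partitioning.

#include <array>
#include <cstddef>
#include <vector>

// Illustration only: averaging nodal coordinates to get an entity centroid.
// The Point alias and free function are assumptions, not the STK API.
using Point = std::array<double, 3>;

Point compute_entity_centroid(const std::vector<Point>& node_coords)
{
    Point centroid{0.0, 0.0, 0.0};
    if (node_coords.empty())
        return centroid;
    for (const Point& p : node_coords)
        for (std::size_t d = 0; d < 3; ++d)
            centroid[d] += p[d];
    for (std::size_t d = 0; d < 3; ++d)
        centroid[d] /= static_cast<double>(node_coords.size());
    return centroid;
}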
|
http://trilinos.sandia.gov/packages/docs/r11.2/packages/stk/doc/html/classstk_1_1rebalance_1_1GeomDecomp.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Creating Check Box in Java Swing
Creating Check Box in Java Swing
...
component in Java Swing.
In this section, you can learn simply creating the
Check Box in Java Swing. Check Boxes are created in swing by creating the
instance
Check Box Validation in PHP - PHP
Check Box Validation in PHP How can validations done on check boxes more than 3? Hi Friend,
Please visit the following link:
Popup dialog box in java
on your application due to some event. With the help of java swing toolkit, you can create popup dialog box.
Java swing toolkit provide tow types of dialog box...Popup dialog box in java How to display popup dialog box in java
Option Box Value - Java Beginners
Option Box Value Hi Friends,
I have one option box which is division,
division have dynamically data,if user select any division then
his option box is populated
(work schedule,Peronal Area,personal sub area,business
Message Box Java
,options,options[2]);
Here is an example of displaying a message box using
swing...
Dialog Box in Java - Swing Dialogs
Show Message and Confirm Dialog Box...In this section we will create a message box in Java using a Java API class
Java Message Box
Java Message Box
In this tutorials you will learn about how to create Message Box in Java.
Java API provide a class to create a message Box in java... field or combo box.
showMessageDialog : Display a message with one button
Check Box Midlet Example
;
This example illustrates how to create check boxes in to your form. In this
example we are creating a Form("...;,
"J2ME", "J2EE", "JSF"). if user select a check
box
Scrolling List Box. - Java Beginners
Scrolling List Box. How can is make a list box scrollable by using method ? Please give me the
code snipetts
Show input dialog box
Swing Input Dialog Box Example - Swing Dialogs
... and interactive
feature of Java Swing. You have been using the System.in for inputting
anything from user. Java Swing
provides the facility to input any thing (whether
Combo Box operation in Java Swing
Combo Box operation in Java Swing
... button
then the text of the text box is added to the combo box if the text box... the Combo
Box component, you will learn how to add items to the
combo box
how to insert check box
how to insert check box how to insert check box into jtable row in swing
jsp list box - Java Beginners
jsp list box I have two list boxs. the values for the first list box is retrieved from the mysql database. I want to fill the second list box selected item from the database.
Please help me in this regard.
Your help
Show Dialog Box in Java
Show Dialog Box in Java - Swing Dialogs
...
the message Dialog box. Our program display "Click Me" button on the
window... box as follows:
A simple message dialog box which has only one
button
remove item from list box using java script - Java Beginners
remove item from list box using java script remove item from list box using java script Hi friend,
Code to remove list box item using java script :
Add or Remove Options in Javascript
function addItem
PHP List Box Post
The PHP Post List box is used in the form. It contains multiple value
User can select one or more values from the PHP list Box
This PHP Post List Box is the alternate of Combo box
PHP Post List Box Example
<?php
PHP list box mysql
PHP List box that displays the data from mysql table.
User can select any value from the PHP list box.
Example of PHP MYSQL List Box
Code
<...
<option value=1>1</option>
<option value
PHP list box
The PHP List box is used in the form.
It takes input values from the user.
The PHP List box contains multiple values at a time.
PHP List Box Example...;select
<option
value
Hollow Box
to exit. For example, if the user keys in 8, the hollow box (of length and width..., another example, if the user keys in 7, the hollow box (of length and width...;Hello Friend,
Try this:
import java.util.*;
class Box{
public void check(Scanner
Check Box in HTML
Check Box in HTML
..., user
can choose a radio button in html page.
Understand with Example...
create a set of check box, this set of check box can be selected more than one
dialog box
Java show series of at least four interview questions hello.
write....
at the end of the interview, use a dialog box to ask whether the user wants... the results of each question indicate how many users chose the first option,second
Dialog Box Input Loop
Prev: Example: Capitalize | Next:
Java NotesDialog Box Input Loop
Indicating end of input with cancel or close box, a special value, or empty input.... For dialog box input, this could be clicking
the close box or hitting
java script text box
java script text box hi,
I created a button when i click on button(next/prev) new two textbox is created. i want to do the two textbox will show... in alert).
i also want the text box should generate in front of NEW button(next/prev
validation.....
validation..... hi..........
thanks for ur reply for validation code.
but i want a very simple code in java swings where user is allowed to enter... give a message box that only numerical values should be entered. How can
JavaScript Combo Box Validation
JavaScript Combo Box Validation
This application illustrates how to validate the combo box using JavaScript
validation.
In this example we create a combo box of different
Jcombo box - Swing AWT
Jcombo box Hello sir
i found dis site today...realy superb evn i complete half project with ur examples
sir i hav problem related to combo box
i... for this? Can u plz just gve me a simple example
populating text box using jsp code
populating text box using jsp code Sir,
How to populate related values in a text box after selecting value from drop down list using JSP and mysql. I tried using Ajax from your example. But for some browser it does not support
JFrame Close Box
Java: JFrame Close Box
Terminating the program when the close box... closes.
For example, it's typical to check to see if there is any unsaved work... that you wouldn't normally
change. However, the close box only closes the window
input box
input box give me the code of input box in core java
Swing Program
Swing Program Write a java swing program that takes name and marks as input with all validation. The name and marks will be displayed in message box when user clicks on a button
action for dropdown box - Java Server Faces Questions
for populating a list box from a drop-down selection?
What I want to do is give... a selection from the drop-down list, the list box beside it gets populated... from my table.
When the user selects one catagory, the list box gets populated
dropdown box in jsf - Java Server Faces Questions
dropdown box in jsf Hi friends,
AssigningJob... box...,
For solving the problem some points to be remember :
For example You select a country
Javascript List Box - JSP-Servlet
itself.my problem is in list box the semester which i selected is not showing in list box as selected.when i select,the page refreshes but i get the result what i expected.i need to show in list box as semester is selected but it doesnt
retrieve the data to text fields from database on clicking the value of combo box
but, this will be helpful for you.... box retrieve the data to text fields from database on clicking the value of combo box .
I am not getting it plz help me out .
hi
Dialog Box Application in Java Swing |
Show Message and Confirm Dialog Box...-to-One Relationship |
JPA-QL Queries
Java Swing Tutorial Section
Java Swing Introduction |
Java 2D
API |
Data Transfer in Java Swing
validation
controller class ....ple for this example ple write validater class and bean config... language="java" contentType="text/html; charset=ISO-8859-1"
pageEncoding...;
<form:option
<form:option
combo box
to that user in left list.when we select module from left list and click INCLUDE button...combo box Hi,
[_|] dropdown box...] | |
| LEFT LIST | | RIGHT LIST
Time validation
box is in the correct format or not using java script.
Please help me for doing...;input type="button" value="Check" onclick="return check...Time validation Hi. I have a text box in html to get time
Use of Select Box to show the data from database
Use of Select Box to show the data from database
Example program using Select Box to show retrieved data from database
This example will describe you the use of Select Box in a JSP
Show message and confirm dialog box
Show Message and Confirm Dialog Box - Swing Dialogs... use in
your swing applications, example of each type of dialog boxes.... Once you click
on the first button then the simple message box will open which
combo box connection
combo boxes.
Here is Java swing code:
import java.sql.*;
import...combo box connection how to provide connection between three combo boxes,if my 1st combo box is course and 2nd combo box is semester and 3rd combo
Jdialog box with textfield - Java Beginners
Jdialog box with textfield i have to create a dialog box with 2...(JFrame parent)
{
JButton button = new JButton("OK");
d = new JDialog...);
panel.add(text2);
panel.add(button);
d.getContentPane().add(panel
Choice Option (Combo) In Java
Choice Option (Combo) In Java
In this section, you will learn how to create Drop-Down
List... of constructing a drop down list in java by using java awt.
There is a program Swing Tutorials
button in java swing. Radio Button is like check box.
... and drop
component (drop down list, text area, check box, radio button etc.) from one... in Java Swing.
Dialog Box In Swing Application
how to calculate the price on the option box
how to calculate the price on the option box How i calculate the value when i using a option box with 2 option..first option i used for product name and for the second i used for the quantity..which function should i used
HI Jsp check box..!
HI Jsp check box..! Hi all..
I want to update the multiple values of database table using checkbox..after clicking submit the edited field has to update and rest has to enable to update...please help me..its urgent
What is Java Swing?
What is Java Swing?
Here, you will know about the Java swing. The Java
Swing provides... and GUIs components. All Java Swing classes imports form the import
confirm message box - Java Beginners
confirm message box How can I create a confirm message with Yes and No button instead of OK/Cancel buttons in java script inside a jsp? Hi friend,
Code to help in solving the problem :
Untitled
Drop Box
Drop Box program draw 2d shapes in java
Create a JRadioButton Component in Java
button in java swing. Radio Button is like check box. Differences between check
box and radio button are as follows:
Check Boxes are separated from one to another where
Radio Buttons are the different-different button like check box
check box condition
check box condition Hai,
my application has two check box one for chart and another one for table.when We click chart check box download only chart but table also download.same problem in table slection..xsl coding was used
how to insert list box in java script dynamically and elements retrieving from database like oracle
how to insert list box in java script dynamically and elements retrieving from database like oracle Hi,
how to dynamically increase size of list... insert new course in a table.. It should be seen in my list box
Dynamic check box problem
Dynamic check box problem In my project i have used a dynamic table, in the each row of that table there is one check box [that i have created... check boxes ... pleas help me as soon as possible...
1)application.jsp
add button to the frame - Swing AWT
for more information. button to the frame i want to add button at the bottom... JFrame implements ActionListener {
JButton button = new JButton("Button
scroll bars to list box - JSP-Servlet
scroll bars to list box Can I add scroll bars to a list box in struts? Hi friend,
Scroll the list box in struts
Two attribute set "multiple" "size".
Select Tag Example
Select Tag
how to insert list box in java script dynamically and elements retrieving from database like oracle
how to insert list box in java script dynamically and elements retrieving from database like oracle hi all,
how can i insert elements into java script list box retrieving from Database.
whenever I insert any element in the Db script validation - Java Beginners
Button Validation
function callEvent1...java script validation hi,
i have two radio buttons yea and no. all text fiels r deactivated, when i click no radio button. its get active
autosuggest box - Ajax
autosuggest box Java example How to implement auto suggest box using Ajax-DWR technology in jsp/html
To display suggestions in a text box - Ajax
, to get the suggestions i mean when i enter the alphabet in a text box(For ex:'A'), the names that starts from 'A' have to display in the text box.
The names must be get from database,Please help me to do this task.
Example: When i
java combo box
java combo box how to display messagedialogbox when the combobox is null,
Thanks in advance
Change background color of text box - Java Beginners
Change background color of text box Hi how can i change the background color to red of Javascript text box when ever user enters an incorrect value... check(){
if (document.getElementById('in').value=="amit
Login Box - Java Beginners
Login Box Hi, I am new to Java and I am in the process of developing a small application which needs a login page. I am planning to have my database... it with the corresponding password tosee if the login details match.I've
validation in java script
validation in java script i have put this code for only entering integer value in text box however error occured...="submit" value="Check">
</pre>
</form>
</html>
Validation
Validation Hi..
How to Validate blank textfield in java that can accepts only integers?
Have a look at the following link:
validation
validation please help me to check validation for
<form>...;select
<option value="1">Select</option>
<option value="2">Pancard</option>
<option value="3"> i want to open a new dialog box after clicking "upload" button, it should have a text field, browse button to browse the file from...);
JButton button=new JButton("Browse");
button.addActionListener(new
Swing Applet Example in java
Java - Swing Applet Example in java
... swing in an applet. In this example,
you will see that how resources of swing... button and if first text box is blank then the label lbl
shows the message
javascript form validation
javascript form validation How to validate radio button , dropdown list and list box using onsubmit event..please give me a sample example..If we do not select any option,it shows an error..
Hello Friend,
Try
How to Hide Button using Java Swing
How to Hide Button using Java Swing Hi,
I just begin to learn java programming. How to hide the button in Java programming. Please anyone suggest or provide online example reference.
Regards,
hi,
In java
Java Code - Swing AWT
Java Code How to Display a Save Dialog Box using JFileChooser... index;
BufferedImage bi, bufferImage;
int w, h;
static JButton button...);
}
button=new JButton("Save");
button.addActionListener(new ActionListener
check null exception java
check null exception java check null exception java - How to check the null exception on Java?
The null pointer exception in java occurs... this error. The only way to handle this error is validation. You need to check
Radio Button In Java
on the checkbox button. A check box
group button in a CheckboxGroup can be in the "...
Radio Button In Java
Introduction
In this section, you will learn how to create Radio
Button
JTable Cell Validation? - Swing AWT
://
Thanks it's not exactly...JTable Cell Validation? hi there
please i want a simple example...(table);
JLabel label=new JLabel("JTable validation Example",JLabel.CENTER
Swing - Applet
information on swing visit to : Hello,
I am creating a swing gui applet, which is trying to output all the numbers between a given number and add them up. For example
|
http://www.roseindia.net/tutorialhelp/comment/78048
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Haiti says no to dolphin captivity
From ANIMAL PEOPLE, June 2004:
PORT AU PRINCE–Six dolphins caught for exhibition in mid-May
by a Haitian firm with Spanish backing swam free on June 3 through
the intercession of Haitian environment minister Yves Andre
Wainwright and agriculture minister Philippe Mathieu.
Wainwright and Mathieu intervened at request of Dolphin
Project founder Ric O’Barry, whose 35-year-old effort to liberate
captive dolphins has operated since the beginning of 2004 under the
auspices of the French organization One Voice.
With a U.S. Coast Guard patrol boat maintaining security,
O’Barry and Guillermo Lopez, DVM, of the Dominican Republic Academy
of the Sciences dismantled the sea pen holding the dolphins.
Wife Helene O’Barry and Jane Regan of Associated Press snapped
digital photos from the beach.
The liberation marked the rejection of dolphin capturing as a
commercial enterprise in one of the poorest nations in the world,
even as entrepreneurs from other island nations rush to cash in on
the boom in marketing swim-with-dolphins tourist attractions.
The liberation also demonstrated the resolve of the present
Haitian government to start enforcing conservation laws that long
went ignored by their predecessors, as a succession of shaky regimes
have struggled to uphold any law and order at all.
Ric O’Barry flew to Haiti after One Voice received a tip on
May 18 that eight bottlenose dolphins had been impounded in a shallow
sea pen in the Arcadins Islands. O’Barry reached the scene on May 23.
Flash flooding and mudslides elsewhere in Haiti on May 26
killed at least 2,000 Haitians and displaced 40,000.
Meeting with O’Barry on June 2, “Mr. Matthieu highlighted
the connection between the recent flooding disaster and the dolphin
capture,” O’Barry recounted. “He pointed out that more than 90% of
Haiti is deforested, mainly because most of its eight million
inhabitants need charcoal to cook. When there are no roots in
the ground to reduce runoff and hold the topsoil, the pouring rain
runs freely down the mountains, slamming into villages along with
debris, mud, and gravel.”
O’Barry said Mattieu told him, “We need to find alternative
ways of surviving in order to ensure both our own future and that of
the environment. The same could be said about the dolphin issue.
Allowing entrepreneurs to profit from the misery of our natural
treasures is not going to solve any of our problems. Giving the
dolphins their freedom back is the right thing to do, both for the
dolphins and for the people of Haiti.”
The World Society for the Protection of Animals sponsored
Lopez’ presence, in case veterinary help or persuasion of Haitian
officials was needed, but Matthieu and Wainwright had already
authorized the release before Lopez arrived.
The dolphins required no pre-release treatment, although
O’Barry noted that most had “rake marks” and “stretcher burns” from
conflict with each other in the sea pen and rough handling–but eight
dolphins were captured, and two had died, O’Barry learned from Jose
Roy, whose company, called Action Haiti, arranged their capture.
Action Haiti, not to be confused with the UNICEF relief
project Humanitarian Action Haiti and the pro-Aristide political
group Haiti Action, applied for a permit to capture 10 dolphins on
December 22, 2003.
“On February 2, 2004,” 27 days before former President Jean
Baptiste Aristide was forced from office after five years of
increasing strife, “the permit was approved and issued to Alexandre
Paul, the lawyer representing Action Haiti,” O’Barry wrote.
Haitian law required a population study prior to issuing a
dolphin capture permit. That condition was not met.
The capture permit also stipulated that the dolphins were not
to be sold or transported out of Haiti, and could only be used for
purposes associated with “education and tourism” within Haiti.
“There are very few tourists coming to Haiti, and it is
highly questionable if a tourist attraction is at all viable in this
location,” O’Barry observed.
When O’Barry, Wainwright, and others met on May 22,
O’Barry said, “Everyone at the meeting seemed to think that Action
Haiti might try selling the dolphins to another facility,”
presumably after obtaining a transfer permit which Wainwright said
would not be granted.
After initially refusing to allow O’Barry and Wainwright to
view the dolphins, Roy was persuaded by police.
“Roy revealed that a large Spanish corporation was financing
the entire operation,” O’Barry said. “He said that several dolphin
trainers from Mexico had been brought in to capture and train the
dolphins. He would not give any names. Nor would he disclose which
Mexican company had provided the staff to carry out the captures.”
O’Barry called the releases, “A powerful, positive message
to the rest of the world about Haiti’s respect for nature.”
Barbuda & Antigua
The news from Antigua was rather different. On February 11,
2004, the government of Antigua & Barbuda refused to allow Caribbean
developer John Mezzanotte to capture 12 dolphins per year from
Antiguan waters. On June 3, however, Mezzanotte was allowed to
import eight dolphins.
Mezzanotte is among the promoters of Dolphin Fantaseas, a
swim-with-dolphins attraction started on Anguilla in 1988 with six
dolphins imported from Cuba. The Dolphin Fantaseas facility in
Antigua & Barbuda was begun with three of those dolphins, who were
transferred from Anguilla in December 2001.
Martha Watkins-Gilkes, public relations officer for the
1,200-member Antigua & Barbuda Independent Tourism Promotion
Corporation, announced that her organization would investigate
possible legal action against the dolphin imports.
Weakening U.S. laws
In the U.S., Representative Wayne Gilchrist (R-Maryland) is
pushing a bill supported by the marine mammal exhibition industry to
eliminate federal tracking of dolphins, whales, sea lions, and
seals traded or sold overseas. Gilchrist chairs the Fisheries
Conservation, Wildlife & Oceans Subcomm-ittee of the House Committee
on Resources.
“U.S. parks would only have to report births, deaths, and
transfers of their animals annually, rather than when they occur,”
summarized Sally Kestin, senior writer for the South Florida
Sun-Sentinel.
The Gilchrist bill cleared the Resources Committee in fall
2003. In April 2004 Gilchrist brought it back by proposing an
amendment to study abolishing the captive marine mammal inventory
maintained by the National Marine Fisheries Service, and to
eliminate the requirement that marine mammal parks selling or loaning
regulated marine mammals abroad must obtain a “letter of comity” from
the governments of the recipient nations, certifying that the
foreign facilities meet U.S. standards.
“The proposed changes are part of a pattern of decreased
government oversight of the now $1 billion a year marine park
industry,” wrote Kestin. “In 1994, parks lobbied successfully for
Congress to eliminate a requirement that they submit death reports,
called necropsies, to the government when a marine mammal dies.
Parks no longer needed permits to move their animals out of the
country, and succeeded in having full oversight responsibility of
their animals transferred from the Fisheries Service to the USDA.”
Kestin examined the regulatory relaxation in detail in a
five-part series entitled “Marine Attractions: Below the Surface,”
published between May 16 and May 26, 2004.
“Over nine months,” Kestin wrote in the first part of the
series, “the Sun-Sentinel examined more than 30 years’ worth of
federal documents on 7,127 marine animals that the government
collected but never analyzed. The investigation found that more than
3,850 sea lions, seals, dolphins and whales have died under human
care, many of them young. Of nearly 3,000 whose ages could be
determined, a quarter died before they reached age one, half by age
seven.
“Of about 2,400 deaths in which a specific cause is listed,”
Kestin continued, "one in five marine mammals died of uniquely human
hazards or seemingly avoidable causes such as capture shock, stress
during transit, poisoning, and routine medical care. Thirty-five
animals died from ingesting foreign objects” found in their tanks.
Kestin asked 129 marine mammal facilities for their side of
the issues, but SeaWorld, Six Flags Inc., the Indianapolis Zoo,
the National Aquarium, the MGM Mirage, the West Edmonton Mall,
Theatre of the Sea, the Miami Seaquarium, Dolphin Research Center,
and Buttonwood Park Zoo all refused to share basic pertinent
information.
Most have been involved in controversies pertaining to marine
mammal captivity, and several still are.
The West Edmonton Mall, as of May 23, is no longer a
dolphin exhibitor. Howard, the last of four dolphins kept there
since 1985, was transferred to Theatre of the Sea in Islamorada,
Florida, near his capture site. The other three Edmonton Mall
dolphins died in 2000, 2001, and 2003.
Six Flags Inc., which sold the former SeaWorld of Ohio marine mammal
park in March 2004, while keeping the animals, in April transferred
Shouka the orca, age 10, to the Six Flags Marine World park in
Vallejo, California.
Formerly Marine World Africa USA, the Vallejo facility and a
predecessor whale stadium in Redwood City, California, had featured
orcas since 1969, but the last of them died in 2000.
|
https://newspaper.animalpeopleforum.org/2004/06/02/haiti-says-no-to-dolphin-captivity/
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
Description:
------------
Compiling with PHP-FPM enabled on an older SPARC system will result in
/tmp/cc6w5Fh0.s: Assembler messages:
/tmp/cc6w5Fh0.s:39: Error: Architecture mismatch on "cas".
/tmp/cc6w5Fh0.s:39: (Requires v9|v9a|v9b; requested architecture is sparclite.)
Unfortunately my knowledge of SPARC assembly language isn't nearly good enough to fix that. I know that the v9 "cas" opcode does an atomic "compare and swap" operation but I wouldn't know how to translate that into v8 code.
Test script:
---------------
Copy /sapi/fpm/fpm/fpm_atomic.h to fpm_atomic.c and add bogus main() function:
int main () {
int result;
atomic_t mylock;
result = fpm_spinlock(&mylock, 1);
}
Compile using "gcc -mcpu=v8 fpm_atomic.c" will result in error message given.
Expected result:
----------------
Should compile without error.
Actual result:
--------------
sparky:~# gcc -mcpu=v8 fpm_atomic.c
/tmp/cciAbMrC.s: Assembler messages:
/tmp/cciAbMrC.s:121: Error: Architecture mismatch on "cas".
/tmp/cciAbMrC.s:121: (Requires v9|v9a|v9b; requested architecture is sparclite.)
sparky:~#
As the sparc documentation says:
The SPARC v9 manual introduced the newest atomic instruction: compare and swap
(cas)
I don't know how to fix this right now. If you know someone who can, he's
welcome. I've already asked for help.
wait and see
Well, I blatantly copied from PostgreSQL's s_lock.h and came up with this:
diff -Nau fpm_atomic.h.org fpm_atomic.h
--- fpm_atomic.h.org 2009-12-14 09:18:53.000000000 +0000
+++ fpm_atomic.h 2010-11-15 01:50:31.000000000 +0000
@@ -82,7 +82,7 @@
#endif /* defined (__GNUC__) &&... */
#elif ( __sparc__ || __sparc ) /* Marcin Ochab */
-
+#if (__sparc_v9__)
#if (__arch64__ || __arch64)
typedef uint64_t atomic_uint_t;
typedef volatile atomic_uint_t atomic_t;
@@ -118,7 +118,23 @@
}
/* }}} */
#endif
+#else /* sparcv9 */
+typedef uint32_t atomic_uint_t;
+typedef volatile atomic_uint_t atomic_t;
+static inline int atomic_cas_32(atomic_t *lock) /* {{{ */
+{
+ register atomic_uint_t _res;
+ __asm__ __volatile__("ldstub [%2], %0" : "=r"(_res),
"+m"(*lock) : "r"(lock) : "memory");
+ return (int) _res;
+}
+/* }}} */
+
+static inline atomic_uint_t atomic_cmp_set(atomic_t *lock, atomic_uint_t old,
atomic_uint_t set) /* {{{ */
+{
+ return (atomic_cas_32(lock)==0);
+}
+/* }}} */
#else
#error unsupported processor. please write a patch and send it to me
Rationale:
If I'm reading the original code correctly, there's no actual locking done but
instead the code only tests whether it could acquire a lock. 'ldstub' works such
that it returns the current value of the memory region specified and sets it to
all '1' afterwards. Thus, if the return value is '-1' the lock was already set
by another process whereas if it's '0' we acquired the lock. Well, at least in
my certainly flawed logic ;) Since ldstub is atomic I didn't see a need to
explicitly "lock;" the code.
The patch should leave the 'cas' code intact when being compiled on v9 type
SPARC systems. Tested (for successful compilation only!) on Debian (etch) using
gcc 3.3.5. Thus I believe further testing is necessary to verify this is
actually working.
Well, please test and incorporate if you feel the code is doing what it's
supposed to do.
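For readers trying to follow the ldstub discussion: the primitive described above is a plain test-and-set lock. A minimal, portable C++11 illustration of the same idea (using std::atomic_flag instead of SPARC assembly; this is a sketch of the concept, not the fpm_atomic.h code):

#include <atomic>

// Illustration only: test-and-set spinlock in the spirit of the ldstub patch.
// test_and_set() atomically sets the flag and returns its previous value,
// so try_lock() succeeds only if nobody else already held the lock.
class TasLock
{
    std::atomic_flag flag_;
public:
    TasLock() { flag_.clear(); }
    bool try_lock() { return !flag_.test_and_set(std::memory_order_acquire); }
    void unlock() { flag_.clear(std::memory_order_release); }
};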
May I know why you need to compile with v8? Compiling with v9 does not
automatically make your application 64-bit, if that is the reason you want to
choose -v8 here.
The v8 SPARC instruction set is a decade old and is not
being used in any hardware, so I see no reason why we need to use or support
this specific instruction set.
Automatic comment from SVN on behalf of fat
Revision:
Log: - Fixed #53310 (sparc < v9 is not supported)
we've decided sparc < v9 won't be supported. I've just updated the source code to
warn specifically about this.
Of course you may ask: because I'm porting PHP to the ReadyNAS platform which
happens to use a SPARC v8 compatible CPU and thus *needs* the v8 instruction set.
Seeing that you've already made up your mind, I guess there's nothing
more to add here. It makes me wonder why I can't get a response in > 24 hours to
my patch, but you can't wait for me to answer for like 4 hours.
you should be able to compile with a gcc version which provides the
__sync_bool_compare_and_swap builtin function (>= 4.1).
It's supported by FPM. If FPM cannot be compiled with this version of GCC,
there is a bug in FPM. We'll take care of it.
Is this a reasonable solution?
And you can still use FastCGI, btw. FPM is fairly new, and if new SAPIs have to support soon to be dead OSes, then we will cruelly need more developers to maintain everything :)
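For reference, the GCC builtin mentioned a few lines up has the shape bool __sync_bool_compare_and_swap(type *ptr, type oldval, type newval), and a compare-and-swap style lock attempt built on it looks roughly like the sketch below (an illustration of the idea only, with invented function names, not the actual fpm_atomic.h source):

#include <stdint.h>

typedef volatile uint32_t atomic_t;

/* Illustration only: CAS-style lock attempt using the GCC >= 4.1 builtins.
 * Succeeds (returns 1) only if *lock was 0 and was atomically set to 1. */
static inline int fpm_style_trylock(atomic_t *lock)
{
    return __sync_bool_compare_and_swap(lock, 0u, 1u);
}

static inline void fpm_style_unlock(atomic_t *lock)
{
    /* __sync_lock_release stores 0 with release semantics. */
    __sync_lock_release(lock);
}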
As you may have read in my initial post, the compiler I (have to) use is gcc
3.3.5 which falls a bit short of 4.1 ;) Also, you may want to read the
backend/port/tas/solaris_sparc.s file from the official PostgreSQL sources:
! "cas" only works on sparcv9 and sparcv8plus chips, and
! requies a compiler targeting these CPUs. It will fail
! on a compiler targeting sparcv8, and of course will not
! be understood by a sparcv8 CPU. gcc continues to use
! "ldstub" because it targets sparcv7.
There they work around this by using a condition (for the SUN compiler) like
this:
#if defined(__sparcv9) || defined(__sparcv8plus)
cas [%o0],%o2,%o1
#else
ldstub [%o0],%o1
#endif
and in their actual generic lock implementation (src/include/storage/s_lock.h)
the code is this:
#if defined(__sparc__) /* Sparc */
#define HAS_TEST_AND_SET
typedef unsigned char slock_t;
#define TAS(lock) tas(lock)
static __inline__ int
tas(volatile slock_t *lock)
{
register slock_t _res;
/*
* See comment in /pg/backend/port/tas/solaris_sparc.s for why this
* uses "ldstub", and that file uses "cas". gcc currently
generates
* sparcv7-targeted binaries, so "cas" use isn't possible.
*/
__asm__ __volatile__(
" ldstub [%2], %0 \n"
: "=r"(_res), "+m"(*lock)
: "r"(lock)
: "memory");
return (int) _res;
}
#endif /* __sparc__ */
Now my general idea was that if there's a reason for PostgreSQL to keep that
code around, there might be a reason for PHP to do so as well. Obviously I was
wrong there.
I also do not see the real advantage of 'cas' over 'ldstub' in the current
scenario since both are atomic, both are supported (ldstub even on v7) and both
do the job perfectly well.
@pajoye@php.net
Did it ever occur to you that I found this bug/problem because I *specifically
wanted to use FPM* in the first place? Had I wanted to use FastCGI I'd have
certainly done so. Seeing that there already was a solution for Sparc v9 I
thought there might be interest in a solution that would allow PHP to run on
older machines.
Hardware that maybe you're laughing about. But hardware that's still in fairly
wide use. And, come to think of it, hardware that may also be the only hardware
people in poorer countries than the one you're obviously living in are able to
get their hands on. So I first asked and then got my hands dirty and even
provided a possible solution - and one that could be easily implemented, too.
And what for? Only to get ignorance and witty remarks in return. Well, I almost
forgot that the PHP project has such a bad reputation when it comes to bugs and
patches. Thanks for reminding me why. Now you can safely go back to your ivory
tower and think about supporting next decade's hardware only. For my part, I
promise to keep any bugs/problems in PHP I may find in the future to myself and
will do the same for any patches I may come up with.
Btw: the boxes I'm talking about are running Linux (which you could have seen by
looking at the "OS:" tag) and I really have no idea why you'd call that a "soon
to be dead OS". If you have a problem understanding the difference between a CPU
and an OS, may I ask what exactly makes you think you can give some valuable
input here? As for the "cruelly needed developers" you mention: I don't see why
you should need those as long as the community comes up with patches you could
use. Ok, if you keep driving away people like this, I start to have an idea as
to why ;)
It was not meant badly, I was only trying to show you an alternative.
I can't know or judge the reason why you need v8 support, but I have been there many times in the past for my numerous projects.
We have to make decisions about which platforms we can support, and also which we stop supporting. There is nothing personal or aggressive in our replies; we are only trying to explain the status and the reasoning behind it.
Sorry if you took it so badly, that's not the aim of our comments, or mine in particular.
And I meant to write arch, not OS....
@stefan at whocares dot de
Did you run your patch on a ReadyNAS box? If you test it and tell us it works,
there is no reason not to integrate it. As far as I know, it has been tested
for compilation only.
We don't want to leave anyone behind, but as Pierre told you, there are priorities.
We'll be glad if you help us.
First of all: thanks for not taking my rant badly :)
Of course I can run this code and, well "test" it. I would have been happier
however if someone besides me had looked over the code and said "yes, that looks
like it could work" ;)
Right now it *is* running on two ReadyNAS (Sparc) boxes as well as on my SunFire
280R. It doesn't segfault, which to me is a good sign, and it's producing normal
output from the small test scripts I have run. Haven't done extensive testing so
far but will try running Wordpress and Drupal in the next couple of days. If
there's any special test you'd like to see me run against the patched version of
PHP, let me know.
One simple test is to keep the PHP work as small as possible. You can create a file
test.php which does nothing but an "echo".
Then you stress this page with FPM with ab (ab -c 100 -n 10000
While the test is running, you check the status page and see how it's going.
It should be a good primary test.
There is a difference between these 2 instructions:
ldstub -> operates on a single-byte (8-bit) value
casa -> operates on a 32-bit word
Now, if someone wanted to use these instructions to implement an atomic mutex
lock, then one could argue that both instructions are interchangeable. In
this case, that is not so. Hence, I would argue that there is a valid case
for using this specific 'compare and swap' instruction.
Using the 'ldstub' instruction in this context is not what we want.
A few curl or HTTP GET requests cannot expose the potential race conditions.
Not using the 'atomic' operations will be the better approach for your scenario.
Thank you for your input. Just so I understand better:
You said "in this context is not what we want". Could you clarify that a bit? As
far as I understand the code (and I don't claim to fully understand it) it just
checks whether a specific memory region contains a value of "0" and, if so, it
tries to set it to "1". At least I couldn't find any calls to the atomic "add"
functions although they are provided for some architectures.
Also you said: "not using 'atomic' operation will be the better approach for
your scenario". Would you have an example / explanation on how to do that? I'd
really be interested in that so I could eventually come up with a good and
working solution.
As per request here the results when running 'ab' against a patched version of
PHP 5.3.3 running on the ReadyNAS.
Environment:
============
Web Server: Nginx 0.8.53
PHP-FPM : Running with dynamic processes, 2 min, 2 minspare, 3 maxspare
Please keep in mind that the CPU of the ReadyNAS is running at whooping 186 MHz,
so the results obviously won't be lightning fast:
Results:
========
desktop:~$ ab -c 100 -n 10000
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Licensed to The Apache Software Foundation,
Benchmarking develnas (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software: nginx/0.8.53
Server Hostname: develnas
Server Port: 8880
Document Path: /test.php
Document Length: 16 bytes
Concurrency Level: 100
Time taken for tests: 110.629 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 1700000 bytes
HTML transferred: 160000 bytes
Requests per second: 90.39 [#/sec] (mean)
Time per request: 1106.289 [ms] (mean)
Time per request: 11.063 [ms] (mean, across all concurrent requests)
Transfer rate: 15.01 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.6 0 10
Processing: 105 1101 212.8 1122 2348
Waiting: 105 1100 212.8 1121 2348
Total: 107 1101 212.7 1122 2349
Percentage of the requests served within a certain time (ms)
50% 1122
66% 1129
75% 1139
80% 1145
90% 1547
95% 1552
98% 1564
99% 1571
100% 2349 (longest request)
desktop:~$
Well, what I meant by 'in this context' is: compare-and-set is a general-purpose
function within FPM and is not intended to simply test-and-swap a single
byte. Hence, it is not appropriate to replace cas with the ldstub instruction.
Also, what I meant by not using the atomic option is: FPM currently implements
a variety of common APIs using assembly instructions. It is possible to do
the same using a mutex lock and regular C code. This alternate approach allows it
to run on a variety of architectures, though it might run slightly slower.
Doing it this way would be a better option for your case.
Thanks for your input, greatly appreciated.
It'd be nice if you could help me out a bit more. I grepped through the whole
source and the only use of 'atomic_cmp_set' I could find was within fpm_atomic.h
itself:
(333)sparky:~/devel/build/php5-5.3.3# grep -R 'atomic_cmp_set' sapi
sapi/fpm/fpm/fpm_atomic.h: return atomic_cmp_set(lock, 0, 1) ? 0 : -1;
sapi/fpm/fpm/fpm_atomic.h: if (atomic_cmp_set(lock, 0, 1)) {
(333)sparky:~/devel/build/php5-5.3.3#
Since you're saying it's a general purpose function I'm obviously missing
something here and because I'm an enquiring mind I'd like to know what it is I'm
missing.
I agree that using C code would improve portability. But since the machine I'm
building for is slow enough already, I'd prefer to stick with assembler if
possible.
Stefan, I wasn't sure if you're still subscribed to this, but I'd like to know if
you found a workaround/patch/etc.?
ALL of our servers run on SPARCv8 / Solaris 11, so this issue is very
important to me.
https://bugs.php.net/bug.php?id=53310&edit=1
Hi all,
I've just bought PyCharm for my Django-based projects, and having participated in the beta test program, I am looking forward to continuing to use this excellent IDE.
But during the beta phase I was not using virtualenv-based projects. Now I am, and I'm running into trouble with my unit tests. I can't figure out how to configure it all correctly for the unit testing to run as it should.
Right from the start, it appears "C:\Program Files (x86)\JetBrains\PyCharm 1.1\helpers\pycharm\django_manage.py" is being run as the "manage.py" script, whereas it would be more logical for this to be the manage.py from my project's directory.
Also, the tests aren't run in my project's directory, which gives "App with label app_name could not be found".
Next to that, I have different settings (dev, acc, prd) inside a settings directory. I can load these when invoking manage.py from the command line, but I can't seem to find a way for PyCharm to take them into account (instead, I see "pycharm django settings imported", which is of course not what I want).
I have the impression that the "only" virtualenv support there is, is the possibility to select the virtualenv-specific python interpreter.
Has anyone succeeded in setting up tests (or more globally: use manage.py as it would from the command line with virtualenv activated) within a virtualenv environment?
I should say I'm primarily working on Windows 7 (which complicates matters further, I'm sure).
Thanks in advance!
(PS: The text editor for creating posts doesn't work as it should on Opera 10).
Hello Mathieu,
The way it's supposed to work is the following:
- django_settings.py is needed to override the TEST_RUNNER setting and to
hook PyCharm's graphical test runner into the Django testing system
- django_settings.py imports your actual Django settings module. You can
tell it which one to import by filling the "Settings file" field in the Django
tests run configuration. By default, it tries to import module 'settings'.
- the working directory can also be specified in the Django tests run configuration.
Hope this helps.
--
Dmitry Jemerov
Development Lead
JetBrains, Inc.
"Develop with Pleasure!"
Thanks for clarifying that bit.
I still can't load my specific settings though; could it be that the path is not added when loading them?
The directory structure is as follows:
apps/
app1
app2
...
settings/
__init__.py
common.py
development.py
...
Even when pointing the test runner config to c:/(...)/apps/settings/development.py, these settings are not loaded.
The workaround I'm using now is through Python run commands (manage.py test, etcetera), but of course this way I lose the nice test integration :-)
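A minimal sketch of how such a split-settings package is often wired up; the module names mirror the layout above, but the contents are purely illustrative and not taken from the actual project:

# settings/common.py -- shared defaults (illustrative)
INSTALLED_APPS = (
    'django.contrib.contenttypes',
    'django.contrib.auth',
    'apps.app1',
    'apps.app2',
)
DATABASES = {}  # filled in by the per-environment modules

# settings/development.py -- environment-specific overrides (illustrative)
from settings.common import *  # pull in INSTALLED_APPS etc.

DEBUG = True
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'dev.db',
    }
}

If the settings module that actually gets imported does not pull in the common module like this, settings.INSTALLED_APPS ends up empty and Django reports "App with label ... could not be found".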
Hello Mathieu,
You can edit django_settings.py and print out the value of settings_file
to see what exactly is being imported. The code does support specifying the
path to a settings file in a subdirectory.
--
Dmitry Jemerov
Development Lead
JetBrains, Inc.
"Develop with Pleasure!"
It still seems my settings aren't being loaded correctly. django_settings.py reports that they are (see the traceback, below), but debugging a django test run points out that settings.INSTALLED_APPS is an empty list (which it isn't when running manage.py test items from the shell - that works fine).
Here's the traceback:
E:\Development\django_projects\penguinproject\Scripts\python.exe "C:\Program Files (x86)\JetBrains\PyCharm 1.1\helpers\pydev\pydevd.py" --client 127.0.0.1 --port 49898 --file "C:\Program Files (x86)\JetBrains\PyCharm 1.1\helpers\pycharm\django_manage.py" test items
Testing started at 20:36 ...
pydev debugger: warning: psyco not available for speedups (the debugger will still work correctly, but a bit slower)
pydev debugger: starting
E:\Development\django_projects\penguinproject\lib\site-packages\path.py:32: DeprecationWarning: the md5 module is deprecated; use hashlib instead
import sys, warnings, os, fnmatch, glob, shutil, codecs, md5
settings file: development
pycharm django settings imported
Manager file: manage
Traceback (most recent call last):
File "C:\Program Files (x86)\JetBrains\PyCharm 1.1\helpers\pydev\pydevd.py", line 1165, in <module>
debugger.run(setup['file'], None, None)
File "C:\Program Files (x86)\JetBrains\PyCharm 1.1\helpers\pydev\pydevd.py", line 929, in run
execfile(file, globals, locals) #execute the script
File "C:\Program Files (x86)\JetBrains\PyCharm 1.1\helpers\pycharm\django_manage.py", line 15, in <module>
run_module(manage_file, None, '__main__')
File "c:\python26\Lib\runpy.py", line 140, in run_module
fname, loader, pkg_name)
File "c:\python26\Lib\runpy.py", line 34, in _run_code
exec code in run_globals
File "E:\Development\django_projects\penguinproject\yabe\manage.py", line 11, in <module>
execute_manager(settings)
File "E:\Development\django_projects\penguinproject\lib\site-packages\django\core\management\__init__.py", line 438, in execute_manager
utility.execute()
File "E:\Development\django_projects\penguinproject\lib\site-packages\django\core\management\__init__.py", line 379, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "E:\Development\django_projects\penguinproject\lib\site-packages\django\core\management\base.py", line 191, in run_from_argv
self.execute(*args, **options.__dict__)
File "E:\Development\django_projects\penguinproject\lib\site-packages\django\core\management\base.py", line 220, in execute
output = self.handle(*args, **options)
File "E:\Development\django_projects\penguinproject\lib\site-packages\django\core\management\commands\test.py", line 37, in handle
failures = test_runner.run_tests(test_labels)
File "E:\Development\django_projects\penguinproject\lib\site-packages\django\test\simple.py", line 396, in run_tests
suite = self.build_suite(test_labels, extra_tests)
File "E:\Development\django_projects\penguinproject\lib\site-packages\django\test\simple.py", line 285, in build_suite
app = get_app(label)
File "E:\Development\django_projects\penguinproject\lib\site-packages\django\db\models\loading.py", line 140, in get_app
raise ImproperlyConfigured("App with label %s could not be found" % app_label)
django.core.exceptions.ImproperlyConfigured: App with label items could not be found
Hi, was this post ever resolved? It seems as though this is still an issue.
I am running into a similar issue with this test tool. It seems to ignore my Django version entirely; the error that I get when using the test runner is below. This is the same error that I get when trying to run the newer Django 1.4 on my site instead of the specified Django version.
Traceback (most recent call last):
File "/home/mikewright/.IntelliJIdea11/config/plugins/python/helpers/pycharm/django_test_manage.py", line 105, in <module>
utility.execute()
File "/home/mikewright/.IntelliJIdea11/config/plugins/python/helpers/pycharm/django_test_manage.py", line 73, in execute
PycharmTestCommand().run_from_argv(self.argv)
File "/home/mikewright/.virtualenvs/musicpeeps/local/lib/python2.7/site-packages/django/core/management/commands/test.py", line 49, in run_from_argv
Creating test database for alias 'default'...
super(Command, self).run_from_argv(argv)
File "/home/mikewright/.virtualenvs/musicpeeps/local/lib/python2.7/site-packages/django/core/management/base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "/home/mikewright/.virtualenvs/musicpeeps/local/lib/python2.7/site-packages/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/home/mikewright/.IntelliJIdea11/config/plugins/python/helpers/pycharm/django_test_manage.py", line 60, in handle
failures = TestRunner(test_labels, verbosity=verbosity, interactive=interactive)
File "/home/mikewright/.IntelliJIdea11/config/plugins/python/helpers/pycharm/django_test_runner.py", line 125, in run_tests
return DjangoTeamcityTestRunner().run_tests(test_labels, extra_tests=extra_tests)
File "/home/mikewright/.virtualenvs/musicpeeps/local/lib/python2.7/site-packages/django/test/simple.py", line 381, in run_tests
old_config = self.setup_databases()
File "/home/mikewright/.virtualenvs/musicpeeps/local/lib/python2.7/site-packages/django/test/simple.py", line 317, in setup_databases
self.verbosity, autoclobber=not self.interactive)
File "/home/mikewright/.virtualenvs/musicpeeps/local/lib/python2.7/site-packages/django/db/backends/creation.py", line 256, in create_test_db
self._create_test_db(verbosity, autoclobber)
File "/home/mikewright/.virtualenvs/musicpeeps/local/lib/python2.7/site-packages/django/db/backends/creation.py", line 321, in _create_test_db
cursor = self.connection.cursor()
File "/home/mikewright/.virtualenvs/musicpeeps/local/lib/python2.7/site-packages/django/db/backends/dummy/base.py", line 15, in complain
raise ImproperlyConfigured("settings.DATABASES is improperly configured. "
django.core.exceptions.ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the ENGINE value. Check settings documentation for more details.
Hi Michael,
Sorry, I didn't understand your issue. PyCharm uses the Django version from the interpreter configured in Settings -> Project Interpreter.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/205802119-Best-way-of-using-virtualenv-Django-and-PyCharm
Jacapo - ASE python interface for Dacapo
Introduction
Jacapo is an ASE interface for Dacapo that is fully compatible with ASE. It replaces the old Dacapo interface, which used Numeric python and ASE2. The code was originally developed by John Kitchin, and detailed documentation as well as many examples are available online:
Jacapo is included as an optional calculator in ASE; small differences from the documentation above may occur, and that documentation is no longer maintained.
Jacapo calculator
The Jacapo interface is automatically installed with ase and can be imported using:
from ase.calculators.jacapo import Jacapo
(You will need to have a working installation of Dacapo, however.)
Here is a list of available keywords to initialize the calculator:
Example
Here is an example of how to calculate the total energy of a H atom.
Warning
This is an example only - the parameters are not physically meaningful!
from ase import Atoms, Atom
from ase.io import write
from ase.calculators.jacapo import Jacapo

atoms = Atoms([Atom('H', [0, 0, 0])],
              cell=(2, 2, 2),
              pbc=True)
calc = Jacapo('Jacapo-test.nc',
              pw=200,
              nbands=2,
              kpts=(1, 1, 1),
              spinpol=False,
              dipole=False,
              symmetry=False,
              ft=0.01)
atoms.set_calculator(calc)
print(atoms.get_potential_energy())
write('Jacapo-test.traj', atoms)
Note that all calculator parameters should be set in the calculator definition itself. Do not attempt to use the calc.set_* commands as they are intended to be internal to the calculator. Note also that Dacapo can only operate with periodic boundary conditions, so be sure that pbc is set to True.
Restarting from an old calculation
If the file you specify to Jacapo with the nc keyword exists, Jacapo will assume you are attempting to restart an existing calculation. If you do not want this behavior, set the deletenc flag to True in your calculator definition.
For example, it is possible to continue a geometry optimization with something like this:
from ase.optimize import QuasiNewton

calc = Jacapo('old.nc', stay_alive=True)
atoms = calc.get_atoms()
dyn = QuasiNewton(atoms, logfile='qn.log')
dyn.run(fmax=0.05)
Note, that the stay_alive flag is not stored in the .nc file and must be set when the calculator instance is created.
Atom-projected density of states
To find the atom-projected density of states with Jacapo, first specify the ados dictionary in your calculator definition, as in:
calc = Jacapo( ... , ados={'energywindow': (-10., 5.), 'energywidth': 0.2, 'npoints': 250, 'cutoff': 1.0})
After this is established, you can use the get_ados command to get the desired ADOS data. For example:
energies, dos = calc.get_ados(atoms=[0], orbitals=['d'], cutoff='short', spin=[0])
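The returned arrays can be post-processed like ordinary NumPy data. A minimal sketch (assuming energies and dos come back as equal-length 1-D arrays, with energies in eV, and that NumPy and matplotlib are available):

import numpy as np
import matplotlib.pyplot as plt

# 'energies' and 'dos' are the arrays returned by calc.get_ados(...) above
weight = np.trapz(dos, energies)          # integrated d-DOS weight over the energy window
print('integrated d-DOS weight:', weight)

plt.plot(energies, dos)
plt.xlabel('Energy (eV)')
plt.ylabel('d-projected DOS, atom 0')
plt.savefig('ados-atom0-d.png')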
https://wiki.fysik.dtu.dk/ase/ase/calculators/jacapo.html
What's changed between ECObjects 2 supported in Power Platform and ECObjects 3 supported in iModel.js
This page is for people who have used ECObjects in Power Platform and want to learn how it has changed in iModel.js. It does not cover all the new features of EC 3; rather, it focuses on what has changed between the two versions.
Changes to Schema
- Schema version now has three digits RR.WW.mm where equal RR indicates read compatibility, equal WW indicates write compatibility, mm is for additive changes or modifications that do not break read or write compatibility.
- The namespacePrefix attribute has been replaced with 'alias'.
Changes to Classes
- ECClass has been split into 3 discrete sub-types: ECEntityClass, ECStructClass, and ECCustomAttributeClass. These three types replace the individual 'isDomainClass', 'isStruct', and 'isCustomAttributeClass' attributes respectively. A consequence of this is that a class may be of only one type in EC 3, whereas in EC 2 you could make classes that were simultaneously domain, struct, and custom attribute classes.
- All 4 class types add a 'modifier' flag which can be None (concrete and not sealed/final), Abstract or Sealed.
- A class may only have a base class of the same type (e.g. an ECRelationshipClass cannot have an ECEntityClass as a base class)
- Multiple base classes are only supported on ECEntityClasses in EC 3 and additional restrictions are applied.
- Only one base class may be a concrete class (modifier=None)
- Additional base classes must be abstract and have the IsMixin custom attribute applied
- Properties must be unique across all base classes (e.g. Only one base class may have a 'Name' property defined or inherited)
- ECStructClass and ECCustomAttributeClass definitions should not have any base classes
Changes to Relationships
- Relationship classes may only have one base class and that base class must have constraints which are equal to or more broad than the derived class
- Relationship constraint classes and multiplicity on an endpoint of a relationship must have a common base class and that base class must be specified as the 'abstract constraint' of the relationship.
- The cardinality attribute has been renamed multiplicity, and the format has changed from (x,y) to (x..y); an unbounded y is represented by * instead of N. See ECRelationships for more details on relationship constraints.
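As a rough illustration of the renamed attribute and its new format, a small helper (hypothetical, not part of any Bentley tooling) could translate the old notation:

# Hypothetical helper mapping an EC2-style cardinality string to the EC3
# multiplicity format described above, e.g. "(0,N)" -> "(0..*)".
def cardinality_to_multiplicity(cardinality: str) -> str:
    lower, upper = cardinality.strip('()').split(',')
    upper = '*' if upper.strip().upper() == 'N' else upper.strip()
    return '({}..{})'.format(lower.strip(), upper)

print(cardinality_to_multiplicity('(0,N)'))   # (0..*)
print(cardinality_to_multiplicity('(1,1)'))   # (1..1)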
Changes to Properties
- ECStructArrayProperty replaces ECArrayProperty with 'isStruct' set to true
Changes to important metadata
- Units - Now a first class concept with Unit, KindOfQuantity and Format definitions as top level EC items. The UnitSpecification custom attribute on ECProperty has been replaced with the 'kindOfQuantity' attribute. A KindOfQuantity definition defines the persistence and presentation of the value.
- Property Category - Now a first class concept with PropertyCategory, a top level EC item.
- The StandardValues custom attribute has been replaced with ECEnumerations, a top level EC item.
- All standard custom attributes supported in EC 3 have been moved to the CoreCustomAttributes schema. EditorCustomAttributes and Bentley_Standard_CustomAttributes should no longer be used.
Last Updated: 08 January, 2020
https://www.imodeljs.org/bis/ec/differences-between-ec2-and-ec3/
Request for Comments: 7808
Category: Standards Track
ISSN: 2070-1721
Spherical Cow Group
C. Daboo
Apple
March 2016
Time Zone Data Distribution Service
Abstract
This document defines a time zone data distribution service that allows reliable, secure, and fast delivery of time zone data and leap-second rules to client systems such as calendaring and scheduling applications or operating systems.
Table of Contents
1. Introduction
1.1. Conventions
2. Architectural Overview
3. General Considerations
3.1. Time Zone
3.2. Time Zone Data
3.3. Time Zone Metadata
3.4. Time Zone Data Server
3.5. Observance
3.6. Time Zone Identifiers
3.7. Time Zone Aliases
3.8. Time Zone Localized Names
3.9. Truncating Time Zones
3.10. Time Zone Versions
4. Time Zone Data Distribution Service Protocol
4.1. Server Protocol
4.1.1. Time Zone Queries
4.1.2. Time Zone Formats
4.1.3. Time Zone Localization
4.1.4. Conditional Time Zone Requests
4.1.5. Expanded Time Zone Data
4.1.6. Server Requirements
4.1.7. Error Responses
4.1.8. Extensions
4.2. Client Guidelines
4.2.1. Discovery
4.2.1.1. SRV Service Labels for the Time Zone Data Distribution Service
4.2.1.2. TXT Records for a Time Zone Data Distribution Service
4.2.1.3. Well-Known URI for a Time Zone Data Distribution Service
4.2.1.3.1. Example: Well-Known URI Redirects to Actual Context Path
4.2.2. Synchronization of Time Zones
4.2.2.1. Initial Synchronization of All Time Zones
4.2.2.2. Subsequent Synchronization of All Time Zones
4.2.2.3. Synchronization with Preexisting Time Zone Data
5. Actions
5.1. "capabilities" Action
5.1.1. Example: get capabilities
5.2. "list" Action
5.2.1. Example: List Time Zone Identifiers
5.3. "get" Action
5.3.1. Example: Get Time Zone Data
5.3.2. Example: Conditional Get Time Zone Data
5.3.3. Example: Get Time Zone Data Using a Time Zone Alias
5.3.4. Example: Get Truncated Time Zone Data
5.3.5. Example: Request for a Nonexistent Time Zone
5.4. "expand" Action
5.4.1. Example: Expanded JSON Data Format
5.5. "find" Action
5.5.1. Example: find action
5.6. "leapseconds" Action
5.6.1. Example: Get Leap-Second Information
6. JSON Definitions
6.1. capabilities Action Response
6.2. list/find Action Response
6.3. expand Action Response
6.4. leapseconds Action Response
7. New iCalendar Properties
7.1. Time Zone Upper Bound
7.2. Time Zone Identifier Alias Property
8. Security Considerations
9. Privacy Considerations
10. IANA Considerations
10.1. Service Actions Registration
10.1.1. Service Actions Registration Procedure
10.1.2. Registration Template for Actions
10.1.3. Actions Registry
10.2. timezone Well-Known URI Registration
10.3. Service Name Registrations
10.3.1. timezone Service Name Registration
10.3.2. timezones Service Name Registration
10.4. TZDIST Identifiers Registry
10.4.1. Registration of invalid-action Error URN
10.4.2. Registration of invalid-changedsince Error URN
10.4.3. Registration of tzid-not-found Error URN
10.4.4. Registration of invalid-format Error URN
10.4.5. Registration of invalid-start Error URN
10.4.6. Registration of invalid-end Error URN
10.4.7. Registration of invalid-pattern Error URN
10.5. iCalendar Property Registrations
11. References
11.1. Normative References
11.2. Informative References
Acknowledgements
Authors' Addresses
1. Introduction
Time zone data typically combines a coordinated universal time (UTC) offset with daylight saving time (DST) rules. Time zones are typically tied to specific geographic and geopolitical regions. Whilst the UTC offset for particular regions changes infrequently, DST rules can change frequently and sometimes with very little notice (maybe hours before a change comes into effect).
Calendaring and scheduling systems, such as those that use iCalendar [RFC5545], as well as operating systems, critically rely on time zone data to determine the correct local time. As such, they need to be kept up to date with changes to time zone data. To date, there has been no fast and easy way to do that. Time zone data is often supplied in the form of a set of data files that have to be "compiled" into a suitable database format for use by the client application or operating system. In the case of operating systems, often those changes only get propagated to client machines when there is an operating system update, which can be infrequent, resulting in inaccurate time zone data being present for significant amounts of time. In some cases, old versions of operating systems stop being supported, but are still in use and thus require users to manually "patch" their system to keep up to date with time zone changes.
Along with time zone data, it is also important to track the use of leap seconds to allow a mapping between International Atomic Time (TAI) and UTC. Leap seconds can be added (or possibly removed) at various times of year in an irregular pattern typically determined by precise astronomical observations. The insertion of leap seconds into UTC is currently the responsibility of the International Earth Rotation Service.
This specification defines a time zone data distribution service protocol that allows for fast, reliable, and accurate delivery of time zone data and leap-second information to client systems. This protocol is based on HTTP [RFC7230] using a simple JSON-based API [RFC7159].
This specification does not define the source of the time zone data or leap-second information. It is assumed that a reliable and accurate source is available. One such source is the IANA-hosted time zone database [RFC6557].
1.1. Conventions
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
Unless otherwise indicated, UTC date-time values as specified in [RFC3339] use a "Z" suffix, and not fixed numeric offsets.
This specification contains examples of HTTP requests and responses. In some cases, additional line breaks have been introduced into the request or response data to match maximum line-length limits of this document.
2. Architectural Overview
The overall process for the delivery of time zone data can be visualized via the diagram below.
(Diagram: contributors (a) feed publishers A and B (b); the publishers feed a root provider (c); the root provider feeds secondary providers (d); secondary providers, and the root provider itself, serve clients (e).)
Figure 1: Time Zone Data Distribution Service Architecture
The overall service is made up of several layers:
(a) Contributors: Individuals, governments, or organizations that provide information about time zones to the publishing process. There can be many contributors. Note this specification does not address how contributions are made.
(b) Publishers: Publishers aggregate information from contributors, determine the reliability of the information and, based on that, generate time zone data. There can be many publishers, each getting information from many different contributors. In some cases, a publisher may choose to "republish" data from another publisher.
(c) Root Providers: Servers that obtain and then provide the time zone data from publishers and make that available to other servers or clients. There can be many root providers. Root providers can choose to supply time zone data from one or more publishers.
(d) Secondary Providers: Servers that handle the bulk of the requests and reduce the load on root servers. These will typically be simple caches of the root server, located closer to clients. For example, a large Internet Service Provider (ISP) may choose to set up their own secondary provider to allow clients within their network to make requests of that server rather than make requests of servers outside their network. Secondary servers will cache and periodically refresh data from the root servers.
(e) Clients: Applications, operating systems, etc., that make use of time zone data and retrieve that from either root or secondary providers.
Some of those layers may be coalesced by implementors. For example, a vendor may choose to implement the entire service as a single monolithic virtual server with the address embedded in distributed systems. Others may choose to provide a service consisting of multiple layers of providers, many secondary servers, and a small number of root servers.
This specification is concerned only with the protocol used to exchange data between providers and from provider to client. This specification does not define how contributors pass their information to publishers, nor how those publishers vet that information to obtain trustworthy data, nor the format of the data produced by the publishers.
3. General Considerations
This section defines several terms and explains some key concepts used in this specification.
3.1. Time Zone
A time zone is a description of the past and predicted future timekeeping practices of a collection of clocks that are intended to agree.
Note that the term "time zone" does not have the common meaning of a region of the world at a specific UTC offset, possibly modified by daylight saving time. For example, the "Central European Time" zone can correspond to several time zones "Europe/Berlin", "Europe/Paris", etc., because subregions have kept time differently in the past.
3.2. Time Zone Data
Time zone data is data that defines a single time zone, including an identifier, UTC offset values, DST rules, and other information such as time zone abbreviations.
3.3. Time Zone Metadata
Time zone metadata is data that describes additional properties of a time zone that is not itself included in the time zone data. This can include such things as the publisher name, version identifier, aliases, and localized names (see below).
3.4. Time Zone Data Server
A time zone data server is a server implementing the Time Zone Data Distribution Service Protocol defined by this specification.
3.5. Observance
A time zone with varying rules for the UTC offset will have adjacent periods of time that use different UTC offsets. Each period of time with a constant UTC offset is called an observance.
3.6. Time Zone Identifiers
Time zone identifiers are unique names associated with each time zone, as defined by publishers. The iCalendar [RFC5545] specification has a "TZID" property and parameter whose value is set to the corresponding time zone identifier and used to identify time zone data and relate time zones to start and end dates in events, etc. This specification does not define what format of time zone identifiers should be used. It is possible that time zone identifiers from different publishers overlap, and there might be a need for a provider to distinguish those with some form of "namespace" prefix identifying the publisher. However, development of a standard (global) naming scheme for time zone identifiers is out of scope for this specification.
3.7. Time Zone Aliases
Time zone aliases map a name onto a time zone identifier. For example, "US/Eastern" is usually mapped on to "America/New_York". Time zone aliases are typically used interchangeably with time zone identifiers when presenting information to users.
A time zone data distribution service needs to maintain time zone alias mapping information and expose that data to clients as well as allow clients to query for time zone data using aliases. When returning time zone data to a client, the server returns the data with an identifier matching the query, but it can include one or more additional identifiers in the data to provide a hint to the client that alternative identifiers are available. For example, a query for "US/Eastern" could include additional identifiers for "America/New_York" or "America/Montreal".
The set of aliases may vary depending on whether time zone data is truncated (see Section 3.9). For example, a client located in the US state of Michigan may see "US/Eastern" as an alias for "America/Detroit", whereas a client in the US state of New Jersey may see it as an alias for "America/New_York", and all three names may be aliases if time zones are truncated to post-2013 data.
3.8. Time Zone Localized Names
Localized names are names for time zones that can be presented to a user in their own language. Each time zone may have one or more localized names associated with it. Names would typically be unique in their own locale as they might be presented to the user in a list. Localized names are distinct from abbreviations commonly used for UTC offsets within a time zone. For example, the time zone "America/New_York" may have the localized name "Nueva York" in a Spanish locale, as distinct from the abbreviations "EST" and "EDT", which may or may not have their own localizations.
A time zone data distribution service might need to maintain localized name information, for one or more chosen languages, as well as allow clients to query for time zone data using localized names.
3.9. Truncating Time Zones
Time zone data can contain information about past and future UTC offsets that may not be relevant for a particular server's intended clients. For example, calendaring and scheduling clients are likely most concerned with time zone data that covers a period for one or two years in the past on into the future, as users typically create new events only for the present and future. Similarly, time zone data might contain a large amount of "future" information about transitions occurring many decades into the future. Again, clients might be concerned only with a smaller range into the future, and data past that point might be unnecessary.
To avoid having to send unnecessary data, servers can choose to truncate time zone data to a range determined by start- and end-point date-time values, and to provide only offsets and rules between those points. If such truncation is done, the server MUST include the ranges it is using in the "capabilities" action response (see Section 6.1), so that clients can take appropriate action if they need time zone data for times outside of those ranges.
The truncation points at the start and end of a range are always a UTC date-time value, with the start point being "inclusive" to the overall range, and the end point being "exclusive" to the overall range (i.e., the end value is just past the end of the last valid value in the range). A server will advertise a truncation range for the truncated data it can supply or will provide an indicator that it can truncate at any start or end point to produce arbitrary ranges. In addition, the server can advertise that it supplies untruncated data -- that is, data that covers the full range of times available from the source publisher. In the absence of any indication of truncated data available on the server, the server will supply only untruncated data.
When truncating the start of a "VTIMEZONE" component, the server MUST include exactly one "STANDARD" or "DAYLIGHT" subcomponent with a "DTSTART" property value that matches the start point of the truncation range, and appropriate "TZOFFSETFROM" and "TZOFFSETTO" properties to indicate the correct offset in effect right before and after the start point of the truncation range. This subcomponent, which is the first observance defined by the time zone data, represents the earliest valid date-time covered by the time zone data in the truncated "VTIMEZONE" component.
When truncating the end of a "VTIMEZONE" component, the server MUST include a "TZUNTIL" iCalendar property (Section 7.1) in the "VTIMEZONE" component to indicate the end point of the truncation range.
3.10. Time Zone Versions
Time zone data changes over time, and it is important for consumers of that data to stay up to date with the latest versions. As a result, it is useful to identify individual time zones with a specific version number or version identifier as supplied by the time zone data publisher. There are two common models that time zone data publishers might use to publish updates to time zone data:
a. with the "monolithic" model, the data for all time zones is
published in one go, with a single version number or identifier applied to the entire data set. For example, a publisher producing data several times a year might use version identifiers "2015a", "2015b", etc.
b. with the "incremental" model, each time zone has its own version
identifier, so that each time zone can be independently updated without impacting any others. For example, if the initial data has version "A.1" for time zone "A", and "B.1" for time zone "B", and then time zone "B" changes; when the data is next published, time zone "A" will still have version "A.1", but time zone "B" will now have "B.2".
A time zone data distribution service needs to ensure that the version identifiers used by the time zone data publisher are available to any client, along with the actual publisher name on a per-time-zone basis. This allows clients to compare publisher/version details on any server with existing locally cached client data, and only fetch those time zones that have actually changed (see Section 4.2.2 for more details on how clients synchronize data from the server).
4. Time Zone Data Distribution Service Protocol
4.1. Server Protocol
The time zone data distribution service protocol uses HTTP [RFC7230] for query and delivery of time zone data, metadata, and leap-second information. The interactions with the HTTP server can be broken down into a set of "actions" that define the overall function being requested (see Section 5). Each action targets a specific HTTP resource using the GET method, with various request-URI parameters altering the behavior as needed.
The HTTP resources used for requests will be identified via URI templates [RFC6570]. The overall time zone data distribution service has a "context path" request-URI template defined as "{/service-prefix}". This "root" prefix is discovered by the client as per
Section 4.2.1. Request-URIs that target time zone data directly use the prefix template "{/service-prefix,data-prefix}". The second component of the prefix template can be used to introduce additional path segments in the request-URI to allow for alternative ways to "partition" the time zone data. For example, time zone data might be partitioned by publisher release dates or version identifiers. This specification does not define any partitions; that is left for future extensions. When the "data-prefix" variable is empty, the server is expected to return the current version of time zone data it has for all publishers it supports.
All URI template variable values, and URI request parameters that contain text values, MUST be encoded using the UTF-8 [RFC3629] character set. All responses MUST return data using the UTF-8 [RFC3629] character set. It is important to note that any "/" characters, which are frequently found in time zone identifiers, are percent-encoded when used in the value of a path segment expansion variable in a URI template (as per Section 3.2.6 of [RFC6570]). Thus, the time zone identifier "America/New_York" would appear as "America%2FNew_York" when used as the value for the "{/tzid}" URI template variable defined later in this specification.
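To illustrate that encoding rule concretely, here is a small Python sketch (the context path is a placeholder) that expands a request-URI for an identifier containing a "/":

from urllib.parse import quote

context_path = '/servlet/timezone'        # discovered context path (placeholder)
tzid = 'America/New_York'

# Percent-encode the identifier so the "/" does not create an extra path segment
encoded = quote(tzid, safe='')
print(encoded)                             # America%2FNew_York

# Expansion of "{/service-prefix,data-prefix}/zones{/tzid}" with an empty data-prefix
request_uri = '%s/zones/%s' % (context_path, encoded)
print(request_uri)                         # /servlet/timezone/zones/America%2FNew_York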
The server provides time zone metadata in the form of a JSON [RFC7159] object. Clients can directly request the time zone metadata or issue queries for subsets of metadata that match specific criteria.
Security and privacy considerations for this protocol are discussed in detail in Sections 8 and 9, respectively.
4.1.1. Time Zone Queries
Time zone identifiers, aliases, or localized names can be used to query for time zone data or metadata. This will be more explicitly defined below for each action. In general, however, if a "tzid" URI template variable is used, then the value may be an identifier or an alias. When the "pattern" URI query parameter is used, it may be an identifier, an alias, or a localized name.
4.1.2. Time Zone Formats
The default media type [RFC2046] format for returning time zone data is the iCalendar [RFC5545] data format. In addition, the iCalendar-in-XML [RFC6321] and iCalendar-in-JSON [RFC7265] representations are available. Clients use the HTTP Accept header field (see Section 5.3.2 of [RFC7231]) to indicate their preference for the returned data format. Servers indicate the available formats that they support via the "capabilities" action response (Section 5.1).
4.1.3. Time Zone Localization
As per Section 3.8, time zone data can support localized names. Clients use the HTTP Accept-Language header field (see Section 5.3.5 of [RFC7231]) to indicate their preference for the language used for localized names in the response data.
4.1.4. Conditional Time Zone Requests
When time zone data or metadata changes, it needs to be distributed in a timely manner because changes to local time offsets might occur within a few days of the publication of the time zone data changes. Typically, the number of time zones that change is small, whilst the overall number of time zones can be large. Thus, when a client is using more than a few time zones, it is more efficient for the client to be able to download only those time zones that have changed (an incremental update).
Clients initially request a full list of time zones from the server using a "list" action request (see Section 5.2). The response to that request includes two items the client caches for use with subsequent "conditional" (incremental update) requests:
- An opaque synchronization token in the "synctoken" JSON member. This token changes whenever there is a change to any metadata associated with one or more time zones (where the metadata is the information reported in the "list" action response for each time zone).
- The HTTP ETag header field value for each time zone returned in the response. The ETag header field value is returned in the "etag" JSON member, and it corresponds to the ETag header field value that would be returned when executing a "get" action request (see Section 5.3) against the corresponding time zone data resource.
For subsequent updates to cached data, clients can use the following procedure:
a. Send a "list" action request with a "changedsince" URI query
parameter with its value set to the last opaque synchronization token returned by the server. The server will return time zone metadata for only those time zones that have changed since the last request.
b. The client will cache the new opaque synchronization token
returned in the response for the next incremental update, along with the returned time zone metadata information.
c. The client will check each time zone metadata to see if the
"etag" value is different from that of any cached time zone data it has.
d. The client will use a "get" action request to update any cached
time zone data for those time zones whose ETag header field value has changed.
Note that time zone metadata will always change when the corresponding time zone data changes. However, the converse is not true: it is possible for some piece of the time zone metadata to change without the corresponding time zone data changing. e.g., for the case of a "monolithic" publisher (see Section 3.10), the version identifier in every time zone metadata element will change with each new published revision; however, only a small subset of time zone data will actually change.
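A rough sketch of that incremental update procedure in Python is shown below; it assumes the third-party requests library, a previously discovered context path, and local caches keyed by time zone identifier (the base URL is a placeholder, and the per-zone "tzid" member name is taken from the list response definition in Section 6.2):

import requests
from urllib.parse import quote

BASE = 'https://tz.example.com/servlet/timezone'  # placeholder; use the discovered context path

def incremental_sync(synctoken, etag_cache, data_cache):
    # Step a: "list" action limited to zones changed since the cached token
    resp = requests.get(BASE + '/zones', params={'changedsince': synctoken})
    resp.raise_for_status()
    listing = resp.json()

    for tz in listing['timezones']:
        tzid, etag = tz['tzid'], tz['etag']
        # Steps c and d: re-fetch data only when the advertised ETag differs
        if etag_cache.get(tzid) != etag:
            data = requests.get(BASE + '/zones/' + quote(tzid, safe=''),
                                headers={'Accept': 'text/calendar'})
            data.raise_for_status()
            data_cache[tzid] = data.text
            etag_cache[tzid] = data.headers['ETag']

    # Step b: remember the new opaque token for the next run
    return listing['synctoken']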
If a client needs data for only one or a small set of time zones (e.g., a clock in a fixed location), then it can use a conditional HTTP request to determine if the time zone data has changed and retrieve the new data. The full details of HTTP conditional requests are described in [RFC7232]; what follows is a brief summary of what a client typically does.
a. When the client retrieves the time zone data from the server
using a "get" action (see Section 5.3), the server will include an HTTP ETag header field in the response.
b. The client will store the value of that header field along with
the request-URI used for the request.
c. When the client wants to check for an update, it issues another
"get" action HTTP request on the original request-URI, but this time it includes an If-None-Match HTTP request header field, with a value set to the ETag header field value from the previous response. If the data for the time zone has not changed, the server will return a 304 (Not Modified) HTTP response. If the data has changed, the server will return a normal HTTP success response that will include the changed data, as well as a new value for the ETag header field.
Clients SHOULD poll for changes, using an appropriate conditional request, at least once a day. A server acting as a secondary provider, caching time zone data from another server, SHOULD poll for changes once per hour. See Section 8 on expected client and server behavior regarding high request rates.
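For a client tracking just one or a few zones, the conditional request described above boils down to something like this sketch (again the third-party requests library, with a placeholder base URL):

import requests
from urllib.parse import quote

BASE = 'https://tz.example.com/servlet/timezone'  # placeholder

def refresh_one(tzid, cached_etag, cached_data):
    headers = {'Accept': 'text/calendar'}
    if cached_etag:
        headers['If-None-Match'] = cached_etag    # step c: conditional request
    resp = requests.get(BASE + '/zones/' + quote(tzid, safe=''), headers=headers)
    if resp.status_code == 304:                   # not modified: keep the cached copy
        return cached_etag, cached_data
    resp.raise_for_status()
    return resp.headers['ETag'], resp.text        # steps a and b: new ETag and data

A client following the polling guidance above would call this at most about once a day per zone.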
4.1.5. Expanded Time Zone Data
Determining time zone offsets at a particular point in time is often a complicated process, as the rules for daylight saving time can be complex. To help with this, the time zone data distribution service provides an action that allows clients to request the server to expand a time zone into a set of "observances" over a fixed period of time (see Section 5.4). Each of these observances describes a UTC onset time and UTC offsets for the prior time and the observance time. Together, these provide a quick way for "thin" clients to determine an appropriate UTC offset for an arbitrary date without having to do full time zone expansion themselves.
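A thin client might consume the expanded data roughly as in the sketch below. The base URL is a placeholder, and the observance member names ("observances", "onset", "utc-offset-to") are assumptions based on the expand response definition in Section 6.3, so they should be checked against that section:

import requests
from bisect import bisect_right
from urllib.parse import quote

BASE = 'https://tz.example.com/servlet/timezone'  # placeholder

def utc_offset_at(tzid, when_utc, start, end):
    # Ask the server to expand the zone into observances over [start, end)
    url = BASE + '/zones/' + quote(tzid, safe='') + '/observances'
    resp = requests.get(url, params={'start': start, 'end': end})
    resp.raise_for_status()
    observances = resp.json()['observances']      # member names assumed; see Section 6.3

    onsets = [o['onset'] for o in observances]    # UTC onset date-times, ascending
    # Lexicographic comparison is safe while both values use the same "Z"-suffixed format
    i = bisect_right(onsets, when_utc) - 1
    return observances[i]['utc-offset-to']        # offset in effect from that onset onward

print(utc_offset_at('America/New_York',
                    '2016-07-01T12:00:00Z',
                    '2016-01-01T00:00:00Z',
                    '2017-01-01T00:00:00Z'))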
4.1.6. Server Requirements
To enable a simple client implementation, servers SHOULD ensure that they provide or cache data for all commonly used time zones, from various publishers. That allows client implementations to configure a single server to get all time zone data. In turn, any server can refresh any of the data from any other server -- though the root servers may provide the most up-to-date copy of the data.
4.1.7. Error Responses
When an HTTP error response is returned to the client, the server SHOULD return a JSON "problem details" object in the response body, as per [RFC7807]. Every JSON "problem details" object MUST include a "type" member with a URI value matching the applicable error code (defined for each action in Section 5).
4.1.8. Extensions
This protocol is designed to be extensible through a standards-based registration mechanism (see Section 10). It is anticipated that other useful time zone actions will be added in the future (e.g., mapping a geographical location to time zone identifiers, getting change history for time zones), and so, servers MUST return a description of their capabilities. This will allow clients to determine if new features have been installed and, if not, fall back on earlier features or disable some client capabilities.
4.2. Client Guidelines
4.2.1. Discovery
Client implementations need to either know where the time zone data distribution service is located or discover it through some mechanism. To use a time zone data distribution service, a client needs a Fully Qualified Domain Name (FQDN), port, and HTTP request-URI path. The request-URI path found via discovery is the "context path" for the service itself. The "context path" is used as the value of the "service-prefix" URI template variable when executing actions (see Section 5).
The following subsections describe two methods of service discovery using DNS SRV records [RFC2782] and an HTTP "well-known" [RFC5785] resource. However, alternative mechanisms could also be used (e.g., a DHCP server option [RFC2131]).
4.2.1.1. SRV Service Labels for the Time Zone Data Distribution Service
[RFC2782] defines a DNS-based service discovery protocol that has been widely adopted as a means of locating particular services within a local area network and beyond, using SRV RR records. This can be used to discover a service's FQDN and port.
This specification adds two service types for use with SRV records:
timezone: Identifies a time zone data distribution server that uses HTTP without Transport Layer Security ([RFC2818]).
timezones: Identifies a time zone data distribution server that uses HTTP with Transport Layer Security ([RFC2818]).
Clients MUST honor "TTL", "Priority", and "Weight" values in the SRV records, as described by [RFC2782].
Example:
service record for server without Transport Layer Security.
_timezone._tcp SRV 0 1 80 tz.example.com.
Example:
service record for server with transport layer security.
_timezones._tcp SRV 0 1 443 tz.example.com.
4.2.1.2. TXT Records for a Time Zone Data Distribution Service
When SRV RRs are used to advertise a time zone data distribution service, it is also convenient to be able to specify a "context path" in the DNS to be retrieved at the same time. To enable that, this specification uses a TXT RR that follows the syntax defined in Section 6 of [RFC6763] and defines a "path" key for use in that record. The value of the key MUST be the actual "context path" to the corresponding service on the server.
A site might provide TXT records in addition to SRV records for each service. When present, clients MUST use the "path" value as the "context path" for the service in HTTP requests. When not present, clients use the ".well-known" URI approach described in Section 4.2.1.3.
As per Section 8, the server MAY require authentication when a client tries to access the path URI specified by the TXT RR (i.e., the server would return a 401 status response to the unauthenticated request from the client, then return a redirect response after a successful authentication by the client).
Example:
text record for service with Transport Layer Security.
_timezones._tcp TXT path=/timezones
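Putting the two record types together, discovery might look like the following sketch. It assumes the third-party dnspython package, covers only the TLS service label, and simply sorts SRV records by priority and weight rather than implementing full weighted selection:

import dns.resolver   # third-party "dnspython" package (assumed available)

def discover(domain):
    srv = dns.resolver.resolve('_timezones._tcp.' + domain, 'SRV')
    record = sorted(srv, key=lambda r: (r.priority, -r.weight))[0]
    host = str(record.target).rstrip('.')
    port = record.port

    context_path = None
    try:
        txt = dns.resolver.resolve('_timezones._tcp.' + domain, 'TXT')
        for rr in txt:
            for chunk in rr.strings:
                text = chunk.decode('ascii')
                if text.startswith('path='):
                    context_path = text[len('path='):]
    except dns.resolver.NoAnswer:
        pass   # no TXT record: fall back to the ".well-known" URI described below

    return host, port, context_path

print(discover('example.com'))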
4.2.1.3. Well-Known URI for a Time Zone Data Distribution Service
A "well-known" URI [RFC5785] is registered by this specification for the Time Zone Data Distribution service, "timezone" (see Section 10). This URI points to a resource that the client can use as the initial "context path" for the service they are trying to connect to. The server MUST redirect HTTP requests for that resource to the actual "context path" using one of the available mechanisms provided by HTTP (e.g., using an appropriate 3xx status response). Clients MUST handle HTTP redirects on the ".well-known" URI, taking into account security restrictions on redirects described in Section 8. Servers MUST NOT locate the actual time zone data distribution service endpoint at the ".well-known" URI as per Section 1.1 of [RFC5785]. The "well-known" URI MUST be present on the server, even when a TXT RR (Section 4.2.1.2) is used in the DNS to specify a "context path".
Servers SHOULD set an appropriate Cache-Control header field value (as per Section 5.2 of [RFC7234]) in the redirect response to ensure caching occurs as needed, or as required by the type of response generated. For example, if it is anticipated that the location of the redirect might change over time, then an appropriate "max-age" value would be used.
As per Section 8, the server MAY require authentication when a client tries to access the ".well-known" URI (i.e., the server would return a 401 status response to the unauthenticated request from the client, then return the redirect response after a successful authentication by the client).
4.2.1.3.1. Example: Well-Known URI Redirects to Actual Context Path
A time zone data distribution server has a "context path" that is "/servlet/timezone". The client will use "/.well-known/timezone" as the path for the service after it has first found the FQDN and port number via an SRV lookup or via manual entry of information by the user. When the client makes its initial HTTP request against "/.well-known/timezone", the server would issue an HTTP 301 redirect response with a Location response header field using the path "/servlet/timezone". The client would then "follow" this redirect to the new resource and continue making HTTP requests there. The client would also cache the redirect information, subject to any Cache- Control directive, for use in subsequent requests.
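A client-side sketch of that bootstrap step (requests library, placeholder host); the initial request is issued without following redirects so that the advertised context path can be recorded and cached:

import requests

def find_context_path(host, port=443):
    url = 'https://%s:%d/.well-known/timezone' % (host, port)
    resp = requests.get(url, allow_redirects=False)
    if resp.status_code in (301, 302, 303, 307, 308):
        return resp.headers['Location']   # e.g. "/servlet/timezone"; cache per Cache-Control
    resp.raise_for_status()
    raise RuntimeError('expected a redirect from the well-known URI')

print(find_context_path('tz.example.com'))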
4.2.2. Synchronization of Time Zones
This section discusses possible client synchronization strategies using the various protocol elements provided by the server for that purpose.
4.2.2.1. Initial Synchronization of All Time Zones
When a secondary service or a client wishing to cache all time zone data first starts, or wishes to do a full refresh, it synchronizes with another server by issuing a "list" action to retrieve all the time zone metadata. The client preserves the returned opaque token for subsequent use (see "synctoken" in Section 5.2.1). The client stores the metadata for each time zone returned in the response. Time zone data for each corresponding time zone can then be fetched and stored locally. In addition, a mapping of aliases to time zones can be built from the metadata. A typical "list" action response size is about 50-100 KB of "pretty printed" JSON data, for a service using the IANA time zone database [RFC6557], as of the time of publication of this specification.
4.2.2.2. Subsequent Synchronization of All Time Zones
A secondary service or a client caching all time zones needs to periodically synchronize with a server. To do so, it issues a "list" action with the "changedsince" URI query parameter set to the value of the opaque token returned by the last synchronization. The client again preserves the returned opaque token for subsequent use. The client updates its stored time zone metadata using the new values returned in the response, which contains just the time zone metadata for those time zones changed since the last synchronization. In addition, it compares the "etag" value in each time zone metadata to the ETag header field value for the corresponding time zone data resource it has previously cached; if they are different, it fetches the new time zone data. Note that if the client presents the server with a "changedsince" value that the server does not support, all time zone data is returned, as it would for the case where the request did not include a "changedsince" value.
Publishers should take into account the fact that the "outright" deletion of time zone names will cause problems to simple clients, and so aliasing a deleted time zone identifier to a suitable alternate one is preferable.
4.2.2.3. Synchronization with Preexisting Time Zone Data
A client might be pre-provisioned with time zone data from a source other than the time zone data distribution service it is configured to use. In such cases, the client might want to minimize the amount of time zone data it synchronizes by doing an initial "list" action to retrieve all the time zone metadata, but then only fetch time zone data for those time zones that do not match the publisher and version details for the pre-provisioned data.
5. Actions
Servers MUST support the following actions. The information below shows details about each action: the request-URI the client targets (in the form of a URI template [RFC6570]), a description, the set of allowed query parameters, the nature of the response, and a set of possible error codes for the response (see Section 4.1.7).
For any error not covered by the specific error codes defined below, the "urn:ietf:params:tzdist:error:invalid-action" error code is returned to the client in the JSON "problem details" object.
The examples in the following subsections presume that the timezone context path has been discovered to be "/servlet/timezone" (as in the example in Section 4.2.1.3.1).
5.1. "capabilities" Action
Name: capabilities
Request-URI Template:
{/service-prefix}/capabilities
Description: This action returns the capabilities of the server, allowing clients to determine if a specific feature has been deployed and/or enabled.
Parameters: None
Response: A JSON object containing a "version" member, an "info" member, and an "actions" member; see Section 6.1.
Possible Error Codes: No specific code.
5.1.1. Example: get capabilities
>> Request << GET /servlet/timezone/capabilities HTTP/1.1 Host: tz.example.com >> Response << HTTP/1.1 200 OK Date: Wed, 4 Jun 2008 09:32:12 GMT Content-Type: application/json; charset="utf-8" Content-Length: xxxx
{
  "version": 1,
  "info": {
    "primary-source": "Olson:2011m",
    "formats": [
      "text/calendar",
      "application/calendar+xml",
      "application/calendar+json"
    ],
    "truncated": {
      "any": false,
      "ranges": [
        { "start": "1970-01-01T00:00:00Z", "end": "*" },
        { "start": "2010-01-01T00:00:00Z", "end": "2020-01-01T00:00:00Z" }
      ],
      "untruncated": true
    },
    "provider-details": "",
    "contacts": ["mailto:tzs@example.org"]
  },
  "actions": [
    {
      "name": "capabilities",
      "uri-template": "/servlet/timezone/capabilities",
      "parameters": []
    },
    {
      "name": "list",
      "uri-template": "/servlet/timezone/zones{?changedsince}",
      "parameters": [
        { "name": "changedsince", "required": false, "multi": false }
      ]
    },
    {
      "name": "get",
      "uri-template": "/servlet/timezone/zones{/tzid}{?start,end}",
      "parameters": [
        { "name": "start", "required": false, "multi": false },
        { "name": "end", "required": false, "multi": false }
      ]
    },
    {
      "name": "expand",
      "uri-template": "/servlet/timezone/zones{/tzid}/observances{?start,end}",
      "parameters": [
        { "name": "start", "required": true, "multi": false },
        { "name": "end", "required": true, "multi": false }
      ]
    },
    {
      "name": "find",
      "uri-template": "/servlet/timezone/zones{?pattern}",
      "parameters": [
        { "name": "pattern", "required": true, "multi": false }
      ]
    },
    {
      "name": "leapseconds",
      "uri-template": "/servlet/timezone/leapseconds",
      "parameters": []
    }
  ]
}
5.2. "list" Action
Name: list
Request-URI Template:
{/service-prefix,data-prefix}/zones{?changedsince}

Description: This action lists all time zone identifiers in summary format, with publisher, version, aliases, and optional localized data. In addition, it returns an opaque synchronization token for the entire response. If the "changedsince" URI query parameter is present, its value MUST correspond to a previously returned synchronization token value. When "changedsince" is used, the server MUST return only those time zones that have changed since the specified synchronization token. If the "changedsince" value is not supported by the server, the server MUST return all time zones, treating the request as if it had no "changedsince".
Parameters:
changedsince
OPTIONAL, and MUST NOT occur more than once.
Response: A JSON object containing a "synctoken" member and a "timezones" member; see Section 6.2.
Possible Error Codes:
urn:ietf:params:tzdist:error:invalid-changedsince
The "changedsince" URI query parameter appears more than once.
5.2.1. Example: List Time Zone Identifiers
In this example the client requests the full set of time zone identifiers.
>> Request <<

GET /servlet/timezone/zones

>> Response <<

...
      ...other time zones...
  ]
}
5.3. "get" Action
Name: get
Request-URI Template:
{/service-prefix,data-prefix}/zones{/tzid}{?start,end}
The "tzid" variable value is REQUIRED in order to distinguish this action from the "list" action.
Description: This action returns a time zone. The response MUST contain an ETag response header field indicating the current value of the strong entity tag of the time zone resource.
In the absence of any Accept HTTP request header field, the server MUST return time zone data with the "text/calendar" media type.
If the "tzid" variable value is actually a time zone alias, the server will return the matching time zone data with the alias as the identifier in the time zone data. The server MAY include one or more "TZID-ALIAS-OF" properties (see Section 7.2) in the time zone data to indicate additional identifiers that have the matching time zone identifier as an alias.
Parameters:
start=<date-time>
OPTIONAL, and MUST NOT occur more than once. Specifies the inclusive UTC date-time value at which the returned time zone data is truncated at its start.
end=<date-time>
OPTIONAL, and MUST NOT occur more than once. Specifies the exclusive UTC date-time value at which the returned time zone data is truncated at its end.
Response: A document containing all the requested time zone data in the format specified.
Possible Error Codes:
urn:ietf:params:tzdist:error:tzid-not-found
No time zone associated with the specified "tzid" path segment value was found.
urn:ietf:params:tzdist:error:invalid-format
The Accept request header field supplied by the client did not contain a media type for time zone data supported by the server.
urn:ietf:params:tzdist:error:invalid-start
The "start" URI query parameter has an incorrect value, or appears more than once, or does not match one of the fixed truncation range start values advertised in the "capabilities" action response.
urn:ietf:params:tzdist:error:invalid-end
The "end" URI query parameter has an incorrect value, or appears more than once, or has a value less than or equal to the "start" URI query parameter, or does not match one of the fixed truncation range end values advertised in the "capabilities" action response.
5.3.1. Example: Get Time Zone Data
In this example, the client requests that the time zone with a specific time zone identifier be returned.
>> Request <<

GET /servlet/timezone/zones/America%2FNew_York

>> Response <<

...
END:VTIMEZONE
END:VCALENDAR
5.3.2. Example: Conditional Get Time Zone Data
In this example the client requests that the time zone with a specific time zone identifier be returned, but uses an If-None-Match header field in the request, set to the value of a previously returned ETag header field, or the value of the "etag" member in a JSON "timezone" object returned from a "list" action response. In this example, the data on the server has not changed, so a 304 response is returned.
>> Request <<

GET /servlet/timezone/zones/America%2FNew_York HTTP/1.1
Host: tz.example.com
Accept:text/calendar
If-None-Match: "123456789-000-111"

>> Response <<

HTTP/1.1 304 Not Modified
Date: Wed, 4 Jun 2008 09:32:12 GMT
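A client can perform the conditional fetch above with any HTTP library. The following non-normative Java sketch uses the JDK's java.net.http client; the host, path, and ETag value are copied from the example and are illustrative only.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public final class ConditionalGetExample {

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // Re-validate a cached copy of America/New_York using the ETag
        // value previously returned by the server.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://tz.example.com/servlet/timezone/zones/America%2FNew_York"))
                .header("Accept", "text/calendar")
                .header("If-None-Match", "\"123456789-000-111\"")
                .GET()
                .build();

        HttpResponse<String> response =
                http.send(request, HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() == 304) {
            // Cached copy is still current; nothing to update.
        } else if (response.statusCode() == 200) {
            // New data: store the body and remember the new ETag for next time.
            String newEtag = response.headers().firstValue("ETag").orElse(null);
            String vtimezone = response.body();
            System.out.println(newEtag + "\n" + vtimezone);
        }
    }
}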
5.3.3. Example: Get Time Zone Data Using a Time Zone Alias
In this example, the client requests that the time zone with an aliased time zone identifier be returned, and the server returns the time zone data with that identifier and two aliases.
>> Request <<

GET /servlet/timezone/zones/US%2FEastern

>> Response <<

...
TZID:US/Eastern
TZID-ALIAS-OF:America/New_York
TZID-ALIAS-OF:America/Montreal
...
END:VTIMEZONE
END:VCALENDAR
5.3.4. Example: Get Truncated Time Zone Data
Assume the server advertises a "truncated" object in its "capabilities" response that appears as:
"truncated": { "any": false, "ranges": [ {"start": "1970-01-01T00:00:00Z", "end": "*"}, {"start":"2010-01-01T00:00:00Z", "end":"2020-01-01T00:00:00Z"} ], "untruncated": false }
In this example, the client requests that the time zone with a specific time zone identifier truncated at one of the ranges specified by the server be returned. Note the presence of a "STANDARD" component that matches the start point of the truncation range (converted to the local time for the UTC offset in effect at the matching UTC time). Also, note the presence of the "TZUNTIL" (Section 7.1) iCalendar property in the "VTIMEZONE" component, indicating the upper bound on the validity period of the time zone data.
>> Request <<

GET /servlet/timezone/zones/America%2FNew_York
    ?start=2010-01-01T00:00:00Z&end=2020-01-01T00:00:00Z

>> Response <<

...
TZUNTIL:20200101T000000Z
BEGIN:STANDARD
DTSTART:20101231T190000
TZNAME:EST
TZOFFSETFROM:-0500
TZOFFSETTO:-0500
END:STANDARD
...
END:VTIMEZONE
END:VCALENDAR
5.3.5. Example: Request for a Nonexistent Time Zone
In this example, the client requests that the time zone with a specific time zone identifier be returned. As it turns out, no time zone exists with that identifier.
>> Request <<

GET /servlet/timezone/zones/America%2FPittsburgh HTTP/1.1
Host: tz.example.com
Accept:application/calendar+json

>> Response <<

HTTP/1.1 404 Not Found
Date: Wed, 4 Jun 2008 09:32:12 GMT
Content-Type: application/problem+json; charset="utf-8"
Content-Language: en
Content-Length: xxxx

{
  "type": "urn:ietf:params:tzdist:error:tzid-not-found",
  "title": "Time zone identifier was not found on this server",
  "status": 404
}
5.4. "expand" Action
Name: expand
Request-URI Template:
{/service-prefix,data-prefix}/zones{/tzid}/observances{?start,end}
The "tzid" variable value is REQUIRED.
Description: This action expands the specified time zone into a list of onset start date/time values (in UTC) and UTC offsets. The response MUST contain an ETag response header field indicating the current value of the strong entity tag of the time zone being expanded.
Parameters:
start=<date-time>

REQUIRED, and MUST occur only once. Specifies the inclusive UTC date-time value for the start of the period of interest.

end=<date-time>

REQUIRED, and MUST occur only once. Specifies the exclusive UTC date-time value for the end of the period of interest. Note that this is the exclusive end value, i.e., it represents the date just after the range of interest. For example, if a client wants the expanded data just for the year 2014, it would use a start value of "2014-01-01T00:00:00Z" and an end value of "2015-01-01T00:00:00Z". An error occurs if the end value is less than or equal to the start value.

Response: A JSON object containing a "tzid" member and an "observances" member; see Section 6.3. If the time zone being expanded is not fully defined over the requested time range (e.g., because of truncation), then the server MUST include "start" and/or "end" members in the JSON response to indicate the actual start and end points for the observances being returned. The server MUST include an expanded observance representing the time zone information in effect at the start of the returned observance period.

Possible Error Codes:
urn:ietf:params:tzdist:error:tzid-not-found
No time zone associated with the specified "tzid" path segment value was found.
urn:ietf:params:tzdist:error:invalid-start
The "start" URI query parameter has an incorrect value, or appears more than once, or is missing, or has a value outside any fixed truncation ranges advertised in the "capabilities" action response.
urn:ietf:params:tzdist:error:invalid-end
The "end" URI query parameter has an incorrect value, or appears more than once, or has a value less than or equal to the "start" URI query parameter, or has a value outside any fixed truncation ranges advertised in the "capabilities" action response.
5.4.1. Example: Expanded JSON Data Format
In this example, the client requests a time zone in the expanded form.
>> Request <<

GET /servlet/timezone/zones/America%2FNew_York/observances
    ?start=2008-01-01T00:00:00Z&end=2009-01-01T00:00:00Z HTTP/1.1
Host: tz.example.com

>> Response <<

HTTP/1.1 200 OK
Date: Mon, 11 Oct 2009 09:32:12 GMT
Content-Type: application/json; charset="utf-8"
Content-Length: xxxx
ETag: "123456789-000-111"

{
  "tzid": "America/New_York",
  "observances": [
    {
      "name": "Standard",
      "onset": "2008-01-01T00:00:00Z",
      "utc-offset-from": -18000,
      "utc-offset-to": -18000
    },
    {
      "name": "Daylight",
      "onset": "2008-03-09T07:00:00Z",
      "utc-offset-from": -18000,
      "utc-offset-to": -14400
    },
    {
      "name": "Standard",
      "onset": "2008-11-02T06:00:00Z",
      "utc-offset-from": -14400,
      "utc-offset-to": -18000
    }
  ]
}
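As a non-normative illustration, the following Java sketch shows how a client might use the observances returned above to determine the UTC offset in effect at a given instant. The Observance record and method names are illustrative only; the data values are taken from the example response, and the list is assumed to be sorted by onset.

import java.time.Instant;
import java.util.List;

public final class ObservanceLookup {

    // One entry from the "observances" array of an "expand" response.
    public record Observance(String name, Instant onset, int utcOffsetTo) { }

    // Returns the UTC offset (in seconds) in effect at "when", assuming the
    // list is sorted by onset and covers the instant of interest.
    public static int offsetAt(List<Observance> observances, Instant when) {
        int offset = observances.get(0).utcOffsetTo();
        for (Observance o : observances) {
            if (!o.onset().isAfter(when)) {
                offset = o.utcOffsetTo();
            } else {
                break;
            }
        }
        return offset;
    }

    public static void main(String[] args) {
        List<Observance> nyc2008 = List.of(
            new Observance("Standard", Instant.parse("2008-01-01T00:00:00Z"), -18000),
            new Observance("Daylight", Instant.parse("2008-03-09T07:00:00Z"), -14400),
            new Observance("Standard", Instant.parse("2008-11-02T06:00:00Z"), -18000));

        // July 2008 falls inside the daylight-saving observance: -14400 seconds.
        System.out.println(offsetAt(nyc2008, Instant.parse("2008-07-01T12:00:00Z")));
    }
}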
5.5. "find" Action
Name: find
Request-URI Template:
{/service-prefix,data-prefix}/zones{?pattern}

Description: This action allows a client to query the time zone data distribution service for a matching identifier, alias, or localized name, using a simple "glob" style pattern match against the names known to the server (with an asterisk (*) as the wildcard character). Pattern-match strings (which have to be percent-encoded and then decoded when used in the URI query parameter) have the following options:

* not present: An exact text match is done, e.g., "xyz"

* first character only: An ends-with text match is done, e.g., "*xyz"

* last character only: A starts-with text match is done, e.g., "xyz*"

* first and last characters only: A substring text match is done, e.g., "*xyz*"

Escaping \ and *: To match 0x2A ("*") and 0x5C ("\") characters in a time zone identifier, those characters have to be "escaped" in the pattern by prepending a single 0x5C ("\") character. For example, a pattern "\*Test\\Time\*Zone\*" is used for an exact match against the time zone identifier "*Test\Time*Zone*". An unescaped "*" character MUST NOT appear in the middle of the string and MUST result in an error. An unescaped "\" character MUST NOT appear anywhere in the string and MUST result in an error.
In addition, when matching:
Underscores: Underscore characters (0x5F) in time zone identifiers MUST be mapped to a single space character (0x20) prior to string comparison in both the pattern and time zone identifiers being matched. This allows time zone identifiers such as "America/New_York" to match a query for "*New York*".

Case mapping: ASCII characters in the range 0x41 ("A") through 0x5A ("Z") MUST be mapped to their lowercase equivalents in both the pattern and time zone identifiers being matched.
Parameters:
pattern=<text>
REQUIRED, and MUST occur only once.
Response: The response has the same format as the "list" action, with one result object per successful match; see Section 6.2.

Possible Error Codes:
urn:ietf:params:tzdist:error:invalid-pattern
The "pattern" URI query parameter has an incorrect value or appears more than once.
5.5.1. Example: find action
In this example, the client asks for data about the time zone "US/Eastern".
>> Request <<

GET /servlet/timezone/zones?pattern=US/Eastern

>> Response <<

...
    },
    {
      "tzid": "America/Detroit",
      "etag": "123456789-999-222",
      "last-modified": "2009-09-17T01:39:34Z",
      "publisher": "Example.com",
      "version": "2015a",
      "aliases": ["US/Eastern"],
      "local-names": [
        { "name": "America/Detroit", "lang": "en_US" }
      ]
    },
    ...
  ]
}
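As a non-normative illustration of the matching rules above (pattern forms, underscore mapping, and case mapping), the following Java sketch shows one possible normalization and glob-matching routine, as a server or a client pre-filtering a cached "list" response might use. It intentionally omits the "\"-escaping rules for brevity, and the class and method names are illustrative only.

public final class FindMatcher {

    // Map underscores to spaces and ASCII upper case to lower case,
    // as required for "find" comparisons.
    static String normalize(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            if (c == '_') {
                out.append(' ');
            } else if (c >= 'A' && c <= 'Z') {
                out.append((char) (c + ('a' - 'A')));
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    // Supports the four documented pattern forms; escaped "*" and "\"
    // handling is omitted for brevity.
    static boolean matches(String pattern, String tzid) {
        String p = normalize(pattern);
        String id = normalize(tzid);
        boolean leading = p.startsWith("*");
        boolean trailing = p.endsWith("*") && p.length() > 1;
        String core = p.substring(leading ? 1 : 0, trailing ? p.length() - 1 : p.length());
        if (leading && trailing) return id.contains(core);
        if (leading) return id.endsWith(core);
        if (trailing) return id.startsWith(core);
        return id.equals(core);
    }

    public static void main(String[] args) {
        // "America/New_York" matches "*New York*" after normalization.
        System.out.println(matches("*New York*", "America/New_York")); // true
    }
}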
5.6. "leapseconds" Action
Name: leapseconds
Request-URI Template:
{/service-prefix,data-prefix}/leapseconds

Description: This action allows a client to query the time zone data distribution service to retrieve the current leap-second information available on the server.

Parameters: None

Response: A JSON object containing an "expires" member, a "publisher" member, a "version" member, and a "leapseconds" member; see Section 6.4. The "expires" member in the JSON response indicates the latest date covered by leap-second information. For example (as in Section 5.6.1), if the "expires" value is set to "2014-06-28" and the latest leap-second change indicated was at "2012-07-01", then the data indicates that there are no leap seconds added (or removed) between those two dates, and information for leap seconds beyond the "expires" date is not yet available.
The "leapseconds" member contains a list of JSON objects each of which contains a "utc-offset" and "onset" member. The "onset" member specifies the date (with the implied time of 00:00:00 UTC) at which the corresponding UTC offset from TAI takes effect. In other words, a leap second is added or removed just prior to time 00:00:00 UTC of the specified onset date. When a leap second is added, the "utc-offset" value will be incremented by one; when a leap second is removed, the "utc-offset" value will be decremented by one.
Possible Error Codes: No specific code.
5.6.1. Example: Get Leap-Second Information
In this example, the client requests the current leap-second information from the server.
>> Request <<

GET /servlet/timezone/leapseconds HTTP/1.1
Host: tz.example.com

>> Response <<

HTTP/1.1 200 OK
Date: Wed, 4 Jun 2008 09:32:12 GMT
Content-Type: application/json; charset="utf-8"
Content-Length: xxxx

{
  "expires": "2015-12-28",
  "publisher": "Example.com",
  "version": "2015d",
  "leapseconds": [
    { "utc-offset": 10, "onset": "1972-01-01" },
    { "utc-offset": 11, "onset": "1972-07-01" },
    ...
    { "utc-offset": 35, "onset": "2012-07-01" },
    { "utc-offset": 36, "onset": "2015-07-01" }
  ]
}
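As a non-normative illustration, the following Java sketch looks up the TAI-UTC offset in effect on a given date from a "leapseconds" array such as the one above. The LeapSecond record is an illustrative placeholder, and, as noted earlier, the result is only meaningful for dates not later than the advertised "expires" value.

import java.time.LocalDate;
import java.util.List;

public final class LeapSecondLookup {

    // One entry from the "leapseconds" array.
    public record LeapSecond(int utcOffset, LocalDate onset) { }

    // Returns the TAI-UTC offset in effect on "date", assuming the list is
    // sorted by onset and the date is not after the advertised "expires" date.
    public static int taiUtcOffsetOn(List<LeapSecond> leapSeconds, LocalDate date) {
        int offset = leapSeconds.get(0).utcOffset();
        for (LeapSecond ls : leapSeconds) {
            if (!ls.onset().isAfter(date)) {
                offset = ls.utcOffset();
            } else {
                break;
            }
        }
        return offset;
    }

    public static void main(String[] args) {
        // Values taken from the example response above (abbreviated).
        List<LeapSecond> table = List.of(
            new LeapSecond(10, LocalDate.parse("1972-01-01")),
            new LeapSecond(11, LocalDate.parse("1972-07-01")),
            new LeapSecond(35, LocalDate.parse("2012-07-01")),
            new LeapSecond(36, LocalDate.parse("2015-07-01")));

        // 2013-01-01 falls after the 2012-07-01 onset, so TAI-UTC is 35 seconds.
        System.out.println(taiUtcOffsetOn(table, LocalDate.parse("2013-01-01")));
    }
}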
6. JSON Definitions
[RFC7159] defines the structure of JSON objects using a set of primitive elements. The structure of JSON objects used by this specification is described by the following set of rules:
OBJECT represents a JSON object, defined in Section 4 of [RFC7159]. "OBJECT" is followed by a parenthesized list of "MEMBER" rule names. If a member rule name is preceded by a "?" (0x3F) character, that member is optional; otherwise, all members are required. If two or more member rule names are present, each separated from the other by a "|" (0x7C) character, then only one of those members MUST be present in the JSON object. JSON object members are unordered, and thus the order used in the rules is not significant.

MEMBER represents a member of a JSON object, defined in Section 4 of [RFC7159]. "MEMBER" is followed by a rule name, the name of the member, a ":", and then the value. A value can be one of "OBJECT", "ARRAY", "NUMBER", "STRING", or "BOOLEAN" rules.

ARRAY represents a JSON array, defined in Section 5 of [RFC7159]. "ARRAY" is followed by a value (one of "OBJECT", "ARRAY", "NUMBER", "STRING", or "BOOLEAN"), indicating the type of items used in the array.

NUMBER represents a JSON number, defined in Section 6 of [RFC7159].

STRING represents a JSON string, defined in Section 7 of [RFC7159].

BOOLEAN represents either of the JSON values "true" or "false", defined in Section 3 of [RFC7159].
; a line starting with a ";" (0x3B) character is a comment.
Note, clients MUST ignore any unexpected JSON members in responses from the server.
6.1. capabilities Action Response
Below are the rules for the JSON document returned for a "capabilities" action request.
; root object
OBJECT (version, info, actions)

; The version number of the protocol supported - MUST be 1
MEMBER version "version" : NUMBER

; object containing service information
; Only one of primary_source or secondary_source MUST be present
MEMBER info "info" : OBJECT ( primary_source | secondary_source, formats, ?truncated, ?provider_details, ?contacts )

; The source of the time zone data provided by a "primary" server
MEMBER primary_source "primary-source" : STRING
; The time zone data server from which data is provided by a ; "secondary" server
MEMBER secondary_source "secondary-source" : STRING
; Array of one or more media types for the time zone data formats ; that the server can return
MEMBER formats "formats" : ARRAY STRING
; Present if the server is providing truncated time zone data. The
; value is an object providing details of the supported truncation
; modes.
MEMBER truncated "truncated" : OBJECT ( any, ?ranges, ?untruncated )
; Indicates whether the server can truncate time zone data at any ; start or end point. When set to "true", any start or end point is ; a valid value for use with the "start" and "end" URI query ; parameters in a "get" action request.
MEMBER any "any" : BOOLEAN
; Indicates which ranges of time the server has truncated data for. ; A value from this list may be used with the "start" and "end" URI ; query parameters in a "get" action request. Not present if "any" ; is set to "true".
MEMBER ranges "ranges" : ARRAY OBJECT (range-start, range-end)
; UTC date-time value (per [RFC3339]) for inclusive start of the ; range, or the single character "*" to indicate a value ; corresponding to the lower bound supplied by the publisher of the ; time zone data
MEMBER range-start "start" : STRING
; UTC date-time value (per [RFC3339]) for exclusive end of the range,
; or the single character "*" to indicate a value corresponding to
; the upper bound supplied by the publisher of the time zone data
MEMBER range-end "end" : STRING
; Indicates whether the server can supply untruncated data. When ; set to "true", indicates that, in addition to truncated data being ; available, the server can return untruncated data if a "get" ; action request is executed without a "start" or "end" URI query ; parameter.
MEMBER untruncated "untruncated" : BOOLEAN
; A URI where human-readable details about the time zone service ; is available
MEMBER provider_details "provider-details" : STRING
; Array of URIs providing contact details for the server ; administrator
MEMBER contacts "contacts" : ARRAY STRING
; Array of actions supported by the server
MEMBER actions "actions" : ARRAY OBJECT ( action_name, action_params )

; Name of the action
MEMBER action_name "name" : STRING

; Array of request-URI query parameters supported by the action
MEMBER action_params "parameters" : ARRAY OBJECT ( param_name, ?param_required, ?param_multi, ?param_values )
; Name of the parameter
MEMBER param_name "name" : STRING
; If true, the parameter has to be present in the request-URI ; default is false
MEMBER param_required "required" : BOOLEAN
; If true, the parameter can occur more than once in the request-URI ; default is false
MEMBER param_multi "multi" : BOOLEAN

; An array that defines the allowed set of values for the parameter
; In the absence of this member, any string value is acceptable
MEMBER param_values "values" : ARRAY STRING
6.2. list/find Action Response
Below are the rules for the JSON document returned for a "list" or "find" action request.
; root object
OBJECT (synctoken, timezones)

; Server-generated opaque token used for synchronizing changes
MEMBER synctoken "synctoken" : STRING

; Array of time zone objects
MEMBER timezones "timezones" : ARRAY OBJECT ( tzid, etag, last_modified, publisher, version, ?aliases, ?local_names )
; Time zone identifier
MEMBER tzid "tzid" : STRING
; Current ETag for the corresponding time zone data resource
MEMBER etag "etag" : STRING
; Date/time when the time zone data was last modified
; UTC date-time value as specified in [RFC3339]
MEMBER last_modified "last-modified" : STRING
; Time zone data publisher
MEMBER publisher "publisher" : STRING
; Current version of the time zone data as defined by the ; publisher
MEMBER version "version" : STRING
; An array that lists the set of time zone aliases available ; for the corresponding time zone
MEMBER aliases "aliases" : ARRAY STRING
; An array that lists the set of localized names available
; for the corresponding time zone
MEMBER local_names "local-names" : ARRAY OBJECT ( lname, lang, ?pref )

; Language tag for the language of the associated name
MEMBER lang "lang" : STRING
; Localized name
MEMBER lname "name" : STRING
; Indicates whether this is the preferred name for the associated ; language default: false
MEMBER pref "pref" : BOOLEAN
6.3. expand Action Response
Below are the rules for the JSON document returned for a "expand" action request.
; root object
OBJECT ( tzid, ?start, ?end, observances )
; Time zone identifier
MEMBER tzid "tzid" : STRING
; The actual inclusive start point for the returned observances
; if different from the value of the "start" URI query parameter
MEMBER start "start" : STRING

; The actual exclusive end point for the returned observances
; if different from the value of the "end" URI query parameter
MEMBER end "end" : STRING

; Array of observance objects
MEMBER observances "observances" : ARRAY OBJECT ( oname, ?olocal_names, onset, utc_offset_from, utc_offset_to )
; Observance name
MEMBER oname "name" : STRING
; Array of localized observance names
MEMBER olocal_names "local-names" : ARRAY STRING
; UTC date-time value (per [RFC3339]) at which the observance takes ; effect
MEMBER onset "onset" : STRING
; The UTC offset in seconds before the start of this observance
MEMBER utc_offset_from "utc-offset-from" : NUMBER

; The UTC offset in seconds at and after the start of this observance
MEMBER utc_offset_to "utc-offset-to" : NUMBER
6.4. leapseconds Action Response
Below are the rules for the JSON document returned for a "leapseconds" action request.
; root object
OBJECT ( expires, publisher, version, leapseconds )
; Last valid date covered by the data in this response ; full-date value as specified in [RFC3339]
MEMBER expires "expires" : STRING
; Leap-second information publisher
MEMBER publisher "publisher" : STRING
; Current version of the leap-second information as defined by the ; publisher
MEMBER version "version" : STRING
; Array of leap-second objects
MEMBER leapseconds "leapseconds" : ARRAY OBJECT ( utc_offset, onset )
; The UTC offset from TAI in seconds in effect at and after the ; specified date
MEMBER utc_offset "utc-offset" : NUMBER
; full-date value (per [RFC3339]) at which the new UTC offset takes
; effect, at T00:00:00Z
MEMBER onset "onset" : STRING
7. New iCalendar Properties
7.1. Time Zone Upper Bound
Property Name: TZUNTIL

Purpose: This property specifies an upper bound for the validity period of data within a "VTIMEZONE" component.

Value Type: DATE-TIME

Property Parameters: IANA and non-standard property parameters can be specified on this property.

Conformance: This property can be specified zero times or one time within "VTIMEZONE" calendar components.

Description: The value MUST be specified in the UTC time format.
Time zone data in a "VTIMEZONE" component might cover only a fixed period of time. The start of such a period is clearly indicated by the earliest observance defined by the "STANDARD" and "DAYLIGHT" subcomponents. However, an upper bound on the validity period of the time zone data cannot be simply derived from the observance with the latest onset time, and [RFC5545] does not define a way to get such an upper bound. This specification introduces the "TZUNTIL" property for that purpose. It specifies an "exclusive" UTC date-time value that indicates the last time at which the time zone data is to be considered valid.
This property is also used by time zone data distribution servers to indicate the truncation range end point of time zone data (as described in Section 3.9).
Format Definition: This property is defined by the following notation in ABNF [RFC5234]:

tzuntil = "TZUNTIL" tzuntilparam ":" date-time CRLF
tzuntilparam = *(";" other-param)
Example: Suppose a time zone based on astronomical observations has well-defined onset times through the year 2025, but the first onset in 2026 is currently known only approximately. In that case, the "TZUNTIL" property could be specified as follows:

TZUNTIL:20260101T000000Z
7.2. Time Zone Identifier Alias Property
Property Name: TZID-ALIAS-OF

Purpose: This property specifies a time zone identifier for which the main time zone identifier is an alias.

Value Type: TEXT

Property Parameters: IANA and non-standard property parameters can be specified on this property.

Conformance: This property can be specified zero or more times within "VTIMEZONE" calendar components.

Description: When the "VTIMEZONE" component uses a time zone identifier alias for the "TZID" property value, the "TZID-ALIAS-OF" property is used to indicate the time zone identifier of the other time zone (see Section 3.7).

Format Definition: This property is defined by the following notation in ABNF [RFC5234]:

tzid-alias-of = "TZID-ALIAS-OF" tzidaliasofparam ":" [tzidprefix] text CRLF
Example: The following is an example of this property:

TZID-ALIAS-OF:America/New_York
8. Security Considerations
Time zone data is critical in determining local or UTC time for devices and in calendaring and scheduling operations. As such, it is vital that a reliable source of time zone data is used. Servers providing a time zone data distribution service MUST support HTTP over Transport Layer Security (TLS) (as defined by [RFC2818] and [RFC5246], with best practices described in [RFC7525]). Servers MAY support a time zone data distribution service over HTTP without TLS. However, secondary servers MUST use TLS to fetch data from a primary server.
Clients SHOULD use Transport Layer Security as defined by [RFC2818], unless they are specifically configured otherwise. Clients that have been configured to use the TLS-based service MUST NOT fall back to using the non-TLS service if the TLS-based service is not available. In addition, clients MUST NOT follow HTTP redirect requests from a TLS service to a non-TLS service. When using TLS, clients MUST verify the identity of the server, using a standard, secure mechanism such as the certificate verification process specified in [RFC6125] or DANE [RFC6698].
A malicious attacker with access to the DNS server data, or able to get spoofed answers cached in a recursive resolver, can potentially cause clients to connect to any server chosen by the attacker. In the absence of a secure DNS option, clients SHOULD check that the target FQDN returned in the SRV record is the same as the original service domain that was queried, or is a sub-domain of the original service domain. In many cases, the client configuration is likely to be handled automatically without any user input; as such, any mismatch between the original service domain and the target FQDN is treated as a failure and the client MUST NOT attempt to connect to the target server. In addition, when Transport Layer Security is being used, the Transport Layer Security certificate SHOULD include an SRV-ID field as per [RFC4985] matching the expected DNS SRV queries clients will use for service discovery. If an SRV-ID field is present in a certificate, clients MUST match the SRV-ID value with the service type and domain that matches the DNS SRV request made by the client to discover the service.
Time zone data servers SHOULD protect themselves against poorly implemented or malicious clients by throttling high request rates or frequent requests for large amounts of data. Clients can avoid being throttled by using the polling capabilities outlined in Section 4.1.4. Servers MAY require some form of authentication or authorization of clients (including secondary servers), as per [RFC7235], to restrict which clients are allowed to access their service or provide better identification of problematic clients.
9. Privacy Considerations
The type and pattern of requests that a client makes can be used to "fingerprint" specific clients or devices and thus potentially used to track information about what the users of the clients might be doing. In particular, a client that only downloads time zone data on an as-needed basis will leak the fact that a user's device has moved from one time zone to another or that the user is receiving scheduling messages from another user in a different time zone.
Clients need to be aware of the potential ways in which an untrusted server or a network observer might be able to track them and take precautions such as the following:
- Always use TLS to connect to the server.
- Avoid use of TLS session resumption.
- Always fetch and synchronize the entire set of time zone data to avoid leaking information about which time zones are actually in use by the client.
- Randomize the order in which individual time zones are fetched using the "get" action, when retrieving a set of time zones based on a "list" action response.
- Avoid use of conditional HTTP requests [RFC7232] with the "get" action to prevent tracking of clients by servers generating client-specific ETag header field values.
- Avoid use of authenticated HTTP requests.
- When doing periodic polling to check for updates, apply a random (positive or negative) offset to the next poll time to avoid servers being able to identify the client by the specific periodicity of its polling behavior.
- A server trying to "fingerprint" clients might insert a "fake" time zone into the time zone data, using a unique identifier for each client making a request. The server can then watch for client requests that refer to that "fake" time zone and thus track the activity of each client. It is hard for clients to identify a "fake" time zone given that new time zones are added occasionally. One option to mitigate this would be for the client to make use of two time zone data distribution servers from two independent providers that provide time zone data from the same publisher. The client can then compare the list of time zones from each server (assuming they both have the same version of time zone data from the common publisher) and detect ones that appear to be added on one server and not the other. Alternatively, the client can check the publisher data directly to verify that time zones match the set the publisher has.
Note that some of the above recommendations will result in less efficient use of the protocol due to fetching data that might not be relevant to the client.
An organization can set up a secondary server within their own domain and configure their clients to use that server to protect the organization's users from the possibility of being tracked by an untrusted time zone data distribution server. Clients can then use more-efficient protocol interactions, free from the concerns above, on the basis that their organization's server is trusted. When doing this, the secondary server would follow the recommendations for clients (listed in the previous paragraph) so that the untrusted server is not able to gain information about the organization as a whole. Note, however, that client requests to the secondary server are subject to tracking by a network observer, so clients ought to apply some of the randomization techniques from the list above.
Servers that want to avoid accidentally storing information that could be used to identify clients can take the following precautions:
- Avoid logging client request activity, or anonymize information in any logs (e.g., client IP address, client user-agent details, authentication credentials, etc.).
- Add an unused HTTP response header to each response with a random amount of data in it (e.g., to pad the overall request size to the nearest power-of-2 or 128-byte boundary) to avoid exposing which time zones are being fetched when TLS is being used, via network traffic analysis.
10. IANA Considerations
This specification defines a new registry of "actions" for the time zone data distribution service protocol, defines a "well-known" URI using the registration procedure and template from Section 5.1 of [RFC5785], creates two new SRV service label aliases, and defines one new iCalendar property parameter as per the registration procedure in [RFC5545]. It also adds a new "TZDIST Identifiers Registry" to the IETF parameters URN sub-namespace as per [RFC3553] for use with protocol related error codes.
10.1. Service Actions Registration
IANA has created a new top-level category called "Time Zone Data Distribution Service (TZDIST) Parameters" and has put all the registries created herein into that category.
IANA has created a new registry called "TZDIST Service Actions", as defined below.
10.1.1. Service Actions Registration Procedure
This registry uses the "Specification Required" policy defined in [RFC5226], which makes use of a designated expert to review potential registrations.
The IETF has created a mailing list, tzdist-service@ietf.org, which is used for public discussion of time zone data distribution service actions proposals prior to registration. The IESG has appointed a designated expert who will monitor the tzdist-service@ietf.org mailing list and review registrations.
A Standards Track RFC is REQUIRED for changes to actions previously documented in a Standards Track RFC; otherwise, any public specification that satisfies the requirements of [RFC5226] is acceptable.
The registration procedure begins when a completed registration template, as defined below, is sent to tzdist-service@ietf.org and iana@iana.org. The designated expert is expected to tell IANA and the submitter of the registration whether the registration is approved, approved with minor changes, or rejected with cause, within two weeks. When a registration is rejected with cause, it can be resubmitted if the concerns listed in the cause are addressed. Decisions made by the designated expert can be appealed as per Section 7 of [RFC5226].
The designated expert MUST take the following requirements into account when reviewing the registration:
- A valid registration template MUST be provided by the submitter, with a clear description of what the action does.
- A proposed new action name MUST NOT conflict with any existing registered action name. A conflict includes a name that duplicates an existing one or that appears to be very similar to an existing one and could be a potential source of confusion.
- A proposed new action MUST NOT exactly duplicate the functionality of any existing actions. In cases where the new action functionality is very close to an existing action, the designated expert SHOULD clarify whether the submitter is aware of the existing action, and has an adequate reason for creating a new action with slight differences from an existing one.
- If a proposed action is an extension to an existing action, the changes MUST NOT conflict with the intent of the existing action or be made in a way that could cause interoperability problems for existing deployments of the protocol.
The IANA registry contains the name of the action ("Action Name") and a reference to the section of the specification where the action registration template is defined ("Reference").
10.1.2. Registration Template for Actions
An action is defined by completing the following template.
Name: The name of the action.

Request-URI Template: The URI template used in HTTP requests for the action.

Description: A general description of the action, its purpose, etc.

Parameters: A list of allowed request URI query parameters, indicating whether they are "REQUIRED" or "OPTIONAL" and whether they can occur only once or multiple times, together with the expected format of the parameter values.

Response: The nature of the response to the HTTP request, e.g., what format the response data is in.

Possible Error Codes: Possible error codes reported in a JSON "problem details" object if an HTTP request fails.
10.1.3. Actions Registry
The following table provides the initial content of the actions registry.
+---------------+-----------------------+
| Action Name   | Reference             |
+---------------+-----------------------+
| capabilities  | RFC 7808, Section 5.1 |
| list          | RFC 7808, Section 5.2 |
| get           | RFC 7808, Section 5.3 |
| expand        | RFC 7808, Section 5.4 |
| find          | RFC 7808, Section 5.5 |
| leapseconds   | RFC 7808, Section 5.6 |
+---------------+-----------------------+
10.2. timezone Well-Known URI Registration
IANA has added the following to the "Well-Known URIs" [RFC5785] registry:
URI suffix: timezone

Change controller: IESG.

Specification document(s): RFC 7808

Related information: None.
10.3. Service Name Registrations
IANA has added two new service names to the "Service Name and Transport Protocol Port Number Registry" [RFC6335], as defined below.
10.3.1. timezone Service Name Registration
Service Name: timezone

Transport Protocol(s): TCP

Assignee: IESG <iesg@ietf.org>

Contact: IETF Chair <chair@ietf.org>

Description: Time Zone Data Distribution Service - non-TLS

Reference: RFC 7808

Assignment Note: This is an extension of the http service.

Defined TXT keys: path=<context path> (as per Section 6 of [RFC6763]).
10.3.2. timezones Service Name Registration
Service Name: timezones

Transport Protocol(s): TCP

Assignee: IESG <iesg@ietf.org>

Contact: IETF Chair <chair@ietf.org>

Description: Time Zone Data Distribution Service - over TLS

Reference: RFC 7808

Assignment Note: This is an extension of the https service.

Defined TXT keys: path=<context path> (as per Section 6 of [RFC6763]).
10.4. TZDIST Identifiers Registry
IANA has registered a new URN sub-namespace within the IETF URN Sub- namespace for Registered Protocol Parameter Identifiers defined in [RFC3553].
Registrations in this registry follow the "IETF Review" [RFC5226] policy.
Registry name: TZDIST Identifiers

URN prefix: urn:ietf:params:tzdist

Specification: RFC 7808

Repository:

Index value: Values in this registry are URNs or URN prefixes that start with the prefix "urn:ietf:params:tzdist:". Each is registered independently. The prefix "urn:ietf:params:tzdist:error:" is used to represent specific error codes within the protocol as defined in the list of actions in Section 5 and used in problem reports (Section 4.1.7).
Each registration in the "TZDIST "TZDIST Identifiers" registry has the initial registrations included in the following sections.
10.4.1. Registration of invalid-action Error URN
The following URN has been registered in the "tzdist Identifiers" registry.
URN: urn:ietf:params:tzdist:error:invalid-action

Description: Generic error code for any invalid action.

Specification: RFC 7808, Section 5

Repository:

Contact: IESG <iesg@ietf.org>

Index value: N/A.
10.4.2. Registration of invalid-changedsince Error URN
The following URN has been registered in the "tzdist Identifiers" registry.
URN: urn:ietf:params:tzdist:error:invalid-changedsince

Description: Error code for incorrect use of the "changedsince" URI query parameter.

Specification: RFC 7808, Section 5.2

Repository:

Contact: IESG <iesg@ietf.org>

Index value: N/A.
10.4.3. Registration of tzid-not-found Error URN
The following URN has been registered in the "tzdist Identifiers" registry.
URN: urn:ietf:params:tzdist:error:tzid-not-found

Description: Error code for missing time zone identifier.

Specification: RFC 7808, Sections 5.3 and 5.4

Repository:

Contact: IESG <iesg@ietf.org>

Index value: N/A.
10.4.4. Registration of invalid-format Error URN
The following URN has been registered in the "tzdist Identifiers" registry.
URN: urn:ietf:params:tzdist:error:invalid-format

Description: Error code for unsupported HTTP Accept request header field value.

Specification: RFC 7808, Section 5.3

Repository:

Contact: IESG <iesg@ietf.org>

Index value: N/A.
10.4.5. Registration of invalid-start Error URN
The following URN has been registered in the "tzdist Identifiers" registry.
URN: urn:ietf:params:tzdist:error:invalid-start

Description: Error code for incorrect use of the "start" URI query parameter.

Specification: RFC 7808, Sections 5.3 and 5.4

Repository:

Contact: IESG <iesg@ietf.org>

Index value: N/A.
10.4.6. Registration of invalid-end Error URN
The following URN has been registered in the "tzdist Identifiers" registry.
URN: urn:ietf:params:tzdist:error:invalid-end

Description: Error code for incorrect use of the "end" URI query parameter.

Specification: RFC 7808, Sections 5.3 and 5.4

Repository:

Contact: IESG <iesg@ietf.org>

Index value: N/A.
10.4.7. Registration of invalid-pattern Error URN
The following URN has been registered in the "tzdist Identifiers" registry.
URN: urn:ietf:params:tzdist:error:invalid-pattern

Description: Error code for incorrect use of the "pattern" URI query parameter.

Specification: RFC 7808, Section 5.5

Repository:

Contact: IESG <iesg@ietf.org>

Index value: N/A.
10.5. iCalendar Property Registrations
This document defines the following new iCalendar properties, which have been added to the "Properties" registry under "iCalendar Element Registries" [RFC5545]:
+----------------+---------+-----------------------+
| Property       | Status  | Reference             |
+----------------+---------+-----------------------+
| TZUNTIL        | Current | RFC 7808, Section 7.1 |
| TZID-ALIAS-OF  | Current | RFC 7808, Section 7.2 |
+----------------+---------+-----------------------+
11. References
11.1. Normative References

[RFC4985] Santesson, S., "Internet X.509 Public Key Infrastructure Subject Alternative Name for Expression of Service Name", RFC 4985, DOI 10.17487/RFC4985, August 2007.

[RFC5545] Desruisseaux, B., Ed., "Internet Calendaring and Scheduling Core Object Specification (iCalendar)", RFC 5545, DOI 10.17487/RFC5545, September 2009.

[RFC6265] Barth, A., "HTTP State Management Mechanism", RFC 6265, DOI 10.17487/RFC6265, April 2011.

[RFC6321] Daboo, C., Douglass, M., and S. Lees, "xCal: The XML Format for iCalendar", RFC 6321, DOI 10.17487/RFC6321, August 2011.

[RFC6557] Lear, E. and P. Eggert, "Procedures for Maintaining the Time Zone Database", BCP 175, RFC 6557, DOI 10.17487/RFC6557, February 2012.

[RFC6570] Gregorio, J., Fielding, R., Hadley, M., Nottingham, M., and D. Orchard, "URI Template", RFC 6570, DOI 10.17487/RFC6570, March 2012.

[RFC7265] Kewisch, P., Daboo, C., and M. Douglass, "jCal: The JSON Format for iCalendar", RFC 7265, DOI 10.17487/RFC7265, May 2014.

[RFC7807] Nottingham, M. and E. Wilde, "Problem Details for HTTP APIs", RFC 7807, DOI 10.17487/RFC7807, March 2016.
11.2. Informative References
[RFC2131] Droms, R., "Dynamic Host Configuration Protocol", RFC 2131, DOI 10.17487/RFC2131, March 1997.
Acknowledgements
The authors would like to thank the members of the Calendaring and Scheduling Consortium's Time Zone Technical Committee, and the participants and chairs of the IETF tzdist working group. In particular, the following individuals have made important contributions to this work: Steve Allen, Lester Caine, Stephen Colebourne, Tobias Conradi, Steve Crocker, Paul Eggert, Daniel Kahn Gillmor, John Haug, Ciny Joy, Bryan Keller, Barry Leiba, Andrew McMillan, Ken Murchison, Tim Parenti, Arnaud Quillaud, Jose Edvaldo Saraiva, and Dave Thewlis.
This specification originated from work at the Calendaring and Scheduling Consortium, which has supported the development and testing of implementations of the specification.
Authors' Addresses
Michael Douglass
Spherical Cow Group
226 3rd Street
Troy, NY 12180
United States

Email: mdouglass@sphericalcowgroup.com

Cyrus Daboo
Apple Inc.
1 Infinite Loop
Cupertino, CA 95014
United States

Email: cyrus@daboo.name
|
http://pike.lysator.liu.se/docs/ietf/rfc/78/rfc7808.xml
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
A Quick Guide to Deploying Java Apps on OpenShift
Get your Java apps containerized with this guide to OpenShift deployments. Learn how to configure the tools you need, pass credentials, and trigger updates.
In this article, I’m going to show you how to deploy your applications on OpenShift (Minishift), connect them with other services exposed there, or use some other interesting deployment features provided by OpenShift. OpenShift is built on top of Docker containers and the Kubernetes container cluster orchestrator.
Running Minishift
We use Minishift to run a single-node OpenShift cluster on the local machine. The only requirement before installing MiniShift is having a virtualization tool installed. I use Oracle VirtualBox as a hypervisor, so I should set the
--vm-driver parameter to
virtualbox in my running command.
$ minishift start --vm-driver=virtualbox --memory=3G
Running Docker
It turns out that you can easily reuse the Docker daemon managed by Minishift in order to run Docker commands directly from your command line without any additional installation. To achieve this, just run the following command after starting Minishift.
@FOR /f "tokens=* delims=^L" %i IN ('minishift docker-env') DO @call %i
Running the OpenShift CLI
The last tool that is required before starting any practical exercise with Minishift is the CLI. It is available under the command
oc. To enable it on your command line, run the following commands:
$ minishift oc-env
$ SET PATH=C:\Users\minkowp\.minishift\cache\oc\v3.9.0\windows;%PATH%
$ REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i
Alternatively, you can use the OpenShift web console which is available under port 8443. On my Windows machine, it is, by default, launched under the address 192.168.99.100.
Building Docker Images of the Sample Applications
I prepared the two sample applications that are used for the purposes of presenting OpenShift deployment process. These are simple Java and Vert.x applications that provide an HTTP API and store data in MongoDB. We need to build Docker images with these applications. The source code is available on GitHub in the branch openshift. Here’s a sample Dockerfile for
account-vertx-service.
FROM openjdk:8-jre-alpine
ENV VERTICLE_FILE account-vertx-service-1.0-SNAPSHOT.jar
ENV VERTICLE_HOME /usr/verticles
ENV DATABASE_USER mongo
ENV DATABASE_PASSWORD mongo
ENV DATABASE_NAME db
EXPOSE 8095
COPY target/$VERTICLE_FILE $VERTICLE_HOME/
WORKDIR $VERTICLE_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $VERTICLE_FILE"]
Go to the
account-vertx-service directory and run the following command to build an image from the Dockerfile visible above.
$ docker build -t piomin/account-vertx-service .
The same steps should be performed for
customer-vertx-service. After that, you will have two images built, both in the same version
latest, which now can be deployed and run on Minishift.
Preparing the OpenShift Deployment Descriptor
When working with OpenShift, the first step of our application’s deployment is to create a YAML configuration file. This file contains basic information about the deployment like the containers used for running applications (1), scaling (2), triggers that drive automated deployments in response to events (3), or a strategy of deploying your pods on the platform (4).
kind: "DeploymentConfig" apiVersion: "v1" metadata: name: "account-service" spec: template: metadata: labels: name: "account-service" spec: containers: # (1) - name: "account-vertx-service" image: "piomin/account-vertx-service:latest" ports: - containerPort: 8095 protocol: "TCP" replicas: 1 # (2) triggers: # (3) - type: "ConfigChange" - type: "ImageChange" imageChangeParams: automatic: true containerNames: - "account-vertx-service" from: kind: "ImageStreamTag" name: "account-vertx-service:latest" strategy: # (4) type: "Rolling" paused: false revisionHistoryLimit: 2
Deployment configurations can be managed with the
oc command like any other resource. You can create a new configuration or update the existing one by using the
oc apply command.
$ oc apply -f account-deployment.yaml
You might be a little surprised, but this command does not trigger any build and does not start the pods. In fact, you have only created a resource of type
deploymentConfig, which describes the deployment process. You can start this process using some other
oc commands, but first, let’s take a closer look at the resources required by our application.
Injecting Environment Variables
As I have mentioned before, our sample applications use an external datasource. They need to open the connection to the existing MongoDB instance in order to store their data passed using HTTP endpoints exposed by the application. Here’s our
MongoVerticle class, which is responsible for establishing a client connection with MongoDB. It uses environment variables for setting security credentials and a database name.
public class MongoVerticle extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        ConfigStoreOptions envStore = new ConfigStoreOptions()
            .setType("env")
            .setConfig(new JsonObject().put("keys", new JsonArray()
                .add("DATABASE_USER")
                .add("DATABASE_PASSWORD")
                .add("DATABASE_NAME")));
        ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(envStore);
        ConfigRetriever retriever = ConfigRetriever.create(vertx, options);
        retriever.getConfig(r -> {
            String user = r.result().getString("DATABASE_USER");
            String password = r.result().getString("DATABASE_PASSWORD");
            String db = r.result().getString("DATABASE_NAME");
            JsonObject config = new JsonObject();
            config.put("connection_string", "mongodb://" + user + ":" + password + "@mongodb/" + db);
            final MongoClient client = MongoClient.createShared(vertx, config);
            final AccountRepository service = new AccountRepositoryImpl(client);
            ProxyHelper.registerService(AccountRepository.class, vertx, service, "account-service");
        });
    }

}
MongoDB is available in OpenShift’s catalog of predefined Docker images. You can easily deploy it on your Minishift instance just by clicking the “MongoDB” icon in “Catalog” tab. Your username and password will be automatically generated if you do not provide them during the deployment setup. All the properties are available as deployment environment variables and are stored as
secrets/mongodb, where
mongodb is the name of the deployment.
Environment variables can be easily injected into any other deployments using the
oc set command, and therefore, they are injected into the pod after performing the deployment process. The following command injects all secrets assigned to the
mongodb deployment to the configuration of our sample application’s deployment.
$ oc set env --from=secrets/mongodb dc/account-service
Importing Docker Images to OpenShift
A deployment configuration is ready. So, in theory, we could have started the deployment process. However, let's go back for a moment to the deployment config defined in Step 5, the section on deployment descriptors. We defined two triggers that cause a new replication controller to be created, which results in deploying a new version of the pod. The first of them is a configuration change trigger that fires whenever changes are detected in the pod template of the deployment configuration (
ConfigChange).
The second of them, the image change trigger (
ImageChange), fires when a new version of the Docker image is pushed to the repository. To be able to see whether an image in a repository has been changed, we have to define and create an image stream. Such an image stream does not contain any image data, but presents a single virtual view of related images, something similar to an image repository. Inside the deployment config file, we referred to the image stream
account-vertx-service, so the same name should be provided inside the image stream definition. In turn, when setting the
spec.dockerImageRepository field, we define the Docker pull specification for the image.
apiVersion: "v1" kind: "ImageStream" metadata: name: "account-vertx-service" spec: dockerImageRepository: "piomin/account-vertx-service"
Finally, we can create the resource on the OpenShift platform.
$ oc apply -f account-image.yaml
Running the Deployment
Once a deployment configuration has been prepared, and the Docker images have been successfully imported into the repository managed by the OpenShift instance, we may trigger the build using the following
oc commands.
$ oc rollout latest dc/account-service
$ oc rollout latest dc/customer-service
If everything goes fine, the new pods should be started for the defined deployments. You can easily check it out using the OpenShift web console.
Updating the Image Streams
We have already created two image streams related to the Docker repositories. Here’s the screen from the OpenShift web console that shows the list of available image streams.
To be able to push a new version of an image to OpenShift's internal Docker registry, we should first perform a
docker login against this registry using the user’s authentication token. To obtain the token from OpenShift, use the
oc whoami command, then pass it to your
docker login command with the
-p parameter.
$ oc whoami -t
Sz9_TXJQ2nyl4fYogR6freb3b0DGlJ133DVZx7-vMFM
$ docker login -u developer -p Sz9_TXJQ2nyl4fYogR6freb3b0DGlJ133DVZx7-vMFM
Now, if you perform any change in your application and rebuild your Docker image with the
latest tag, you have to push that image to the image stream on OpenShift. The address of the internal registry has been automatically generated by OpenShift, and you can check it out in the image stream’s details. For me, it is 172.30.1.1:5000.
$ docker tag piomin/account-vertx-service 172.30.1.1:5000/sample-deployment/account-vertx-service:latest
$ docker push 172.30.1.1:5000/sample-deployment/account-vertx-service
After pushing the new version of the Docker image to the image stream, a rollout of the application is started automatically. Here’s the screen from the OpenShift web console that shows the history of account-service's deployments.
Conclusion
I have shown you the steps of deploying your application on the OpenShift platform. Based on a sample Java application that connects to a database, I illustrated how to inject credentials into that application's pod in a way that is entirely transparent to the developer. I also performed an update of the application's Docker image in order to show how to trigger a new deployment when the image changes.
|
https://dzone.com/articles/a-quick-guide-to-deploying-java-apps-on-openshift
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
Real-Time Dashboard With MongoDB
In this article, look at a tutorial that explains how to build a real-time dashboard with MongoDB.
A real-time dashboard is a dashboard that contains charts that are automatically updated with the most current data available. The typical use case is to load a chart with some historical data first and then update it live as new data comes in. In this tutorial, you will learn how to build such real-time dashboards with only open-source tools and without any third-party services.
The main challenge of building such a dashboard is to design a proper architecture to react to changes in data all the way up from the database to the charts on the frontend. The part from the server to the frontend is a simple one, since we have a lot of technologies and frameworks built to handle real-time data updates. Going from the database to the server is much trickier. The underlying problem is that most databases that are good for analytic workloads don't provide out-of-the-box ways to subscribe to changes in the data. Instead, they are designed to be polled.
Cube.js, which acts as a middleman between your database and analytics dashboard, can provide a real-time WebSockets-based API for the front end while polling the database for changes in data.
You can check out the demo of the real-time dashboard built with Cube.js here.
On the frontend, Cube.js provides an API to load initial historical data and subscribe to all subsequent updates.
import cubejs from '@cubejs-client/core';
import WebSocketTransport from '@cubejs-client/ws-transport';
const cubejsApi = cubejs({
transport: new WebSocketTransport({
authorization: CUBEJS_TOKEN,
apiUrl: 'ws://localhost:4000/'
})
});
cubejsApi.subscribe({
measures: ['Logs.count'],
timeDimensions: [{
dimension: 'Logs.time',
granularity: 'hour',
dateRange: 'last 1440 minutes'
}]
}, (e, result) => {
if (e) {
// handle new error
} else {
// handle new result set
}
});
In our tutorial, we are going to use React as a frontend framework. Cube.js has a
@cubejs-client/react package, which provides React components for easy integration of Cube.js into a React app. It uses React hooks to load queries and subscribe to changes.
import { useCubeQuery } from '@cubejs-client/react';
const Chart = ({ query, cubejsApi }) => {
const {
resultSet,
error,
isLoading
} = useCubeQuery(query, { subscribe: true, cubejsApi });
if (isLoading) {
return <div>Loading...</div>;
}
if (error) {
return <pre>{error.toString()}</pre>;
}
if (!resultSet) {
return null;
}
return <LineChart resultSet={resultSet}></LineChart>;
};
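With the Chart component in place, rendering a live chart is just a matter of passing it a query and the WebSocket-enabled cubejsApi instance created earlier. The snippet below is a usage sketch, not part of the original tutorial code; the Logs cube and its measures are taken from the subscribe() example above, so substitute your own cube names.
// Usage sketch: render the Chart component defined above with the
// WebSocket-enabled cubejsApi instance. The Logs cube comes from the
// earlier subscribe() example; replace it with your own cube and measures.
const App = () => (
  <Chart
    cubejsApi={cubejsApi}
    query={{
      measures: ['Logs.count'],
      timeDimensions: [{
        dimension: 'Logs.time',
        granularity: 'hour',
        dateRange: 'last 1440 minutes'
      }]
    }}
  />
);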
In this tutorial, I'll show you how to build a real-time dashboard with MongoDB. The same approach could be used for any databases that Cube.js supports.
For quite a long time, doing analytics with MongoDB required additional overhead compared to modern SQL RDBMSs and data warehouses, because of its reliance on the aggregation pipeline and MapReduce practices. The MongoDB Connector for BI closes that gap by exposing MongoDB data through a MySQL-compatible SQL interface, which is what Cube.js connects to.
Setting up MongoDB and BI Connector
If you don’t have a MongoDB instance, you can download it here. The BI Connector can be downloaded here. Please make sure you use the MongoDB version that supports the MongoDB connector for BI.
After the BI connector has been installed, please start a
mongod instance first. If you use the downloaded installation, it can be started from its home directory like so:
$ bin/mongod
The BI connector itself can be started the same way:
$ bin/mongosqld
Please note that
mongosqld resides in another
bin directory. If everything works correctly, you should see a success log message in your shell for the
mongosqld process:
[initandlisten] waiting for connections at 127.0.0.1:3307
If you’re using MongoDB Atlas, you can use this guide to enable the BI Connector.
Getting a Sample Dataset
You can skip this step if you already have data for your dashboard.
We host a sample events collection, which you can use for a demo dashboard. Use the following commands to download and import it.
$ curl > events-dump.zip
$ unzip events-dump.zip
$ bin/mongorestore dump/stats/events.bson
Please make sure to restart the MongoDB BI connector instance in order to generate an up-to-date MySQL schema from the just added collection.
Creating Cube.js Application
We are going to use Cube.js CLI to create our backend application; let's first install it.
$ npm install -g cubejs-cli
Next, create a new Cube.js application with the MongoBI driver.
$ cubejs create real-time-dashboard -d mongobi
Go to the just created
real-time-dashboard folder and update the
.env file with your MongoDB credentials.
CUBEJS_DB_HOST=localhost
CUBEJS_DB_NAME=stats
CUBEJS_DB_PORT=3307
CUBEJS_DB_TYPE=mongobi
CUBEJS_API_SECRET=SECRET
Now let's start a Cube.js development server.
$ npm run dev
This starts a development server with a playground. We'll use it to generate a Cube.js schema, test our data and, finally, build a dashboard. Open http://localhost:4000 in your browser to access the playground.
Cube.js uses the data schema to generate SQL code, which will be executed in your database. The data schema is JavaScript code that defines measures and dimensions and how they map to SQL queries.
Cube.js can generate a simple data schema based on the database’s tables. Select the
events table and click “Generate Schema.” Cube.js will generate a schema file for it and add it to the Cube.js backend.
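For reference, a schema generated this way looks roughly like the sketch below. This is an assumption about the generator's output based on the columns of our sample events collection; your generated file will reflect your own table's columns.
// Rough sketch of an auto-generated schema for the events table.
// The generator typically emits a count measure plus dimensions for the
// detected columns; the column names here come from the sample dataset.
cube(`Events`, {
  sql: `SELECT * FROM stats.events`,

  measures: {
    count: {
      type: `count`
    }
  },

  dimensions: {
    anonymousId: {
      sql: `anonymousId`,
      type: `string`
    },

    eventType: {
      sql: `eventType`,
      type: `string`
    },

    timestamp: {
      sql: `timestamp`,
      type: `time`
    }
  }
});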
Although auto-generated schema is a good way to get started, in many cases you'd need to add more complex logic into your Cube.js schema. You can learn more about data schema and its features here. In our case, we want to create several advanced measures and dimensions for our real-time dashboard.
Replace the content of
schema/Events.js with the following.
cube(`Events`, {
sql: `SELECT * FROM stats.events`,
refreshKey: {
sql: `SELECT UNIX_TIMESTAMP()`
},
measures: {
count: {
type: `count`
},
online: {
type: `countDistinct`,
sql : `${anonymousId}`,
filters: [
{ sql: `${timestamp} > date_sub(now(), interval 3 minute)` }
]
},
pageView: {
type: `count`,
filters: [
{ sql: `${eventType} = 'pageView'` }
]
},
buttonClick: {
type: `count`,
filters: [
{ sql: `${eventType} = 'buttonCLicked'` }
]
}
},
dimensions: {
secondsAgo: {
sql: `TIMESTAMPDIFF(SECOND, timestamp, NOW())`,
type: `number`
},
anonymousId: {
sql: `anonymousId`,
type: `string`
},
eventType: {
sql: `eventType`,
type: `string`
},
timestamp: {
sql: `timestamp`,
type: `time`
}
}
});
First, note the
refreshKey property in the cube above: it tells Cube.js when to invalidate its cache for this cube. Setting it to
SELECT UNIX_TIMESTAMP() will refresh the cache every second. You need to carefully select the best refresh strategy depending on your data to get the freshest data when you need it, but, at the same time, not overwhelm the database with a lot of unnecessary queries.
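If a one-second refresh is too aggressive for your database, you can coarsen the key. The variant below is a hypothetical alternative, not part of this tutorial's schema: FLOOR(UNIX_TIMESTAMP() / 10) only changes its value once every ten seconds, so the cache is invalidated at most that often.
// Hypothetical, less aggressive refresh strategy: invalidate the cache
// roughly every 10 seconds instead of every second.
cube(`Events`, {
  sql: `SELECT * FROM stats.events`,

  refreshKey: {
    sql: `SELECT FLOOR(UNIX_TIMESTAMP() / 10)`
  },

  // ...the measures and dimensions stay exactly as defined above
});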
So far, we've successfully configured a database and created a Cube.js schema for our dashboard. Now it is time to build the dashboard itself!
Cube.js Playground can generate a boilerplate frontend app. It is a convenient way to start developing a dashboard or analytics application. You can select your favorite frontend framework and charting library and Playground will generate a new application and wire all things together to work with the Cube.js backend API.
We'll use React and Chart.js in our tutorial. To generate a new application, navigate to "Dashboard App,” select "React Antd Static" with "Chart.js," and click on the “Create dashboard app” button.
It could take a while to generate an app and install all the dependencies. Once it is done, you will have a
dashboard-app folder inside your Cube.js project folder. To start a dashboard app, either go to the “Dashboard App” tab in the playground and hit the “Start” button, or run the following command inside the
dashboard-app folder:
$ npm start
Make sure the Cube.js backend process is up and running, since our dashboard uses its API. The frontend application runs on its own development server, on a separate port from the Cube.js backend.
To add a chart on the dashboard, you can either edit the
dashboard-app/src/pages/DashboardPage.js file or use Cube.js Playground. To add a chart via Playground, navigate to the "Build" tab, build a chart you want, and click the "Add to Dashboard" button.
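Either way, a chart on the generated dashboard page boils down to a Cube.js query plus a chart type that end up in the vizState passed to the chart renderer. The item below is a sketch under that assumption; the exact structure of the generated DashboardPage.js may differ slightly between template versions.
// Sketch of a dashboard item as it might appear in
// dashboard-app/src/pages/DashboardPage.js. The vizState (query + chartType)
// is the part consumed by the ChartRenderer component shown later.
const pageViewsItem = {
  id: 1,
  name: 'Page Views per Hour',
  vizState: {
    query: {
      measures: ['Events.pageView'],
      timeDimensions: [{
        dimension: 'Events.timestamp',
        granularity: 'hour',
        dateRange: 'last 1440 minutes'
      }]
    },
    chartType: 'line'
  }
};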
Configure Cube.js for Real-Time Data Fetch
We need to do a few things for real-time support in Cube.js. First, let's enable WebSockets transport on the backend by setting the
CUBEJS_WEB_SOCKETS environment variable.
Add the following line to the
.env file.
CUBEJS_WEB_SOCKETS=true
Next, we need to update the
index.js file to pass a few additional options to the Cube.js server.
Update the content of the
index.js file with the following.
const CubejsServer = require('@cubejs-backend/server');
const server = new CubejsServer({
processSubscriptionsInterval: 1,
orchestratorOptions: {
queryCacheOptions: {
refreshKeyRenewalThreshold: 1,
}
}
});
server.listen().then(({ port }) => {
console.log(`Cube.js server is listening on ${port}`);
}).catch(e => {
console.error('Fatal error during server start: ');
console.error(e.stack || e);
});
We have passed two configuration options to the Cube.js backend. The first,
processSubscriptionsInterval, controls the polling interval. The default value is 5 seconds; we are setting it to 1 second to make it slightly more real-time.
The second,
refreshKeyRenewalThreshold, controls how often the
refreshKey is executed. The default value of this option is 120, which is 2 minutes. In the previous part, we've changed
refreshKey to reset a cache every second, so it doesn't make sense for us to wait an additional 120 seconds to invalidate the
refreshKey result itself, so we are changing it to 1 second as well.
That is all the updates we need to make on the backend part. Now, let's update the code of our dashboard app. First, let's install the
@cubejs-client/ws-transport package. It provides a WebSocket transport to work with the Cube.js real-time API.
Run the following command in your terminal.
$ cd dashboard-app
$ npm install -s @cubejs-client/ws-transport
Next, update the
src/App.js file to use real-time transport to work with the Cube.js API.
-const API_URL = "";
+import WebSocketTransport from '@cubejs-client/ws-transport';
const CUBEJS_TOKEN = "SECRET";
-const cubejsApi = cubejs(CUBEJS_TOKEN, {
- apiUrl: `${API_URL}/cubejs-api/v1`
+const cubejsApi = cubejs({
+ transport: new WebSocketTransport({
+ authorization: CUBEJS_TOKEN,
+ apiUrl: 'ws://localhost:4000/'
+ })
});
Now, we need to update how we request a query itself in the
src/components/ChartRenderer.js. Make the following changes.
-const ChartRenderer = ({ vizState }) => {
+const ChartRenderer = ({ vizState, cubejsApi }) => {
const { query, chartType } = vizState;
const component = TypeToMemoChartComponent[chartType];
- const renderProps = useCubeQuery(query);
+ const renderProps = useCubeQuery(query, { subscribe: true, cubejsApi });
return component && renderChart(component)(renderProps);
};
That's it! Now you can add more charts to your dashboard, perform changes in the database, and see how charts are updating in real time.
The GIF below shows the dashboard with the total count of events, number of users online, and the table with the last events. You can see the charts update in real time as I insert new data in the database.
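If you want to try this yourself, you can insert a test document straight from the mongo shell. The field names below mirror the dimensions of our Events cube (anonymousId, eventType, timestamp); adjust them if your collection has a different shape.
// Run in the mongo shell. Inserts one synthetic pageView event into the
// stats.events collection used throughout this tutorial.
db.getSiblingDB("stats").events.insertOne({
  anonymousId: "test-user-1",
  eventType: "pageView",
  timestamp: new Date()
});
Within a second or two the event count and page view charts should pick up the new document: the refreshKey invalidates the Cube.js cache every second, and the WebSocket transport pushes the updated result set to the browser.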
You can also check this online live demo with various charts displaying real-time data.
Congratulations on completing this guide!
I’d love to hear about your experience following this guide; please feel free to leave a comment below!
To learn how to deploy this dashboard, you can see the full version of the Real-Time Dashboard guide here.
Further Reading
Building MongoDB Dashboard Using Node.js
Which Is the Best MongoDB GUI? — 2019 Update
Published at DZone with permission of Artyom Keydunov. See the original article here.
https://dzone.com/articles/real-time-dashboard-with-mongodb
Downloads in this release
You can download the WebJobs SDK from the NuGet gallery and install or update these packages using the NuGet Package Manager Console with the following command:
Install-Package Microsoft.Azure.WebJobs -Pre
To use Microsoft Azure Service Bus triggers, use the following command to install the package:
Install-Package Microsoft.Azure.WebJobs.ServiceBus -Pre
Updates in this preview
This update contains several bug fixes, and we are highlighting a few here.
- Blob Triggers are not triggered for new or updated blobs: There have been reports of Blob Triggers not working, and we identified the root cause of the issue! Our findings indicate that the SDK was not listening for all the Azure Storage blob events and wasn't able to detect new or updated blobs in some cases. An example of this is an Azure WebJob not processing all blobs. Other ways of experiencing this issue are if you were using PutBlock or PutBlockList, or uploading blobs using Visual Studio Server Explorer.
- Instance methods can be triggered by the SDK: Starting with this release, the SDK will index both static and non-static (instance) methods.
public class Functions
{
    public void ProcessQueue([QueueTrigger("input")] string input)
    {
    }
}
Samples
Samples for the WebJobs SDK are available online. Some samples include:
- How to use triggers and bindings for blobs, tables, queues and Service Bus
- PhluffyShuffy where a customer can upload pictures that trigger functions to process the pictures from blob storage
- PhluffyLogs
https://azure.microsoft.com/ja-jp/blog/announcing-the-1-0-1-alpha-preview-of-microsoft-azure-webjobs-sdk/
added package target to Makefile.in some: doc-if 750: doc-ahead 751: doc-then 752: doc-begin 753: doc-until 754: doc-again 755: doc-cs-pick 756: doc: doc-else 767: doc-while 768: doc-repeat 769: 770: Counted loop words constitute a separate group of words: 771: 772: doc-?do 773: doc-do 774: doc-for 775: doc-loop 776: doc-s+loop 777: doc-+loop 778: doc-next 779: doc-leave 780: doc-?leave 781: doc-unloop 782: doc definition (@code{LOOP} etc. compile an @code{UNLOOP} on the 788: fall-through path). Also, you have to ensure that all @code{LEAVE}s are 789: resolved (by using one of the loop-ending words or @code{UNDO}). 790: 791: Another group of control structure words are 792: 793: doc-case 794: doc-endcase 795: doc-of 796: doc-endof 797: 798: @i{case-sys} and @i{of-sys} cannot be processed using @code{cs-pick} and 799: @code{cs-roll}. 800: 801: @subsubsection Programming Style 802: 803: In order to ensure readability we recommend that you do not create 804: arbitrary control structures directly, but define new control structure 805: words for the control structure you want and use these words in your 806: program. 807: 808: E.g., instead of writing 809: 810: @example 811: begin 812: ... 813: if [ 1 cs-roll ] 814: ... 815: again then 816: @end example 817: 818: we recommend defining control structure words, e.g., 819: 820: @example 821: : while ( dest -- orig dest ) 822: POSTPONE if 823: 1 cs-roll ; immediate 824: 825: : repeat ( orig dest -- ) 826: POSTPONE again 827: POSTPONE then ; immediate 828: @end example 829: 830: and then using these to create the control structure: 831: 832: @example 833: begin 834: ... 835: while 836: ... 837: repeat 838: @end example 839: 840: That's much easier to read, isn't it? Of course, @code{BEGIN} and 841: @code{WHILE} are predefined, so in this example it would not be 842: necessary to define them. 843: 844: @subsection Calls and returns 845: 846: A definition can be called simply be writing the name of the 847: definition. When the end of the definition is reached, it returns. An earlier return can be forced using 848: 849: doc-exit 850: 851: Don't forget to clean up the return stack and @code{UNLOOP} any 852: outstanding @code{?DO}...@code{LOOP}s before @code{EXIT}ing. The 853: primitive compiled by @code{EXIT} is 854: 855: doc-;s 856: 857: @subsection Exception Handling 858: 859: doc-catch 860: doc-throw 861: 862: @node Locals 863: @section Locals 864: 865: Local variables can make Forth programming more enjoyable and Forth 866: programs easier to read. Unfortunately, the locals of ANS Forth are 867: laden with restrictions. Therefore, we provide not only the ANS Forth 868: locals wordset, but also our own, more powerful locals wordset (we 869: implemented the ANS Forth locals wordset through our locals wordset). 870: 871: @menu 872: @end menu 873: 874: @subsection gforth locals 875: 876: Locals can be defined with 877: 878: @example 879: @{ local1 local2 ... -- comment @} 880: @end example 881: or 882: @example 883: @{ local1 local2 ... @} 884: @end example 885: 886: E.g., 887: @example 888: : max @{ n1 n2 -- n3 @} 889: n1 n2 > if 890: n1 891: else 892: n2 893: endif ; 894: @end example 895: 896: The similarity of locals definitions with stack comments is intended. A 897: locals definition often replaces the stack comment of a word. The order 898: of the locals corresponds to the order in a stack comment and everything 899: after the @code{--} is really a comment. 
900: 901: This similarity has one disadvantage: It is too easy to confuse locals 902: declarations with stack comments, causing bugs and making them hard to 903: find. However, this problem can be avoided by appropriate coding 904: conventions: Do not use both notations in the same program. If you do, 905: they should be distinguished using additional means, e.g. by position. 906: 907: The name of the local may be preceded by a type specifier, e.g., 908: @code{F:} for a floating point value: 909: 910: @example 911: : CX* @{ F: Ar F: Ai F: Br F: Bi -- Cr Ci @} 912: \ complex multiplication 913: Ar Br f* Ai Bi f* f- 914: Ar Bi f* Ai Br f* f+ ; 915: @end example 916: 917: GNU Forth currently supports cells (@code{W:}, @code{W^}), doubles 918: (@code{D:}, @code{D^}), floats (@code{F:}, @code{F^}) and characters 919: (@code{C:}, @code{C^}) in two flavours: a value-flavoured local (defined 920: with @code{W:}, @code{D:} etc.) produces its value and can be changed 921: with @code{TO}. A variable-flavoured local (defined with @code{W^} etc.) 922: produces its address (which becomes invalid when the variable's scope is 923: left). E.g., the standard word @code{emit} can be defined in therms of 924: @code{type} like this: 925: 926: @example 927: : emit @{ C^ char* -- @} 928: char* 1 type ; 929: @end example 930: 931: A local without type specifier is a @code{W:} local. Both flavours of 932: locals are initialized with values from the data or FP stack. 933: 934: Currently there is no way to define locals with user-defined data 935: structures, but we are working on it. 936: 937: GNU Forth allows defining locals everywhere in a colon definition. This poses the following questions: 938: 939: @subsubsection Where are locals visible by name? 940: 941: Basically, the answer is that locals are visible where you would expect 942: it in block-structured languages, and sometimes a little longer. If you 943: want to restrict the scope of a local, enclose its definition in 944: @code{SCOPE}...@code{ENDSCOPE}. 945: 946: doc-scope 947: doc-endscope 948: 949: These words behave like control structure words, so you can use them 950: with @code{CS-PICK} and @code{CS-ROLL} to restrict the scope in 951: arbitrary ways. 952: 953: If you want a more exact answer to the visibility question, here's the 954: basic principle: A local is visible in all places that can only be 955: reached through the definition of the local@footnote{In compiler 956: construction terminology, all places dominated by the definition of the 957: local.}. In other words, it is not visible in places that can be reached 958: without going through the definition of the local. E.g., locals defined 959: in @code{IF}...@code{ENDIF} are visible until the @code{ENDIF}, locals 960: defined in @code{BEGIN}...@code{UNTIL} are visible after the 961: @code{UNTIL} (until, e.g., a subsequent @code{ENDSCOPE}). 962: 963: The reasoning behind this solution is: We want to have the locals 964: visible as long as it is meaningful. The user can always make the 965: visibility shorter by using explicit scoping. In a place that can 966: only be reached through the definition of a local, the meaning of a 967: local name is clear. In other places it is not: How is the local 968: initialized at the control flow path that does not contain the 969: definition? Which local is meant, if the same name is defined twice in 970: two independent control flow paths? 971: 972: This should be enough detail for nearly all users, so you can skip the 973: rest of this section. 
If you relly must know all the gory details and 974: options, read on. 975: 976: In order to implement this rule, the compiler has to know which places 977: are unreachable. It knows this automatically after @code{AHEAD}, 978: @code{AGAIN}, @code{EXIT} and @code{LEAVE}; in other cases (e.g., after 979: most @code{THROW}s), you can use the word @code{UNREACHABLE} to tell the 980: compiler that the control flow never reaches that place. If 981: @code{UNREACHABLE} is not used where it could, the only consequence is 982: that the visibility of some locals is more limited than the rule above 983: says. If @code{UNREACHABLE} is used where it should not (i.e., if you 984: lie to the compiler), buggy code will be produced. 985: 986: Another problem with this rule is that at @code{BEGIN}, the compiler 987: does not know which locals will be visible on the incoming 988: back-edge. All problems discussed in the following are due to this 989: ignorance of the compiler (we discuss the problems using @code{BEGIN} 990: loops as examples; the discussion also applies to @code{?DO} and other 991: loops). Perhaps the most insidious example is: 992: @example 993: AHEAD 994: BEGIN 995: x 996: [ 1 CS-ROLL ] THEN 997: { x } 998: ... 999: UNTIL 1000: @end example 1001: 1002: This should be legal according to the visibility rule. The use of 1003: @code{x} can only be reached through the definition; but that appears 1004: textually below the use. 1005: 1006: From this example it is clear that the visibility rules cannot be fully 1007: implemented without major headaches. Our implementation treats common 1008: cases as advertised and the exceptions are treated in a safe way: The 1009: compiler makes a reasonable guess about the locals visible after a 1010: @code{BEGIN}; if it is too pessimistic, the 1011: user will get a spurious error about the local not being defined; if the 1012: compiler is too optimistic, it will notice this later and issue a 1013: warning. In the case above the compiler would complain about @code{x} 1014: being undefined at its use. You can see from the obscure examples in 1015: this section that it takes quite unusual control structures to get the 1016: compiler into trouble, and even then it will often do fine. 1017: 1018: If the @code{BEGIN} is reachable from above, the most optimistic guess 1019: is that all locals visible before the @code{BEGIN} will also be 1020: visible after the @code{BEGIN}. This guess is valid for all loops that 1021: are entered only through the @code{BEGIN}, in particular, for normal 1022: @code{BEGIN}...@code{WHILE}...@code{REPEAT} and 1023: @code{BEGIN}...@code{UNTIL} loops and it is implemented in our 1024: compiler. When the branch to the @code{BEGIN} is finally generated by 1025: @code{AGAIN} or @code{UNTIL}, the compiler checks the guess and 1026: warns the user if it was too optimisitic: 1027: @example 1028: IF 1029: { x } 1030: BEGIN 1031: \ x ? 1032: [ 1 cs-roll ] THEN 1033: ... 1034: UNTIL 1035: @end example 1036: 1037: Here, @code{x} lives only until the @code{BEGIN}, but the compiler 1038: optimistically assumes that it lives until the @code{THEN}. It notices 1039: this difference when it compiles the @code{UNTIL} and issues a 1040: warning. The user can avoid the warning, and make sure that @code{x} 1041: is not used in the wrong area by using explicit scoping: 1042: @example 1043: IF 1044: SCOPE 1045: { x } 1046: ENDSCOPE 1047: BEGIN 1048: [ 1 cs-roll ] THEN 1049: ... 
1050: UNTIL 1051: @end example 1052: 1053: Since the guess is optimistic, there will be no spurious error messages 1054: about undefined locals. 1055: 1056: If the @code{BEGIN} is not reachable from above (e.g., after 1057: @code{AHEAD} or @code{EXIT}), the compiler cannot even make an 1058: optimistic guess, as the locals visible after the @code{BEGIN} may be 1059: defined later. Therefore, the compiler assumes that no locals are 1060: visible after the @code{BEGIN}. However, the useer can use 1061: @code{ASSUME-LIVE} to make the compiler assume that the same locals are 1062: visible at the BEGIN as at the point where the item was created. 1063: 1064: doc-assume-live 1065: 1066: E.g., 1067: @example 1068: { x } 1069: AHEAD 1070: ASSUME-LIVE 1071: BEGIN 1072: x 1073: [ 1 CS-ROLL ] THEN 1074: ... 1075: UNTIL 1076: @end example 1077: 1078: Other cases where the locals are defined before the @code{BEGIN} can be 1079: handled by inserting an appropriate @code{CS-ROLL} before the 1080: @code{ASSUME-LIVE} (and changing the control-flow stack manipulation 1081: behind the @code{ASSUME-LIVE}). 1082: 1083: Cases where locals are defined after the @code{BEGIN} (but should be 1084: visible immediately after the @code{BEGIN}) can only be handled by 1085: rearranging the loop. E.g., the ``most insidious'' example above can be 1086: arranged into: 1087: @example 1088: BEGIN 1089: { x } 1090: ... 0= 1091: WHILE 1092: x 1093: REPEAT 1094: @end example 1095: 1096: @subsubsection How long do locals live? 1097: 1098: The right answer for the lifetime question would be: A local lives at 1099: least as long as it can be accessed. For a value-flavoured local this 1100: means: until the end of its visibility. However, a variable-flavoured 1101: local could be accessed through its address far beyond its visibility 1102: scope. Ultimately, this would mean that such locals would have to be 1103: garbage collected. Since this entails un-Forth-like implementation 1104: complexities, I adopted the same cowardly solution as some other 1105: languages (e.g., C): The local lives only as long as it is visible; 1106: afterwards its address is invalid (and programs that access it 1107: afterwards are erroneous). 1108: 1109: @subsubsection Programming Style 1110: 1111: The freedom to define locals anywhere has the potential to change 1112: programming styles dramatically. In particular, the need to use the 1113: return stack for intermediate storage vanishes. Moreover, all stack 1114: manipulations (except @code{PICK}s and @code{ROLL}s with run-time 1115: determined arguments) can be eliminated: If the stack items are in the 1116: wrong order, just write a locals definition for all of them; then 1117: write the items in the order you want. 1118: 1119: This seems a little far-fetched and eliminating stack manipulations is 1120: unlikely to become a conscious programming objective. Still, the 1121: number of stack manipulations will be reduced dramatically if local 1122: variables are used liberally (e.g., compare @code{max} in \sect{misc} 1123: with a traditional implementation of @code{max}). 1124: 1125: This shows one potential benefit of locals: making Forth programs more 1126: readable. Of course, this benefit will only be realized if the 1127: programmers continue to honour the principle of factoring instead of 1128: using the added latitude to make the words longer. 1129: 1130: Using @code{TO} can and should be avoided. 
Without @code{TO}, 1131: every value-flavoured local has only a single assignment and many 1132: advantages of functional languages apply to Forth. I.e., programs are 1133: easier to analyse, to optimize and to read: It is clear from the 1134: definition what the local stands for, it does not turn into something 1135: different later. 1136: 1137: E.g., a definition using @code{TO} might look like this: 1138: @example 1139: : strcmp @{ addr1 u1 addr2 u2 -- n @} 1140: u1 u2 min 0 1141: ?do 1142: addr1 c@ addr2 c@ - ?dup 1143: if 1144: unloop exit 1145: then 1146: addr1 char+ TO addr1 1147: addr2 char+ TO addr2 1148: loop 1149: u1 u2 - ; 1150: @end example 1151: Here, @code{TO} is used to update @code{addr1} and @code{addr2} at 1152: every loop iteration. @code{strcmp} is a typical example of the 1153: readability problems of using @code{TO}. When you start reading 1154: @code{strcmp}, you think that @code{addr1} refers to the start of the 1155: string. Only near the end of the loop you realize that it is something 1156: else. 1157: 1158: This can be avoided by defining two locals at the start of the loop that 1159: are initialized with the right value for the current iteration. 1160: @example 1161: : strcmp @{ addr1 u1 addr2 u2 -- n @} 1162: addr1 addr2 1163: u1 u2 min 0 1164: ?do @{ s1 s2 @} 1165: s1 c@ s2 c@ - ?dup 1166: if 1167: unloop exit 1168: then 1169: s1 char+ s2 char+ 1170: loop 1171: 2drop 1172: u1 u2 - ; 1173: @end example 1174: Here it is clear from the start that @code{s1} has a different value 1175: in every loop iteration. 1176: 1177: @subsubsection Implementation 1178: 1179: GNU Forth uses an extra locals stack. The most compelling reason for 1180: this is that the return stack is not float-aligned; using an extra stack 1181: also eliminates the problems and restrictions of using the return stack 1182: as locals stack. Like the other stacks, the locals stack grows toward 1183: lower addresses. A few primitives allow an efficient implementation: 1184: 1185: doc-@local# 1186: doc-f@local# 1187: doc-laddr# 1188: doc-lp+!# 1189: doc-lp! 1190: doc->l 1191: doc-f>l 1192: 1193: In addition to these primitives, some specializations of these 1194: primitives for commonly occurring inline arguments are provided for 1195: efficiency reasons, e.g., @code{@@local0} as specialization of 1196: @code{@@local#} for the inline argument 0. The following compiling words 1197: compile the right specialized version, or the general version, as 1198: appropriate: 1199: 1200: doc-compile-@@local 1201: doc-compile-f@@local 1202: doc-compile-lp+! 1203: 1204: Combinations of conditional branches and @code{lp+!#} like 1205: @code{?branch-lp+!#} (the locals pointer is only changed if the branch 1206: is taken) are provided for efficiency and correctness in loops. 1207: 1208: A special area in the dictionary space is reserved for keeping the 1209: local variable names. @code{@{} switches the dictionary pointer to this 1210: area and @code{@}} switches it back and generates the locals 1211: initializing code. @code{W:} etc.@ are normal defining words. This 1212: special area is cleared at the start of every colon definition. 1213: 1214: A special feature of GNU Forths dictionary is used to implement the 1215: definition of locals without type specifiers: every wordlist (aka 1216: vocabulary) has its own methods for searching 1217: etc. (@xref{dictionary}). 
For the present purpose we defined a wordlist 1218: with a special search method: When it is searched for a word, it 1219: actually creates that word using @code{W:}. @code{@{} changes the search 1220: order to first search the wordlist containing @code{@}}, @code{W:} etc., 1221: and then the wordlist for defining locals without type specifiers. 1222: 1223: The lifetime rules support a stack discipline within a colon 1224: definition: The lifetime of a local is either nested with other locals 1225: lifetimes or it does not overlap them. 1226: 1227: At @code{BEGIN}, @code{IF}, and @code{AHEAD} no code for locals stack 1228: pointer manipulation is generated. Between control structure words 1229: locals definitions can push locals onto the locals stack. @code{AGAIN} 1230: is the simplest of the other three control flow words. It has to 1231: restore the locals stack depth of the corresponding @code{BEGIN} 1232: before branching. The code looks like this: 1233: @format 1234: @code{lp+!#} current-locals-size @minus{} dest-locals-size 1235: @code{branch} <begin> 1236: @end format 1237: 1238: @code{UNTIL} is a little more complicated: If it branches back, it 1239: must adjust the stack just like @code{AGAIN}. But if it falls through, 1240: the locals stack must not be changed. The compiler generates the 1241: following code: 1242: @format 1243: @code{?branch-lp+!#} <begin> current-locals-size @minus{} dest-locals-size 1244: @end format 1245: The locals stack pointer is only adjusted if the branch is taken. 1246: 1247: @code{THEN} can produce somewhat inefficient code: 1248: @format 1249: @code{lp+!#} current-locals-size @minus{} orig-locals-size 1250: <orig target>: 1251: @code{lp+!#} orig-locals-size @minus{} new-locals-size 1252: @end format 1253: The second @code{lp+!#} adjusts the locals stack pointer from the 1254: level at the {\em orig} point to the level after the @code{THEN}. The 1255: first @code{lp+!#} adjusts the locals stack pointer from the current 1256: level to the level at the orig point, so the complete effect is an 1257: adjustment from the current level to the right level after the 1258: @code{THEN}. 1259: 1260: In a conventional Forth implementation a dest control-flow stack entry 1261: is just the target address and an orig entry is just the address to be 1262: patched. Our locals implementation adds a wordlist to every orig or dest 1263: item. It is the list of locals visible (or assumed visible) at the point 1264: described by the entry. Our implementation also adds a tag to identify 1265: the kind of entry, in particular to differentiate between live and dead 1266: (reachable and unreachable) orig entries. 1267: 1268: A few unusual operations have to be performed on locals wordlists: 1269: 1270: doc-common-list 1271: doc-sub-list? 1272: doc-list-size 1273: 1274: Several features of our locals wordlist implementation make these 1275: operations easy to implement: The locals wordlists are organised as 1276: linked lists; the tails of these lists are shared, if the lists 1277: contain some of the same locals; and the address of a name is greater 1278: than the address of the names behind it in the list. 1279: 1280: Another important implementation detail is the variable 1281: @code{dead-code}. It is used by @code{BEGIN} and @code{THEN} to 1282: determine if they can be reached directly or only through the branch 1283: that they resolve. 
@code{dead-code} is set by @code{UNREACHABLE}, 1284: @code{AHEAD}, @code{EXIT} etc., and cleared at the start of a colon 1285: definition, by @code{BEGIN} and usually by @code{THEN}. 1286: 1287: Counted loops are similar to other loops in most respects, but 1288: @code{LEAVE} requires special attention: It performs basically the same 1289: service as @code{AHEAD}, but it does not create a control-flow stack 1290: entry. Therefore the information has to be stored elsewhere; 1291: traditionally, the information was stored in the target fields of the 1292: branches created by the @code{LEAVE}s, by organizing these fields into a 1293: linked list. Unfortunately, this clever trick does not provide enough 1294: space for storing our extended control flow information. Therefore, we 1295: introduce another stack, the leave stack. It contains the control-flow 1296: stack entries for all unresolved @code{LEAVE}s. 1297: 1298: Local names are kept until the end of the colon definition, even if 1299: they are no longer visible in any control-flow path. In a few cases 1300: this may lead to increased space needs for the locals name area, but 1301: usually less than reclaiming this space would cost in code size. 1302: 1303: 1304: @subsection ANS Forth locals 1305: 1306: The ANS Forth locals wordset does not define a syntax for locals, but 1307: words that make it possible to define various syntaxes. One of the 1308: possible syntaxes is a subset of the syntax we used in the gforth locals 1309: wordset, i.e.: 1310: 1311: @example 1312: @{ local1 local2 ... -- comment @} 1313: @end example 1314: or 1315: @example 1316: @{ local1 local2 ... @} 1317: @end example 1318: 1319: The order of the locals corresponds to the order in a stack comment. The 1320: restrictions are: 1321: 1322: @itemize @bullet 1323: @item 1324: Locals can only be cell-sized values (no type specifers are allowed). 1325: @item 1326: Locals can be defined only outside control structures. 1327: @item 1328: Locals can interfere with explicit usage of the return stack. For the 1329: exact (and long) rules, see the standard. If you don't use return stack 1330: accessing words in a definition using locals, you will we all right. The 1331: purpose of this rule is to make locals implementation on the return 1332: stack easier. 1333: @item 1334: The whole definition must be in one line. 1335: @end itemize 1336: 1337: Locals defined in this way behave like @code{VALUE}s 1338: (@xref{values}). I.e., they are initialized from the stack. Using their 1339: name produces their value. Their value can be changed using @code{TO}. 1340: 1341: Since this syntax is supported by gforth directly, you need not do 1342: anything to use it. If you want to port a program using this syntax to 1343: another ANS Forth system, use @file{anslocal.fs} to implement the syntax 1344: on the other system. 1345: 1346: Note that a syntax shown in the standard, section A.13 looks 1347: similar, but is quite different in having the order of locals 1348: reversed. Beware! 1349: 1350: The ANS Forth locals wordset itself consists of the following word 1351: 1352: doc-(local) 1353: 1354: The ANS Forth locals extension wordset defines a syntax, but it is so 1355: awful that we strongly recommend not to use it. We have implemented this 1356: syntax to make porting to gforth easy, but do not document it here. 
The 1357: problem with this syntax is that the locals are defined in an order 1358: reversed with respect to the standard stack comment notation, making 1359: programs harder to read, and easier to misread and miswrite. The only 1360: merit of this syntax is that it is easy to implement using the ANS Forth 1361: locals wordset. 1362: 1363: @node Internals 1364: @chapter Internals 1365: 1366: Reading this section is not necessary for programming with gforth. It 1367: should be helpful for finding your way in the gforth sources. 1368: 1369: @section Portability 1370: 1371: One of the main goals of the effort is availability across a wide range 1372: of personal machines. fig-Forth, and, to a lesser extent, F83, achieved 1373: this goal by manually coding the engine in assembly language for several 1374: then-popular processors. This approach is very labor-intensive and the 1375: results are short-lived due to progress in computer architecture. 1376: 1377: Others have avoided this problem by coding in C, e.g., Mitch Bradley 1378: (cforth), Mikael Patel (TILE) and Dirk Zoller (pfe). This approach is 1379: particularly popular for UNIX-based Forths due to the large variety of 1380: architectures of UNIX machines. Unfortunately an implementation in C 1381: does not mix well with the goals of efficiency and with using 1382: traditional techniques: Indirect or direct threading cannot be expressed 1383: in C, and switch threading, the fastest technique available in C, is 1384: significantly slower. Another problem with C is that it's very 1385: cumbersome to express double integer arithmetic. 1386: 1387: Fortunately, there is a portable language that does not have these 1388: limitations: GNU C, the version of C processed by the GNU C compiler 1389: (@pxref{C Extensions, , Extensions to the C Language Family, gcc.info, 1390: GNU C Manual}). Its labels as values feature (@pxref{Labels as Values, , 1391: Labels as Values, gcc.info, GNU C Manual}) makes direct and indirect 1392: threading possible, its @code{long long} type (@pxref{Long Long, , 1393: Double-Word Integers, gcc.info, GNU C Manual}) corresponds to Forths 1394: double numbers. GNU C is available for free on all important (and many 1395: unimportant) UNIX machines, VMS, 80386s running MS-DOS, the Amiga, and 1396: the Atari ST, so a Forth written in GNU C can run on all these 1397: machines@footnote{Due to Apple's look-and-feel lawsuit it is not 1398: available on the Mac (@pxref{Boycott, , Protect Your Freedom--Fight 1399: ``Look And Feel'', gcc.info, GNU C Manual}).}. 1400: 1401: Writing in a portable language has the reputation of producing code that 1402: is slower than assembly. For our Forth engine we repeatedly looked at 1403: the code produced by the compiler and eliminated most compiler-induced 1404: inefficiencies by appropriate changes in the source-code. 1405: 1406: However, register allocation cannot be portably influenced by the 1407: programmer, leading to some inefficiencies on register-starved 1408: machines. We use explicit register declarations (@pxref{Explicit Reg 1409: Vars, , Variables in Specified Registers, gcc.info, GNU C Manual}) to 1410: improve the speed on some machines. They are turned on by using the 1411: @code{gcc} switch @code{-DFORCE_REG}. Unfortunately, this feature not 1412: only depends on the machine, but also on the compiler version: On some 1413: machines some compiler versions produce incorrect code when certain 1414: explicit register declarations are used. 
So by default 1415: @code{-DFORCE_REG} is not used. 1416: 1417: @section Threading 1418: 1419: GNU C's labels as values extension (available since @code{gcc-2.0}, 1420: @pxref{Labels as Values, , Labels as Values, gcc.info, GNU C Manual}) 1421: makes it possible to take the address of @var{label} by writing 1422: @code{&&@var{label}}. This address can then be used in a statement like 1423: @code{goto *@var{address}}. I.e., @code{goto *&&x} is the same as 1424: @code{goto x}. 1425: 1426: With this feature an indirect threaded NEXT looks like: 1427: @example 1428: cfa = *ip++; 1429: ca = *cfa; 1430: goto *ca; 1431: @end example 1432: For those unfamiliar with the names: @code{ip} is the Forth instruction 1433: pointer; the @code{cfa} (code-field address) corresponds to ANS Forths 1434: execution token and points to the code field of the next word to be 1435: executed; The @code{ca} (code address) fetched from there points to some 1436: executable code, e.g., a primitive or the colon definition handler 1437: @code{docol}. 1438: 1439: Direct threading is even simpler: 1440: @example 1441: ca = *ip++; 1442: goto *ca; 1443: @end example 1444: 1445: Of course we have packaged the whole thing neatly in macros called 1446: @code{NEXT} and @code{NEXT1} (the part of NEXT after fetching the cfa). 1447: 1448: @subsection Scheduling 1449: 1450: There is a little complication: Pipelined and superscalar processors, 1451: i.e., RISC and some modern CISC machines can process independent 1452: instructions while waiting for the results of an instruction. The 1453: compiler usually reorders (schedules) the instructions in a way that 1454: achieves good usage of these delay slots. However, on our first tries 1455: the compiler did not do well on scheduling primitives. E.g., for 1456: @code{+} implemented as 1457: @example 1458: n=sp[0]+sp[1]; 1459: sp++; 1460: sp[0]=n; 1461: NEXT; 1462: @end example 1463: the NEXT comes strictly after the other code, i.e., there is nearly no 1464: scheduling. After a little thought the problem becomes clear: The 1465: compiler cannot know that sp and ip point to different addresses (and 1466: the version of @code{gcc} we used would not know it even if it could), 1467: so it could not move the load of the cfa above the store to the 1468: TOS. Indeed the pointers could be the same, if code on or very near the 1469: top of stack were executed. In the interest of speed we chose to forbid 1470: this probably unused ``feature'' and helped the compiler in scheduling: 1471: NEXT is divided into the loading part (@code{NEXT_P1}) and the goto part 1472: (@code{NEXT_P2}). @code{+} now looks like: 1473: @example 1474: n=sp[0]+sp[1]; 1475: sp++; 1476: NEXT_P1; 1477: sp[0]=n; 1478: NEXT_P2; 1479: @end example 1480: This can be scheduled optimally by the compiler (see \sect{TOS}). 1481: 1482: This division can be turned off with the switch @code{-DCISC_NEXT}. This 1483: switch is on by default on machines that do not profit from scheduling 1484: (e.g., the 80386), in order to preserve registers. 1485: 1486: @subsection Direct or Indirect Threaded? 1487: 1488: Both! After packaging the nasty details in macro definitions we 1489: realized that we could switch between direct and indirect threading by 1490: simply setting a compilation flag (@code{-DDIRECT_THREADED}) and 1491: defining a few machine-specific macros for the direct-threading case. 1492: On the Forth level we also offer access words that hide the 1493: differences between the threading methods (@pxref{Threading Words}). 
1494: 1495: Indirect threading is implemented completely 1496: machine-independently. Direct threading needs routines for creating 1497: jumps to the executable code (e.g. to docol or dodoes). These routines 1498: are inherently machine-dependent, but they do not amount to many source 1499: lines. I.e., even porting direct threading to a new machine is a small 1500: effort. 1501: 1502: @subsection DOES> 1503: One of the most complex parts of a Forth engine is @code{dodoes}, i.e., 1504: the chunk of code executed by every word defined by a 1505: @code{CREATE}...@code{DOES>} pair. The main problem here is: How to find 1506: the Forth code to be executed, i.e. the code after the @code{DOES>} (the 1507: DOES-code)? There are two solutions: 1508: 1509: In fig-Forth the code field points directly to the dodoes and the 1510: DOES-code address is stored in the cell after the code address 1511: (i.e. at cfa cell+). It may seem that this solution is illegal in the 1512: Forth-79 and all later standards, because in fig-Forth this address 1513: lies in the body (which is illegal in these standards). However, by 1514: making the code field larger for all words this solution becomes legal 1515: again. We use this approach for the indirect threaded version. Leaving 1516: a cell unused in most words is a bit wasteful, but on the machines we 1517: are targetting this is hardly a problem. The other reason for having a 1518: code field size of two cells is to avoid having different image files 1519: for direct and indirect threaded systems (@pxref{image-format}). 1520: 1521: The other approach is that the code field points or jumps to the cell 1522: after @code{DOES}. In this variant there is a jump to @code{dodoes} at 1523: this address. @code{dodoes} can then get the DOES-code address by 1524: computing the code address, i.e., the address of the jump to dodoes, 1525: and add the length of that jump field. A variant of this is to have a 1526: call to @code{dodoes} after the @code{DOES>}; then the return address 1527: (which can be found in the return register on RISCs) is the DOES-code 1528: address. Since the two cells available in the code field are usually 1529: used up by the jump to the code address in direct threading, we use 1530: this approach for direct threading. We did not want to add another 1531: cell to the code field. 1532: 1533: @section Primitives 1534: 1535: @subsection Automatic Generation 1536: 1537: Since the primitives are implemented in a portable language, there is no 1538: longer any need to minimize the number of primitives. On the contrary, 1539: having many primitives is an advantage: speed. In order to reduce the 1540: number of errors in primitives and to make programming them easier, we 1541: provide a tool, the primitive generator (@file{prims2x.fs}), that 1542: automatically generates most (and sometimes all) of the C code for a 1543: primitive from the stack effect notation. The source for a primitive 1544: has the following form: 1545: 1546: @format 1547: @var{Forth-name} @var{stack-effect} @var{category} [@var{pronounc.}] 1548: [@code{""}@var{glossary entry}@code{""}] 1549: @var{C code} 1550: [@code{:} 1551: @var{Forth code}] 1552: @end format 1553: 1554: The items in brackets are optional. The category and glossary fields 1555: are there for generating the documentation, the Forth code is there 1556: for manual implementations on machines without GNU C. 
E.g., the source 1557: for the primitive @code{+} is: 1558: @example 1559: + n1 n2 -- n core plus 1560: n = n1+n2; 1561: @end example 1562: 1563: This looks like a specification, but in fact @code{n = n1+n2} is C 1564: code. Our primitive generation tool extracts a lot of information from 1565: the stack effect notations@footnote{We use a one-stack notation, even 1566: though we have separate data and floating-point stacks; The separate 1567: notation can be generated easily from the unified notation.}: The number 1568: of items popped from and pushed on the stack, their type, and by what 1569: name they are referred to in the C code. It then generates a C code 1570: prelude and postlude for each primitive. The final C code for @code{+} 1571: looks like this: 1572: 1573: @example 1574: I_plus: /* + ( n1 n2 -- n ) */ /* label, stack effect */ 1575: /* */ /* documentation */ 1576: { 1577: DEF_CA /* definition of variable ca (indirect threading) */ 1578: Cell n1; /* definitions of variables */ 1579: Cell n2; 1580: Cell n; 1581: n1 = (Cell) sp[1]; /* input */ 1582: n2 = (Cell) TOS; 1583: sp += 1; /* stack adjustment */ 1584: NAME("+") /* debugging output (with -DDEBUG) */ 1585: { 1586: n = n1+n2; /* C code taken from the source */ 1587: } 1588: NEXT_P1; /* NEXT part 1 */ 1589: TOS = (Cell)n; /* output */ 1590: NEXT_P2; /* NEXT part 2 */ 1591: } 1592: @end example 1593: 1594: This looks long and inefficient, but the GNU C compiler optimizes quite 1595: well and produces optimal code for @code{+} on, e.g., the R3000 and the 1596: HP RISC machines: Defining the @code{n}s does not produce any code, and 1597: using them as intermediate storage also adds no cost. 1598: 1599: There are also other optimizations, that are not illustrated by this 1600: example: Assignments between simple variables are usually for free (copy 1601: propagation). If one of the stack items is not used by the primitive 1602: (e.g. in @code{drop}), the compiler eliminates the load from the stack 1603: (dead code elimination). On the other hand, there are some things that 1604: the compiler does not do, therefore they are performed by 1605: @file{prims2x.fs}: The compiler does not optimize code away that stores 1606: a stack item to the place where it just came from (e.g., @code{over}). 1607: 1608: While programming a primitive is usually easy, there are a few cases 1609: where the programmer has to take the actions of the generator into 1610: account, most notably @code{?dup}, but also words that do not (always) 1611: fall through to NEXT. 1612: 1613: @subsection TOS Optimization 1614: 1615: An important optimization for stack machine emulators, e.g., Forth 1616: engines, is keeping one or more of the top stack items in 1617: registers. If a word has the stack effect {@var{in1}...@var{inx} @code{--} 1618: @var{out1}...@var{outy}}, keeping the top @var{n} items in registers 1619: @itemize 1620: @item 1621: is better than keeping @var{n-1} items, if @var{x>=n} and @var{y>=n}, 1622: due to fewer loads from and stores to the stack. 1623: @item is slower than keeping @var{n-1} items, if @var{x<>y} and @var{x<n} and 1624: @var{y<n}, due to additional moves between registers. 1625: @end itemize 1626: 1627: In particular, keeping one item in a register is never a disadvantage, 1628: if there are enough registers. Keeping two items in registers is a 1629: disadvantage for frequent words like @code{?branch}, constants, 1630: variables, literals and @code{i}. 
Therefore our generator only produces 1631: code that keeps zero or one items in registers. The generated C code 1632: covers both cases; the selection between these alternatives is made at 1633: C-compile time using the switch @code{-DUSE_TOS}. @code{TOS} in the C 1634: code for @code{+} is just a simple variable name in the one-item case, 1635: otherwise it is a macro that expands into @code{sp[0]}. Note that the 1636: GNU C compiler tries to keep simple variables like @code{TOS} in 1637: registers, and it usually succeeds, if there are enough registers. 1638: 1639: The primitive generator performs the TOS optimization for the 1640: floating-point stack, too (@code{-DUSE_FTOS}). For floating-point 1641: operations the benefit of this optimization is even larger: 1642: floating-point operations take quite long on most processors, but can be 1643: performed in parallel with other operations as long as their results are 1644: not used. If the FP-TOS is kept in a register, this works. If 1645: it is kept on the stack, i.e., in memory, the store into memory has to 1646: wait for the result of the floating-point operation, lengthening the 1647: execution time of the primitive considerably. 1648: 1649: The TOS optimization makes the automatic generation of primitives a 1650: bit more complicated. Just replacing all occurrences of @code{sp[0]} by 1651: @code{TOS} is not sufficient. There are some special cases to 1652: consider: 1653: @itemize 1654: @item In the case of @code{dup ( w -- w w )} the generator must not 1655: eliminate the store to the original location of the item on the stack, 1656: if the TOS optimization is turned on. 1657: @item Primitives with stack effects of the form {@code{--} 1658: @var{out1}...@var{outy}} must store the TOS to the stack at the start. 1659: Likewise, primitives with the stack effect {@var{in1}...@var{inx} @code{--}} 1660: must load the TOS from the stack at the end. But for the null stack 1661: effect @code{--} no stores or loads should be generated. 1662: @end itemize 1663: 1664: @subsection Produced code 1665: 1666: To see what assembly code is produced for the primitives on your machine 1667: with your compiler and your flag settings, type @code{make engine.s} and 1668: look at the resulting file @file{engine.c}. 1669: 1670: @section System Architecture 1671: 1672: Our Forth system consists not only of primitives, but also of 1673: definitions written in Forth. Since the Forth compiler itself belongs 1674: to those definitions, it is not possible to start the system with the 1675: primitives and the Forth source alone. Therefore we provide the Forth 1676: code as an image file in nearly executable form. At the start of the 1677: system a C routine loads the image file into memory, sets up the 1678: memory (stacks etc.) according to information in the image file, and 1679: starts executing Forth code. 1680: 1681: The image file format is a compromise between the goals of making it 1682: easy to generate image files and making them portable. The easiest way 1683: to generate an image file is to just generate a memory dump. However, 1684: this kind of image file cannot be used on a different machine, or on 1685: the next version of the engine on the same machine, it even might not 1686: work with the same engine compiled by a different version of the C 1687: compiler. 
We would like to have as few versions of the image file as 1688: possible, because we do not want to distribute many versions of the 1689: same image file, and to make it easy for the users to use their image 1690: files on many machines. We currently need to create a different image 1691: file for machines with different cell sizes and different byte order 1692: (little- or big-endian)@footnote{We consider adding information to the 1693: image file that enables the loader to change the byte order.}. 1694: 1695: Forth code that is going to end up in a portable image file has to 1696: comply to some restrictions: addresses have to be stored in memory 1697: with special words (@code{A!}, @code{A,}, etc.) in order to make the 1698: code relocatable. Cells, floats, etc., have to be stored at the 1699: natural alignment boundaries@footnote{E.g., store floats (8 bytes) at 1700: an address dividable by~8. This happens automatically in our system 1701: when you use the ANSI alignment words.}, in order to avoid alignment 1702: faults on machines with stricter alignment. The image file is produced 1703: by a metacompiler (@file{cross.fs}). 1704: 1705: So, unlike the image file of Mitch Bradleys @code{cforth}, our image 1706: file is not directly executable, but has to undergo some manipulations 1707: during loading. Address relocation is performed at image load-time, not 1708: at run-time. The loader also has to replace tokens standing for 1709: primitive calls with the appropriate code-field addresses (or code 1710: addresses in the case of direct threading). 1711: 1712: @contents 1713: @bye 1714:
http://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/Attic/gforth.ds?rev=1.3;content-type=text%2Fx-cvsweb-markup;sortby=log;f=h;only_with_tag=MAIN;ln=1
Convex hull trick/acquire.cpp
From PEGWiki
/* ID: brian_bi21 PROG: acquire LANG: C++ */
/* 6th line from the end initially : if (i<N) now : if (i < N-1) by : pktiw */
#include <cstdio>    // needed for freopen/scanf/printf
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int pointer; //Keeps track of the best line from previous query
vector<long long> M; //Holds the slopes of the lines in the envelope
vector<long long> B; //Holds the y-intercepts of the lines in the envelope

//Returns true if either line l1 or line l3 is always better than line l2
bool bad(int l1, int l2, int l3)
{
    /* intersection(l1,l2) has x-coordinate (b1-b2)/(m2-m1)
       intersection(l1,l3) has x-coordinate (b1-b3)/(m3-m1)
       set the former greater than the latter, and cross-multiply to
       eliminate division */
    return (B[l3]-B[l1])*(M[l1]-M[l2]) < (B[l2]-B[l1])*(M[l1]-M[l3]);
}

//Adds a new line (with lowest slope) to the structure
void add(long long m, long long b)
{
    //First, let's add it to the end
    M.push_back(m);
    B.push_back(b);
    //If the penultimate is now made irrelevant between the antepenultimate
    //and the ultimate, remove it. Repeat as many times as necessary
    while (M.size() >= 3 && bad(M.size()-3, M.size()-2, M.size()-1))
    {
        M.erase(M.end()-2);
        B.erase(B.end()-2);
    }
}

//Returns the minimum y-coordinate of any intersection between a given vertical
//line and the lower envelope
long long query(long long x)
{
    //If we removed what was the best line for the previous query, then the
    //newly inserted line is now the best for that query
    if (pointer >= M.size()) pointer = M.size()-1;
    //Any better line must be to the right, since query values are
    //non-decreasing
    while (pointer < M.size()-1 &&
           M[pointer+1]*x + B[pointer+1] < M[pointer]*x + B[pointer]) pointer++;
    return M[pointer]*x + B[pointer];
}

int main()
{
    int M, N, i;
    pair<int,int> a[50000];
    pair<int,int> rect[50000];
    freopen("acquire.in", "r", stdin);
    freopen("acquire.out", "w", stdout);
    scanf("%d", &M);
    for (i = 0; i < M; i++)
        scanf("%d %d", &a[i].first, &a[i].second);
    //Sort first by height and then by width (arbitrary labels)
    sort(a, a+M);
    for (i = 0, N = 0; i < M; i++)
    {
        /* When we add a higher rectangle, any rectangles that are also equally
           thin or thinner become irrelevant, as they are completely contained
           within the higher one; remove as many as necessary */
        while (N > 0 && rect[N-1].second <= a[i].second) N--;
        rect[N++] = a[i]; //add the new rectangle
    }
    long long cost;
    add(rect[0].second, 0);
    //initially, the best line could be any of the lines in the envelope,
    //that is, any line with index 0 or greater, so set pointer=0
    pointer = 0;
    for (i = 0; i < N; i++) //discussed in article
    {
        cost = query(rect[i].first);
        if (i < N-1) add(rect[i+1].second, cost);
    }
    printf("%lld\n", cost);
    return 0;
}
|
https://wcipeg.com/wiki/Convex_hull_trick/acquire.cpp
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
TensorBoard can be used directly within notebook experiences such as Colab and Jupyter. This can be helpful for sharing results, integrating TensorBoard into existing workflows, and using TensorBoard without installing anything locally.
Setup
Start by installing TF 2.0 and loading the TensorBoard notebook extension:
For Jupyter users: If you've installed Jupyter and TensorBoard into the same virtualenv, then you should be good to go. If you're using a more complicated setup, like a global Jupyter installation and kernels for different Conda/virtualenv environments, then you must ensure that the tensorboard binary is on your PATH inside the Jupyter notebook context. One way to do this is to modify the kernel_spec to prepend the environment's bin directory to PATH, as described here.
If you are running a Docker image of the Jupyter Notebook server using TensorFlow's nightly build, you need to expose not only the notebook's port but also TensorBoard's port.
Thus, run the container with the following command:
docker run -it -p 8888:8888 -p 6006:6006 \
    tensorflow/tensorflow:nightly-py3-jupyter
where 6006 is the default port of TensorBoard. This will allocate a port for you to run one TensorBoard instance. To have concurrent instances, it is necessary to allocate more ports.
# Load the TensorBoard notebook extension
%load_ext tensorboard
TensorFlow 2.x selected.
Import TensorFlow, datetime, and os:
import tensorflow as tf
import datetime, os
TensorBoard in notebooks
Download the FashionMNIST dataset and scale it:
fashion_mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
Create a very simple model:
def create_model():
    return tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
Train the model using Keras and the TensorBoard callback:
def train_model():
    model = create_model()
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    # log to a timestamped subdirectory and attach the TensorBoard callback
    logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
    tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)

    model.fit(x=x_train,
              y=y_train,
              epochs=5,
              validation_data=(x_test, y_test),
              callbacks=[tensorboard_callback])

train_model()
Train on 60000 samples, validate on 10000 samples Epoch 1/5 60000/60000 [==============================] - 11s 182us/sample - loss: 0.4976 - accuracy: 0.8204 - val_loss: 0.4143 - val_accuracy: 0.8538 Epoch 2/5 60000/60000 [==============================] - 10s 174us/sample - loss: 0.3845 - accuracy: 0.8588 - val_loss: 0.3855 - val_accuracy: 0.8626 Epoch 3/5 60000/60000 [==============================] - 10s 175us/sample - loss: 0.3513 - accuracy: 0.8705 - val_loss: 0.3740 - val_accuracy: 0.8607 Epoch 4/5 60000/60000 [==============================] - 11s 177us/sample - loss: 0.3287 - accuracy: 0.8793 - val_loss: 0.3596 - val_accuracy: 0.8719 Epoch 5/5 60000/60000 [==============================] - 11s 178us/sample - loss: 0.3153 - accuracy: 0.8825 - val_loss: 0.3360 - val_accuracy: 0.8782
Start TensorBoard within the notebook using magics:
%tensorboard --logdir logs
You can now view dashboards such as scalars, graphs, histograms, and others. Some dashboards are not available yet in Colab (such as the profile plugin).
The %tensorboard magic has exactly the same format as the TensorBoard command line invocation, but with a %-sign in front of it.
You can also start TensorBoard before training to monitor it in progress:
%tensorboard --logdir logs
The same TensorBoard backend is reused by issuing the same command. If a different logs directory was chosen, a new instance of TensorBoard would be opened. Ports are managed automatically.
Start training a new model and watch TensorBoard update automatically every 30 seconds or refresh it with the button on the top right:
train_model()
Train on 60000 samples, validate on 10000 samples Epoch 1/5 60000/60000 [==============================] - 11s 184us/sample - loss: 0.4968 - accuracy: 0.8223 - val_loss: 0.4216 - val_accuracy: 0.8481 Epoch 2/5 60000/60000 [==============================] - 11s 176us/sample - loss: 0.3847 - accuracy: 0.8587 - val_loss: 0.4056 - val_accuracy: 0.8545 Epoch 3/5 60000/60000 [==============================] - 11s 176us/sample - loss: 0.3495 - accuracy: 0.8727 - val_loss: 0.3600 - val_accuracy: 0.8700 Epoch 4/5 60000/60000 [==============================] - 11s 179us/sample - loss: 0.3282 - accuracy: 0.8795 - val_loss: 0.3636 - val_accuracy: 0.8694 Epoch 5/5 60000/60000 [==============================] - 11s 176us/sample - loss: 0.3115 - accuracy: 0.8839 - val_loss: 0.3438 - val_accuracy: 0.8764
You can use the tensorboard.notebook APIs for a bit more control:
from tensorboard import notebook
notebook.list() # View open TensorBoard instances
Known TensorBoard instances: - port 6006: logdir logs (started 0:00:54 ago; pid 265)
# Control TensorBoard display. If no port is provided,
# the most recently launched TensorBoard is used
notebook.display(port=6006, height=1000)
|
https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
10.1 WEB UI
Hadoop provides a web interface from which you can administer the entire Hadoop ecosystem and effectively monitor task execution. Here is an example of such a web link:
The URL, including the port number, can vary depending on your cluster setup and server settings. The above link is from a single standalone cluster, hence the server name is "localhost". Hadoop publishes web interfaces that display the status of the cluster; the master node of the cluster hosts them. They can be viewed by using SSH to create a tunnel to the master node and then configuring a SOCKS proxy so that your browser can reach the websites hosted on the master node through the SSH tunnel.
10.2 Administer Map Reduce
Administering MapReduce involves administering the entire map and reduce cycle. There are various configuration options that we can set; they are controlled through the mapred-site.xml file, which holds the MapReduce tuning configuration for cluster deployments. Some commonly tuned properties are listed below (a short sketch of the file follows the list).
- mapred.reduce.parallel.copies – It is the maximum number of parallel copies the reduce step will run to fetch output from many parallel jobs. Its default value is 5
- mapred.map.child.java.opts – This is for passing Java options into the map JVM. Its default value is -Xmx200M
- mapred.reduce.child.java.opts – This is for passing Java options into the reduce JVM. Its default value is -Xmx200M
- io.sort.mb – The memory limit while sorting data in MBs. Its default value is 200
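As a rough illustration only, here is a minimal sketch of what such a mapred-site.xml might look like and how it could be read programmatically. The property names and default values are the ones listed above; the embedded file content and the parsing helper are my own illustration, not part of the original text.

import xml.etree.ElementTree as ET

# A minimal mapred-site.xml sketch using the tuning properties listed above
# (the values are just the defaults quoted in the text; adjust for your cluster).
SAMPLE_MAPRED_SITE = """<?xml version="1.0"?>
<configuration>
  <property><name>mapred.reduce.parallel.copies</name><value>5</value></property>
  <property><name>mapred.map.child.java.opts</name><value>-Xmx200M</value></property>
  <property><name>mapred.reduce.child.java.opts</name><value>-Xmx200M</value></property>
  <property><name>io.sort.mb</name><value>200</value></property>
</configuration>
"""

def read_properties(xml_text):
    """Return a dict of {property name: value} from a mapred-site.xml document."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value") for p in root.findall("property")}

for name, value in read_properties(SAMPLE_MAPRED_SITE).items():
    print(name, "=", value)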
10.3 Administer Name Node
The basic function of the NameNode is to perform file management over the distributed DataNodes. Hadoop provides a utility, fsck, for checking the health of files in HDFS. While administering the NameNode, Hadoop looks for blocks that are missing from all DataNodes, as well as under- or over-replicated blocks. Let us see an example where we check the whole filesystem of a small cluster using fsck:
% hadoop fsck /
Status: <<will show the status>>
Total size: <<size in byte>> B
Total dirs: <<number of directories>>
Total files: <<total number of files>>
Total blocks (validated):<<number of blocks>>
Minimally replicated blocks: <<count of minimally replicated blocks>>
Over-replicated blocks: <<count of over replicated blocks>>
Under-replicated blocks: <<count of under replicated blocks>>
Mis-replicated blocks: <<percentage of mis replicated blocks>>
Default replication factor: <<value for default replication factor>>
Average block replication: <<value for average block replication>>
Corrupt blocks: <<number of corrupt blocks>>
Missing replicas: <<percentage in missing replicas>>
Number of data-nodes: <<total number of data nodes>>
Number of racks: <<count of racks>>
fsck recursively checks the filesystem namespace. It starts from the root path and prints a dot for every file it checks. To check a file, fsck collects the metadata for the file's blocks and looks for inconsistencies or problems. All of this information is retrieved from the NameNode; fsck does not need to contact the DataNodes to obtain block information. Here are the block conditions that fsck reports:
Over Replicated Block
A block that exceeds the target replication of the file it belongs to is known as an over-replicated block. HDFS automatically deletes the excess replicas.
Under Replicated Block
A block that falls below the target replication of the file it belongs to is known as an under-replicated block. HDFS automatically creates new replicas of under-replicated blocks until they meet the target replication. To get information about the blocks being replicated, use the command 'hadoop dfsadmin -metasave'.
Misreplicated Block
A block that does not satisfy the block replica placement policy is known as a mis-replicated block. For example, with a replication factor of three in a multi-rack cluster, if all three replicas of a block are on the same rack, the block is mis-replicated, since the replicas should be spread across at least two racks. HDFS will automatically re-replicate mis-replicated blocks to meet the rack placement policy.
Corrupt Block
These are blocks all of whose replicas are corrupt. A block with at least one non-corrupt replica is not reported as corrupt; the NameNode replicates the non-corrupt replica until the target replication is met.
Missing Replica
These are blocks that have no replicas anywhere in the cluster. Corrupt or missing blocks mean data has been lost, which is a serious concern. By default, fsck leaves files with missing or corrupt blocks alone, but you can perform one of the following actions (see the sketch after this list):
- Use the -move option to move the affected files to the /lost+found directory in HDFS
- Use the -delete option to delete the affected files; note that the files cannot be recovered after being deleted
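As a sketch only, the two actions above correspond to the -move and -delete flags of fsck. The snippet below assumes the hadoop command is on the PATH; the paths in the commented-out calls are hypothetical.

import subprocess

def fsck(path="/", action=None):
    """Run 'hadoop fsck' on a path; action may be '-move', '-delete', or None for a report only."""
    cmd = ["hadoop", "fsck", path]
    if action:
        cmd.append(action)
    # capture the report so it can be inspected or logged
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout

print(fsck("/"))                 # health report for the whole filesystem
# fsck("/user/data", "-move")    # move affected files to /lost+found
# fsck("/user/data", "-delete")  # delete affected files (they cannot be recovered)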
10.4 Administer Data Node
Administering DataNodes involves managing the entire set of commodity computers. Every DataNode runs a block scanner that periodically verifies all the blocks stored on that DataNode. This ensures that bad blocks are detected and fixed before they are read by clients. The DataBlockScanner maintains a list of blocks to verify and scans them one after another for checksum errors. Blocks are verified every three weeks to guard against disk errors over time; this period is controlled by the 'dfs.datanode.scan.period.hours' property, whose default value is 504 hours. Corrupt blocks are always reported to the NameNode so that it can fix them. If you want the verification report for a DataNode, you can visit that DataNode's web interface. Find below an example of the report:
Over time the distribution of blocks across DataNodes can become unbalanced. An unbalanced cluster can hurt locality for MapReduce and puts greater pressure on the highly utilized DataNodes. Hadoop has a balancer program, a Hadoop daemon, which redistributes blocks by transferring them from over-utilized DataNodes to under-utilized DataNodes. This process continues until the cluster is considered balanced.
10.5 Administer Job Tracker
The JobTracker daemon is the link between your application and the Hadoop system. Administering the JobTracker means managing the process by which the JobTracker oversees the work of the TaskTrackers. Here are some functions of the JobTracker:
- After we submit code to the Hadoop cluster, the JobTracker determines the execution plan by determining which files need to be processed. It assigns nodes to different tasks and monitors all running tasks
- If a task fails, the JobTracker relaunches the task automatically, most likely on a different node
- JobTracker is a Master process and there is only one JobTracker daemon per Hadoop cluster
- Once a client asks the JobTracker to initiate the data processing job, the JobTracker divides the work and assigns different map reduce tasks to each TaskTracker in the system
10.6 Administer Task Tracker
Administering TaskTrackers involves managing the tasks being executed on every DataNode. The TaskTracker manages the execution of individual tasks on each DataNode. Here are some of its functions:
- Every TaskTracker is responsible for running the individual tasks assigned by JobTracker
- Although there is a single TaskTracker per DataNode, each TaskTracker can spawn multiple JVMs to handle many map or reduce tasks simultaneously
- TaskTracker continuously communicates with the JobTracker
- When the JobTracker does not receive a heartbeat message from a TaskTracker, it assumes the TaskTracker has crashed and resubmits the failed tasks to other DataNodes in the cluster
10.7 Remove Node
The Hadoop system is designed to handle failure without loss of data. With a replication factor of three, each block of data is spread across three different DataNodes (either on the same rack or on different racks). Data would only be lost if you simultaneously shut down all of the DataNodes holding a block's replicas, so simply powering nodes off is not the way to shut down or remove DataNodes. The way to decommission DataNodes is to first inform the NameNode of the DataNodes that you want to remove from the cluster; the NameNode then replicates the blocks stored on the DataNodes being removed. If you shut down a TaskTracker that is executing tasks, the JobTracker will note the failure and reschedule the tasks on another TaskTracker. The entire decommissioning process is controlled by an exclude file (a small sketch follows the list below), whose details are as follows:
- It is set in dfs.hosts.exclude property for HDFS
- It is set in mapred.hosts.exclude property for MapReduce
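For illustration, and assuming the exclude file lives at a path such as /etc/hadoop/conf/excludes (the path and hostnames here are hypothetical), the decommissioning steps could be scripted roughly like this, using the refresh commands shown later in this chapter:

import subprocess

# Hostnames of the DataNodes/TaskTrackers to decommission (hypothetical)
nodes_to_remove = ["datanode7.example.com", "datanode9.example.com"]

# Write them to the file referenced by dfs.hosts.exclude / mapred.hosts.exclude
with open("/etc/hadoop/conf/excludes", "w") as f:
    f.write("\n".join(nodes_to_remove) + "\n")

# Ask the NameNode and JobTracker to re-read their node lists
subprocess.run(["hadoop", "dfsadmin", "-refreshNodes"], check=True)
subprocess.run(["hadoop", "mradmin", "-refreshNodes"], check=True)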
10.8 Assign Node
To grow a Hadoop cluster we may need to add DataNodes to it; this is the DataNode commissioning process. Although adding a DataNode to a Hadoop cluster is a simple step, it carries a security risk. Allowing an arbitrary machine to connect to the NameNode as a DataNode is risky: the machine may not be authorized, you have no control over it, and it may stop working at any time, which could lead to unexpected data loss. The DataNodes that are allowed to connect to the NameNode are therefore specified in a file whose name is given by the 'dfs.hosts' property. This file lives on the NameNode's local filesystem and has an entry for each DataNode. In the same way, the TaskTrackers that may connect to the JobTracker are specified in a file whose name is given by the 'mapred.hosts' property.
Here is the process to add a new node to the Hadoop cluster:
- First, add the network addresses of the new machines to the include file
- Update the NameNode with the new set of permitted DataNodes using the command
% hadoop dfsadmin -refreshNodes
- Update the JobTracker with the new set of permitted TaskTrackers using the command
% hadoop mradmin -refreshNodes
- Update the slaves file with the new DataNodes so that they are included in future operations
- Start the newly added DataNode and TaskTracker
- Make sure that the newly added DataNode and TaskTracker appear in the web UI
10.9 Scheduling and Debugging Job
Scheduling
Job scheduling means a job has to wait until its turn comes for execution. In a shared cluster environment many resources are shared among various users, which calls for a good scheduler: production jobs should finish in a timely manner, while users who make smaller ad hoc queries should still get results back in a reasonable time.
We can set a job's priority for execution using the mapred.job.priority property or the setJobPriority() method (see the sketch after the list below). The values it takes are:
- VERY_HIGH
- HIGH
- NORMAL
- LOW
- VERY_LOW
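For illustration, and assuming a classic MRv1 cluster where the 'hadoop job' command is available, the priority of a running job could also be changed from a small script; the job ID below is hypothetical.

import subprocess

VALID_PRIORITIES = {"VERY_HIGH", "HIGH", "NORMAL", "LOW", "VERY_LOW"}

def set_job_priority(job_id, priority):
    """Change the priority of a running MapReduce job via 'hadoop job -set-priority'."""
    if priority not in VALID_PRIORITIES:
        raise ValueError("priority must be one of %s" % sorted(VALID_PRIORITIES))
    subprocess.run(["hadoop", "job", "-set-priority", job_id, priority], check=True)

set_job_priority("job_201801011234_0001", "HIGH")  # hypothetical job id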
We have a choice of schedulers for MapReduce in Hadoop:
- Fair Scheduler
The purpose of Fair Scheduler is to give every user a fair share of the available cluster capacity over the period of time.
- If a single job is running, it gets all of the cluster
- With multiple jobs submitted, free task slots are given to the jobs in such a way that each user gets a fair share of the cluster.
- Capacity Scheduler
In the Capacity Scheduler, the cluster is made up of a number of queues, which may be hierarchical (a queue may be the child of another queue), and each queue has an allocated capacity. Within each queue, jobs are scheduled using first-in-first-out scheduling with priorities.
Debugging
Debugging can be done in various ways and in various parts of the Hadoop ecosystem. If we focus on logs for analyzing the system, the following types of log are available:
System Daemon Logs
Each Hadoop daemon produces a logfile using log4j, written in the directory defined by the HADOOP_LOG_DIR environment variable. These logs are used by administrators.
HDFS Audit Logs
A log of all HDFS requests, written to the NameNode's log; it is configurable. These logs are used by administrators.
MapReduce Job History Logs
A log of the events that take place while a job runs, saved centrally on the JobTracker. These logs are used by users.
MapReduce Task Logs
Each TaskTracker child process produces a log4j logfile called syslog, written in the userlogs subdirectory. These logs are used by users.
|
http://www.wideskills.com/hadoop/hadoop-administration
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Yes to the part about placing the initialization in a static block.
However, we still need a non-static method for getParameterName so that we
can put it in the Skeleton interface.
(An alternative is to generate the code as you suggest and change the
Skeleton interface to be just a "marker".
The assume that anything implementing the marker interface has a method
named getParameterName.)
Russell, what do you think. Russell, would you like to make this/16/2002 08:55
AM
Please respond to
axis-dev
Concerning Skeleton generation, wouldn't it be better if the generated code
was :
"
package thepackage;
public class MyTestSoapBindingSkeleton implements thepackage.MyTest,
org.apache.axis.wsdl.Skeleton {
private thepackage.MyTest impl;
private static org.apache.axis.wsdl.SkeletonImpl skel;
static {
skel = new org.apache.axis.wsdl.SkeletonImpl();
skel.add("eCHO",
new String[] {
"return",
"in0",
});
}
public MyTestSoapBindingSkeleton() {
this.impl = new localhost.MyTestSoapBindingImpl();
}
public MyTestSoapBindingSkeleton(thepackage.MyTest impl) {
this.impl = impl;
}
public static String getParameterName(String opName, int i) {
return skel.getParameterName(opName, i);
}
[...]
"
?
- No more Init method necessary (replaced by a static block)
- only one getParameterName function.
I don't understand why 2 getParameterName functions are necessary as only
one (static) should be enough.
Cédric Chabanois
> I just made changes to Java2WSDL to use getParameterNames()
> to generate the
> wsdl part names. If it can't use
> getParameterNames(), it defaults to using the debug
> information as before.
>
> As far as the runtime is concerned....
>
> The skeleton code was necessary to glue to output of the axis server
> runtime to the implementation code.
> For example, the skeleton was responsible for constructing
> Holder objects
> and passing back the output parameters
> in an implementation defined manner.
>
> I simplified the skeleton code by moving most of this work
> directly into
> the runtime. The only piece that was remaining
> was actually obtaining the output parameter names to pass
> back over the
> wire. The getParameterNames() method provides this piece.
> We could change the runtime to use the debug information just like
> Java2WSDL...but I think its wrong to have the runtime have
> behavior that
> is dependent on debug/14/2002 10:25
> AM
>
> Please respond to
>
> axis-dev
>
>
>
>
>
>
>
>
>
> Thanks,
>
> I understand better now !
>
> I still have questions concerning this point :
> - if implementation class was compiled with -g, we know the
> names of the
> output parameters, don't we ?
> So skeletton should not be necessary
>
> - if implementation class was not compiled with -g,
> getParametersNames is
> used to get the names of the output parameters.
> But the client (written in Delphi, MS.NET ...) has been
> created using the
> wsdl, so it may be waiting for inout# parameters as the wsdl
> does not have
> correct parameter names (because implementation class not
> compiled with -g
> ...)
>
> So I think that if skeleton is present, getParameterNames
> should be used to
> create the wsdl parameter names.
>
> Cédric
>
>
> > The getParameterNames(...) method is used by the AXIS runtime
> > so that it
> > can get the names of the output parameters to pass back. This
> > functionality used to be embedded in the skeleton methods.
> >
> > You are pointing out that the getParameterNames(...) method
> > could also be
> > used by Java2WSDL to get the parameter names for the wsdl file.
> > You are correct, but I have not added this code yet.
> >
> > Rich Scheuerle
> > XML & Web Services Development
> > 512-838-5115 (IBM TL 678-5115)
> >
> >
> >
> >
> > Cédric Chabanois
> >
> > <CChabanois@cogni To:
> > axis-dev@xml.apache.org
> >
> > case.fr> cc:
> >
> > Subject: Re :
> > Lightweight Skeletons
> > 01/14/2002 09:36
> >
> > AM
> >
> > Please respond to
> >
> > axis-dev
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > I)
> >
> >
> >
> >
>
>
>
|
http://mail-archives.apache.org/mod_mbox/axis-java-dev/200201.mbox/%3COF818AB028.A6A6DF84-ON85256B43.005B38C7@raleigh.ibm.com%3E
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Source: Mantua, 2000
The essay below has been part of a back and forth email exchange for about a week. Bill has done some yeoman's work here at coaxing some new information from existing data. Both HadCRUT and GISS data were used for the comparisons to a doubling of CO2, and what I find most interesting is that both Hadley and GISS data come out higher for a doubling of CO2 than NCDC data, implying that the adjustments to data used in GISS and HadCRUT add something that really isn't there.
The logarithmic plots of CO2 doubling help demonstrate why CO2 won't cause a runaway greenhouse effect, due to diminished IR returns as CO2 concentrations (in ppm) increase. This is something many people don't get to see visualized.
One of the other interesting items in the essay is about the El Nino event in 1878. Bill writes:
Clearly the oceans ruled the climate, and it appears they still do.
Let’s all give this a good examination, point out weaknesses, and give encouragement for Bill’s work. This is a must read. – Anthony
Adjusting Temperatures for the ENSO and the AMO
A guest post by: Bill Illis.
I will walk you through how this method was developed since it will help with understanding some of its components.
Let’s first look at the Nino 3.4 region anomaly going back to 1871 as developed by Trenberth (actually this index is smoothed but it is the least smoothed data available).
– The 1997-98 El Nino produced similar results and still holds the record for the highest monthly temperature of +0.749C in Feb, 1998.
– There is a lag of about 3 months in the impact of ENSO on temperatures. Sometimes it is only 2 months, sometimes 4 months and this reconstruction uses the 3 month lag.
– Going back to 1871, there is no real trend in the Nino 3.4 anomaly which indicates it is a natural climate cycle and is not related to global warming in the sense that more El Ninos are occurring as a result of warming. This point becomes important because we need to separate the natural variation in the climate from the global warming influence.
The AMO anomaly has longer cycles than the ENSO.
– While the Nino 3.4 region can spike up to +3.4C, the AMO index rarely gets above +0.6C anomaly.
– The long cycles of the AMO matches the major climate shifts which have occurred over the last 130 years. The downswing in temperatures from 1890 to 1915, the upswing in temps from 1915 to 1945, the decline from 1946 to 1975 and the upswing in temps from 1975 to 2005.
– The AMO also has spikes during the major El Nino events of 1877-78 and 1997-98 and other spikes at different times.
– It is apparent that the major increase in temperatures during the 1997-98 El Nino was also caused by the AMO anomaly. I think this has lead some to believe the impact of ENSO is bigger than it really is and has caused people to focus too much on the ENSO.
– There is some autocorrelation between the ENSO and the AMO given these simultaneous spikes but the longer cycles of the AMO versus the short sharp swings in the ENSO means they are relatively independent.
– As well, the AMO appears to be a natural climate cycle unrelated to global warming.
When these two ocean indices are regressed against the monthly temperature record, we have a very good match.
– The F-statistic for this regression at 222.5 means it passes a 99.9% confidence interval.
But there is a divergence between the actual temperature record and the regression model based solely on the Nino and the AMO. This is the real global warming signal.
The global warming signal (which also includes error, UHI, poor siting and adjustments in the temperature record, as demonstrated by Anthony Watts) can now be modeled against the rise in CO2 over the period.
– Warming occurs in a logarithmic relationship to CO2 and, consequently, any model of warming should be done on the natural log of CO2.
– CO2 in this case is just a proxy for all the GHGs but since it is the biggest one and nitrous oxide is rising at the same rate, it can be used as the basis for the warming model.
This regression produces a global warming signal which is about half of that predicted by the global warming models. The F statistic at 4,308 passes a 99.9% confidence interval.
– Using the HadCRUT3 temperature series, warming works out to only 1.85C per doubling of CO2.
– The GISS reconstruction also produces 1.85C per doubling while the NCDC temperature record only produces 1.6C per doubling.
– Global warming theorists are now explaining the lack of warming to date is due to the deep oceans absorbing some of the increase (not the surface since this is already included in the temperature data). This means the global warming model prediction line should be pushed out 35 years, or 75 years or even 100s of years.
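To make the two-step procedure described above concrete, here is a minimal sketch of the kind of fit involved. It assumes you have already loaded monthly series for the temperature anomaly, the Nino 3.4 anomaly, the AMO anomaly and CO2 (for example from the spreadsheets linked below); the 3-month ENSO lag and the ln(CO2) form come from the text, while the function and variable names are simply illustrative.

import numpy as np

def fit_enso_amo_co2(temp, nino34, amo, co2, lag_months=3, co2_base=280.0):
    """Step 1: regress temperature on lagged Nino 3.4 and the AMO.
    Step 2: regress the residual on ln(CO2) to estimate warming per doubling."""
    temp, nino34, amo, co2 = map(np.asarray, (temp, nino34, amo, co2))

    # Step 1: pair this month's temperature with the Nino 3.4 value 3 months earlier
    nino_lagged = nino34[:-lag_months]
    t = temp[lag_months:]
    a = amo[lag_months:]
    X1 = np.column_stack([np.ones_like(t), nino_lagged, a])
    beta1, *_ = np.linalg.lstsq(X1, t, rcond=None)
    residual = t - X1 @ beta1            # the part ENSO and the AMO cannot explain

    # Step 2: regress the residual on ln(CO2 / baseline)
    lnco2 = np.log(co2[lag_months:] / co2_base)
    X2 = np.column_stack([np.ones_like(lnco2), lnco2])
    beta2, *_ = np.linalg.lstsq(X2, residual, rcond=None)

    per_doubling = beta2[1] * np.log(2.0)  # deg C per doubling of CO2
    return beta1, beta2, per_doubling

The actual analysis was done in a spreadsheet, so this only shows the shape of the computation, not the exact coefficients or F-statistics quoted above.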
Here is a depiction of how logarithmic warming works. I’ve included these log charts because it is fundamental to how to regress for CO2 and it is a view of global warming which I believe many have not seen before.
The formula for the global warming models has been constructed by myself (I’m not even sure the modelers have this perspective on the issue) but it is the only formula which goes through the temperature figures at the start of the record (285 ppm or 280 ppm) and the 3.25C increase in temperatures for a doubling of CO2. It is curious that the global warming models are also based on CO2 or GHGs being responsible for nearly all of the 33C greenhouse effect through its impact on water vapour as well.
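Written out explicitly, the logarithmic form being discussed is (my own notation; the 280 ppm baseline, the 3.25C-per-doubling model sensitivity and the roughly 1.85C fitted sensitivity are the figures quoted above):

\[
\Delta T(C) = S \,\frac{\ln(C/C_0)}{\ln 2}, \qquad
C_0 \approx 280\ \text{ppm}, \qquad
S_{\text{models}} \approx 3.25\,^{\circ}\mathrm{C}, \qquad
S_{\text{fit}} \approx 1.85\,^{\circ}\mathrm{C}.
\]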
The divergence, however, is going to be harder to explain in just a few years since the ENSO and AMO-adjusted warming observations are tracking farther and farther away from the global warming model’s track. As the RSS satellite log warming chart will show later, temperatures have in fact moved even farther away from the models since 1979.
The global warming models formula produces temperatures which would be +10C in geologic time periods when CO2 was 3,000 ppm, for example, while this model’s log warming would result in temperatures about +5C at 3,000 ppm. This is much closer to the estimated temperature history of the planet.
This method is not perfect. The overall reconstruction produces a resulting error which is higher than one would want. The error term is roughly +/-0.2C but it does appear to be strictly white noise. It would be better if the resulting error were less than +/-0.2C, but it appears this is unavoidable in something as complicated as the climate and with the measurement errors which exist for temperature, the ENSO and the AMO.
This is the error for the reconstruction of GISS monthly data going back to 1880.
There does not appear to be a signal remaining in the errors for another natural climate variable to impact the reconstruction. In reviewing this model, I have also reviewed the impact of the major volcanoes. All of them appear to have been caught by the ENSO and AMO indices which I imagine are influenced by volcanoes. There appears to be some room to look at a solar influence but this would be quite small. Everyone is welcome to improve on this reconstruction method by examining other variables, other indices.
Overall, this reconstruction produces an r^2 of 0.783 which is pretty good for a monthly climate model based on just three simple variables. Here is the scatterplot of the HadCRUT3 reconstruction.
This method works for all the major monthly temperature series I have tried it on.
Here is the model for the RSS satellite-based temperature series.
Since 1979, warming appears to be slowing down (after it is adjusted for the ENSO and the AMO influence.)
The model produces warming for the RSS data of just 0.046C per decade which would also imply an increase in temperature of just 0.7C for a doubling of CO2 (and there is only 0.4C more to go to that doubling level.)
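As a back-of-the-envelope check of that conversion (my own arithmetic, assuming CO2 rising by roughly 2 ppm per year from about 385 ppm, which is an assumption not stated in the post):

import math

trend_per_decade = 0.046          # deg C per decade, from the RSS fit quoted above
c_now = 385.0                     # ppm (assumed starting level)
c_next_decade = c_now + 10 * 2.0  # ppm, assuming ~2 ppm/yr growth

doublings_per_decade = math.log(c_next_decade / c_now) / math.log(2.0)
sensitivity = trend_per_decade / doublings_per_decade
print(round(sensitivity, 2), "C per doubling")  # roughly 0.6-0.7 C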
Looking at how far off this warming trend is from the models can be seen in this zoom-in of the log warming chart. If you apply the same method to GISS data since 1979, it is in the same circle as the satellite observations so the different agencies do not produce much different results.
There may be some explanations for this even wider divergence since 1979.
– The regression coefficient for the AMO increases from about 0.51 in the reconstructions starting in 1880 to about 0.75 when the reconstruction starts in 1979. This is not an expected result in regression modelling.
– Since the AMO was cycling upward since 1975, the increased coefficient might just be catching a ride with that increasing trend.
– I believe a regression is a regression and we should just accept this coefficient. The F statistic for this model is 267 which would pass a 99.9% confidence interval.
– On the other hand, the warming for RSS is really at the very lowest possible end for temperatures which might be expected from increased GHGs. I would not use a formula which is lower than this for example.
– The other explanation would be that the adjustments of old temperature records by GISS and the Hadley Centre and others have artificially increased the temperature trend prior to 1979 when the satellites became available to keep them honest. The post-1979 warming formulae (not just RSS but all of them) indicate old records might have been increased by 0.3C above where they really were.
– I think these explanations are both partially correct.
This temperature reconstruction method works for all of the major temperature series over any time period chosen and for the smaller zonal components as well. There is a really nice fit to the RSS Tropics zone, for example, where the Nino coefficient increases to 0.21 as would be expected.
Unfortunately, the method does not work for smaller regional temperature series such as the US lower 48 and the Arctic where there is too much variation to produce a reasonable result.
I have included my spreadsheets which have been set up so that anyone can use them. All of the data for HadCRUT3, GISS, UAH, RSS and NCDC is included if you want to try out other series. All of the base data on a monthly basis including CO2 back to 1850, the AMO back to 1856 and the Nino 3.4 region going back to 1871 is included in the spreadsheet.
The model for monthly temperatures is “here” and for annual temperatures is “here” (note the annual reconstruction is a little less accurate than the monthly reconstruction but still works).
I have set-up a photobucket site where anyone can review these charts and others that I have constructed.
So, we can now adjust temperatures for the natural variation in the climate caused by the ENSO and the AMO and this has provided a better insight into global warming. The method is not perfect, however, as the remaining error term is higher than one would want to see but it might be unavoidable in something as complicated as the climate.
I encourage everyone to try to improve on this method and/or find any errors. I expect this will have to be taken into account from now on in global warming research. It is a simple regression.
UPDATED: Zip files should download OK now.
SUPPLEMENTAL INFO NOTE: Bill has made the Excel spreadsheets with data and graphs used for this essay available to me, and for those interested in replication and further investigation, I’m making them available here on my office webserver as a single ZIP file
Downloads:
Annual Temp Anomaly Model 171K Zip file
Monthly Temp Anomaly Model 1.1M Zip file
Just click the download link above, save as zip file, then unzip to your local drive work folder.
Here is the AMO data which is updated monthly a few days after month end.
Here is the Nino 3.4 anomaly from Trenbeth from 1871 to 2007.
And here is Nino 3.4 data updated from 2007 on.
– Anthony
296 thoughts on “Adjusting Temperatures for the ENSO and the AMO”
Isn’t water vapor a greenhouse gas? Is there a chart of water vapor changes at different atmospheric levels during this same time span that might coincide with increased temperatures? And if CO2’s affect on water vapor were removed, would that leave us with a correlation that more closely matches water vapor changes than CO2 changes? One more question, what does a changing ocean cycle do to water vapor? Is there a lag? Is there a point at which the climate becomes more sensitive to water vapor? In other words, is there a tipping point for water vapor? I can surmise that when ocean cycles go the other way, water vapor would start to change as well and could stall climate changes or even reverse them.
“There appears to be some room to look at a solar influence but this would be quite small.”
I am no expert, but I suspect the oceans are being driven by the solar influence. It is becoming increasingly clear that the oceans are running the climate show on this planet.
Thanks. Anthony, I am having trouble with the two spreadsheets. I have tried to download them both in open office and excel and I get a message that they are corrupted
REPLY: Try now I’ve added a ZIP file. – Anthony
There is also a chicken/egg question of warmer water outgassing or at the least not absorbing as much CO2. When oceans warm they take in less CO2 and when soils warm, they produce more CO2 due to increased rates of biological decomposition of organic matter. A recent (in the past week) paper figures that soils produce 10x more CO2 than all human activities combined. While a few degrees doesn’t make much difference in tropical regions, it can make a HUGE difference in temperate regions where a degree or two in local temperature might mean the difference between “frozen” and active biological decomposition. Also, a warmer year will result in a longer period above freezing and just a couple of weeks of additional time is again more than 10x human caused emissions over those two weeks. So an extra frost-free week could be equal to 10 weeks of additional human emissions from natural sources.
So CO2 may still not be a cause and is still quite likely to be a result of warming.
Bill: I’ll have to delve deeper into your post tomorrow, but until then, here’s something to ponder. Why does a running total of the Trenberth NINO3.4 SST anomaly data create a curve that mimics the global temperature anomaly curve?
I can scale that running total with a coefficient from a Trenberth paper on ENSO and come up with a very reasonable correlation between the running total and global temperature? WHY?
For a approach, fitting the Pacific Decadal Oscillation and CO2 to temperature trends, see: Roy W. Spencer’s model, Global Warming as a Natural Response to Cloud Changes Associated with the Pacific Decadal Oscillation (PDO)
Would Bill Illis’ fit above improve by using a combination of PDO, ENSO and AMO with ln(CO2)? Or are PDO, ENSO and AMO sufficiently interrelated as to not be independent?
Can the effects of the Urban Heat Island effect be quantitatively separated from CO2 forcing on temperature trends?
Anthony, only Java -type URLs come up at the links. Can you provide non-Java URLs for the data? Thanks, Steve
REPLY: Hi Steve, I’m not sure what you are seeing. I have no java involved in any of this, like you I dislike it. I noted some previous CA comment where you mentioned something similar. I think whatever version of the Java engine you have installed on your PC may be intercepting links.
My advice, uninstall java from your machine, reboot, then reinstall it.
Anthony
IT would also be helpful if Bill added original sources for the data (as URLs to the ENSO version and AMO version as used, for example.)
Regarding water vapour Pamela, there isn’t really good data to show changes in water vapour over time. The only non-confirmed data that there is shows there has been a very slight decline in relative humidity.
The global warming models are based on relative humidity staying more-or-less constant as temperature fluxuates up and down. Some studies show this is the case while others show there is some variation that we don’t understand right now.
The water vapour question is the big remaining question in global warming and how big the impact will be.
To Steve
Here is the AMO data which is updated monthly a few days after month end.
Here is the Nino 3.4 anomaly from Trenbeth from 1871 to 2007.
And here is Nino 3.4 data updated from 2007 on.
For those having trouble downloading, I’ve updated the download with a single zip file containing Bill Illis suplemental files.
Check the last part of the post again for it.
This is very interesting, well thought out work on first blush. Since this is largely a statistical analysis, I would really like to see CA / Steve McIntyre take a hard look at it & render his opinion on the statistical validity (and comment here if possible). If it appears to hold up, I would encourage Bill to try to get it published, perhaps as a co-author with one or more names that could lend weight & credence to the publication.
I have taken a similar approach for seasonal forecasting of front range snowfall here in Colorado with pretty good success. Interestingly enough, I also found my best correlations with ENSO & AMO and poor correlation with solar activity & volcanic activity / optical thickness data. I think that not only do the oceans rule our long term climatic trends but also largely rule our seasonal trends.
Something to consider – we know the ocean has thermohaline circulation cycles of up to 1000+ years. If the ocean circulation has cycles of 1000 + years, could it also have heat content cycles up to 1000 + years & could that be a significant driver / component of even longer cycles of climate change we observe? Are there proxies out there that could assess this in a manor similar to what Bill just did for the last 130 years? – isotope data possibly?
I personally think that Bill is just scratching the surface of what this general multi-variate technique could bring to the table for climate data analysis.
To Bob Tisdale,
There is no logical/physical reason to include a running total for the Nino 3.4 anomaly which extends over years. I could be persuaded for a running total of a few months but one just needs to examine the up and down of temperatures in the 1997-98 El Nino, for example, to see there is no accumulating impact. The direct and continuous impact appears to work better and is more logical from a physical perspective in my mind.
Great detective work Bill. Now we await the pitbull (Gavin) to see if he takes a bite!
Good work Bill Illis.
One lingering issue I kept running into in my own (non-climate) empirical fitting work is trying to avoid adding parameters I’m not certain are needed. And at each step, trying the simplest possible influence from a factor before assuming a more complex relationship.
What that boils down to is:
What happens to your fit if you just regress on a monotonically increasing line instead of ln(co2)?
Is the fit substantially better or worse?
Because a fair number of papers split “the warming” into a slice due to humans (AGW), and a slice that isn’t necessarily. Being able to differentiate the two would be excellent.
Still can’t get spreadsheets with zip file. Lots of #VALUEs (using Excel 2007).
Only get this on the link – no way to download anything.
“ndex of
Up to higher level directory
Name Size Last Modified”
Anthony:
Have you read this:
On reading seems like NASA gobbly-gook and makes statements like this:
“With new observations, the scientists confirmed experimentally what existing climate models had anticipated theoretically” and
“Because the new precise observations agree with existing assessments of water vapor’s impact, researchers are more confident than ever in model predictions that Earth’s leading greenhouse gas will contribute to a temperature rise of a few degrees by the end of the century.” What’s a “few”?
What’s your take on this “News”?
Bill, I have some plots somewhere which suggest that ENSO and global temperature are correlated with about a 3 month lag and then correlated again, weakly, with about a 15 month lag.
My thinking was that the results reflected either an odd flaw in my approach or, if real, a secondary impact of ENSO on Indian Ocean SST.
I’ll see if I can find, or reconstruct, those plots during the upcoming US holiday.
Still having problems with zip file. Getting lots of #VALUE.
Also problems with
Can’t seem to download anything.
Anthony:
Have you read this:
Any comments?
To Alan S. Blue
The theory of global warming is based on a logarithmic relationship of CO2/GHGs to temperature impact.
A linear model works fine until you move far away from the current CO2 levels of 387 ppm. In fact, right now, CO2 levels are increasing at a slightly exponential rate (0.8% acceleration) per year and the warming trend would go ballistic exponential in no time if you didn’t use the log formula.
It doesn’t make much difference for short periods of time but what would Earth’s temp be when CO2 levels were 3,000 ppm 350 million years ago – 8 times the current average of 15C or about 116C – it was only about +5C.
To Steve Hempell.
Regarding the water vapour study by Dessler – I read the paper and the results are not exactly as indicated in the news releases. The study examined the change in water vapour from DJF 2007 to DJF of 2008 when the La Nina (and the AMO) more water vapour in the lower levels of the atmosphere, the study really found that there was a decline in overall relative humidity when global warming theory suggests it should stay more-or-less stable.
To be fair, the models do produce results which are similar to this as temperatures go up (but not when they go down as happened between 2007 and 2008.)
To davidsmith,
I think there is room for further optimization of this model, especially with the lags and trying other indices. I’ve seen your stuff before and would welcome any further thoughts.
To be honest, I built this model because I got tired of asking people to just try this or try that and then not seeing it done. I am just a layman and others need to pick this up and run with it now.
It makes me uncomfortable to look at the graph Hadley Plus Constant Versus Nino and AMO Model Only and see a hybrid of actual data and modeled data going back in time. Plotting anything that comes out of a computer on a chronologic baseline from the past is an inherently unsettling strategy, one that to my eyes appears to have been heavily influenced by AGWers’ love of GCMs.
The word “warming” on the graph, again, appears to have the ring of authority, as though there were only a single possible explanation for the divergence. That would appear to my eyes to be an argument, rather than a fact. Could the length of solar cycles have anything to do with temperatures? Could the intensity of solar cycles have anything to do with the rising temperatures? Could the PDO be at play here? Could, as someone else pointed asked, energy in the deep ocean that got there hundreds, or thousands, of years ago have raised atmospheric temperatures in the 20th century?
I’d be curious to know if the AMO has peaked on its latest cycle, and if so, what happens from here through the next 30 years? A repeat of of 1945-1975 – ala stagnant or a minor drop in temperatures – like the Western WA Professor’s paper as of recent?
temps.
2008—>..
.. …. …… .. 2038(?)
…… … .. …..
…
Okay, I see partions of two oceans discussed here… many more to go.
Steve Hempell and Pamela
Re water vapour questions, it seems to me that with this kind of analysis, it is irrelevant. Temp increases are logarithmically tied to CO2, and H20 is essentially logarithmically tied to temperature. So a log dependence on CO2 will take care of the H2O in this type of correlation. Only the exponent of the log changes, ie the constant in front of the log term becomes a proxy for all the other parameters that change with CO2. In essence, it is perfectly acceptable to use the CO2 level as the proxy for most of the other variables that are tied to it. This is a nice piece of work.
I wish I could remember who said that the climate is the continuation of the oceans by other means.
=======================================
The assumption that for time scales longer than el-nino and multidecadal oscillations the climate naturally is essentially static forms an essential, but unjustified, foundation of the AGW hypothesis. It is not possible to prove unambiguously that the warming from 1970 to 2000 was not part of some natural variation. Hence the defense of the undefensible concerning the hockey stick charade.
kim: Von Cloudswitz
To me, it really looks like the AMO/nino model diverges compared to observations since the 70ies, and that the difference keeps increasing with time.
Bill Ellis: You wrote, “one just needs to examine the up and down of temperatures in the 1997-98 El Nino, for example, to see there is no accumulating impact.”
Take a closer look. Note the step change in the pre-and post-1997 global temperature trends that should be attributable to the 97/98 El Nino in the:
RSS MSU Data:
GISS Data:
NCDC Data:
UAH MSU Data:
and the HADCRUT Data:
Or looking at data sets of smaller areas, notice the remarkable step changes in the Mediterranean Sea SST after the 97/98 El Nino:
and the Gulf of Mexico SST:
and the Atlantic Ocean SST:
and the western Pacific Ocean SST:
Arctic temperatures shifted, too:
Note the differences in response of the east and west Pacific Ocean SSTs (divided at 180deg longitude) to the 97/98 El Nino:
Note also the differences in the responses of the Indian Ocean to significant El Nino events:
versus La Nina events:
I discussed the above in the following posts:
Bill: I don’t have the background that would allow me to delve into the data further and pull out an accumulating impact of ENSO. But maybe someone else reading this thread does. The fact that a running total of the Trenberth NINO3.4 SST anomaly data mimics the global temperature anomaly curve hints that there is an accumulation.
Regards.
A good start in unravelling the oceanic effects on atmospheric temperatures.
Now we need to ascertain the net global effect at any particular time of ALL oceanic oscillations combined.
Sometimes they combine in the same phase to affect temperatures rapidly and at other times they offset one another.
Then tie them in with solar changes over several solar cycles and that should account for all observed temperature changes without having to involve CO2 at all.
See my various articles at :
Kim,
The oceans should be regarded as a continuation of the atmosphere as regards maintenance of global temperature and being so much more substantial the oceans are by far the greater part of the mechanism.
Bill Illis: Sorry about misspelling your last name. It’s early here. I was apparently more concerned that I had the right links.
Regards.
kim (23:30:28) : I wish I could remember who said…
“Climate is the continuation of the oceans by other means”
That’s one link, kim. It seems to be all over the web.
Bill Illis says,
“I am just a layman” I ask Bill, where can i get a degree in Layman ?
Bill Illis:
Thankyou for this cogent analysis. I have one comment on your method and its effect on your conclusion.
I understand your article to say your analytical method has the following steps.
1.
The effect on temperature of AMO and ENSO within the time series is calculated by simple regression (this is possible because AMO and ENSO exhibit several cycles within the temporal range of the data set).
2.
The temperature effect of AMO and ENSO is deleted from the time series to reveal a residual temperature trend in the time series.
3.
The residual trend is assumed to be an effect of changed atmospheric carbon dioxide concentration over the temporal range of the data set.
4.
The assumption in step 3 is used to calculate the climate sensitivity to changing atmospheric carbon dioxide concentration.
This may be correct, but the assumption in step 3 is the logical fallacy of ‘argument from ignorance’. The assumption amounts to, “The cause of the residual trend is not known so it must be changing atmospheric carbon dioxide concentration”. (If this ‘logical fallacy’ is not clear then consider, “The cause of crop failures is not known so it must be witches”.)
Of course, the residual trend may be a result of changing atmospheric carbon dioxide concentration.
However, the assumption in step 3 does not concur with the implicit assumption of steps 1 and 2 that natural cycles are affecting the temperature trend.).
There is no known cause of this apparent low frequency oscillation: some people suggest it could be solar influence, but it could be the chaotic climate system seeking its attractor(s), and it could be … . However, there is no known cause of the AMO and ENSO, either.
Therefore, the implicit assumption of your steps 1 and 2 suggests that the residual trend determined by your steps 1 and 2 could be recovery from the LIA that is similar to the recovery from the DACP to the MWP.
Indeed, since the method adopted the implicit assumption of your steps 1 and 2, consistency suggests that all the observed rise of global temperature in the twentieth century is recovery from the LIA that is similar to the recovery from the DACP to the MWP.
Hence, the calculated climate sensitivity to changing atmospheric carbon dioxide concentration obtained by your method should be assumed to be a maximum value until this possibility of recovery from the LIA is assessed.
I hope these thoughts are helpful.
Again, thankyou for your superb work that I trust will soon be published.
Richard
To Richard,
The steps you outlined are correct and there is a step 3 where there may be opportunities to find other variables/indices to explain some of the variation.
There is a fairly consistent trend going up however and the biggest explanation for that would be increasing GHGs.
But you’re right, other variables should be tested in this model.
I also am having problems opening the excel files. Excel tries to repair (unsucessfully) the annual model, and I get a “cannot be accessed” error on the monthly.
Excellent analysis. I look forward to delving deeper over the holiday.
Arnd Bernaerts, yes, it was, and thank you, Roger.
============================== represents..
This is why the CO2 notch is virtually identical in the two spectra; the CO2 band was virtually saturated at the 325ppmv concentration level, so even nine times more CO2 has almost no appreciable effect.
Norm K.
Richard C
You make good points. However, for your ‘logical fallacy’ example of the cause of crop failures to be analgous, there would need to be a corresponding increase in the population of witches.
And while we wouldn’t expect crop failures to “force” an increase of witches,
does the same hold true as to whether >CO2 is a cause or effect? After reading Jim Hanson’s paper where he tries to explain away the 700 or so yr. lag of CO2 following temperature changes in the Vostok data, I remain unconvinced that CO2 is, in the words of someone who’s name I can’t recall, “driving the bus or just sitting in the back”.
Bill Illis: Sometimes a change of perspective is needed. Your analysis also doesn’t account for the disparity between the magnitude and frequency of El Nino events and those of La Nina events. This can readily be seen by smoothing NINO3.4 data. I used a 7-year filter for the following graph. (Actually an 85-month filter for the monthly NINO3.4 and Global Temperature anomaly data.)
Other notes about the graph: Since the source of your data (Trenberth and Stepaniak) remark in the accompanying paper that the NINO3.4 data is questionable prior to the opening of the Panama Canal, 1914, I deleted the data before 1915. I also prepared this graph for an upcoming post that I haven’t gotten around to writing up, which is why it ends in 2005.
Note that the NINO3.4 data is predominantly positive from 1918 to 1944 and from 1977 to 2005 (periods when global temperatures rose) and that the NINO3.4 data is predominantly negative from 1943 to 1959 and from 1970 to 1977 (periods when global temperatures for the most part declined). There are some exceptions, but, in whole, it holds true. During the positive NINO3.4 period of 1959 to 1970, global temperatures started to rise but were suppressed by volcanic aerosols.
Regards.
Norm,
Thank you for that great post…I’ve read it 3 times, and will read it several more, I’m sure. Each time I understand a little bit more of what you’re saying.
Bill Illis,
What great work. I hope I get to see it put to good use, and provide yet another set of points that get added to the debate which we so desperately need.
JimB
John W:
I hope this posting will clarify what I meant by my comment, and I apologise if my use of an imperfect illustration caused confusion.
To begin, I want it to be very clear that I think Illis has provided a good, useful and important analysis which warrants publication.
My witches illustration was intended to aid understanding, and it was not intended as direct analogy. However, your comment brings attention to quality of data. (The records show that the number of detected witches did increase at the time when witchfinders were appointed. But it is not clear how many witches were detected and how many witchfinders existed.)
I stress that I think Illis has provided a superb analysis, but the quality of any analysed data should always be questioned because GIGO applies to all analyses.
Also, your comment concerning “driving the bus” illustrates the importance of ascribed causality. Which was causal; did increase to the number of detected witches induce increase to the number of witchfinders, or did increase to the number of witchfinders induce increase to the number of detected witches?
In the context of the analysis Illis provides, ascribed causality has great importance. Which is causal; did increase to atmospheric carbon dioxide concentration cause increase to the residual temperature trend, or did rising temperature cause increase to the atmospheric carbon dioxide concentration, or were the temperature and carbon dioxide changes caused by some other effect(s)?
My comment was intended to explain the importance of ascribed causality on the analysis Illis provides.
I am an extreme sceptic on matters of man-made global climate change. I do not know the cause(s) of the recent rise in atmospheric carbon dioxide concentration, and I do not know what – if any – effect that rise is having on global climate. But I want to find out.
The absence of any empirical evidence for anthropogenic (i.e. man-made) global warming (AGW) leads advocates of AGW to rely on the logical fallacy of ‘argument from ignorance’ and outputs of computer models.
History is replete with examples of politicians being guided by advisors who used the ‘argument from ignorance’ fallacy to justify their advice. The advisors have always presented an appealing case based on accepted theory, and they have always ignored – or rejected – alternative possible explanations for the effects which they have asserted as justification for their advice.
In ancient times such advisors said, “We do not know what causes lightning to strike so it must be the actions of Gods and people should make sacrifices to appease those Gods.” And, as my illustration said, in the Middle Ages such advisors said, “We do not know what causes crops to fail so it must be witches and we must eliminate the witches.”
Now, advocates of AGW say, “We do not know what causes global climate change so it must be emissions from human activity and we must eliminate those emissions.” Of course, they phrase it slightly differently: they say that they cannot match historical climate change with known climate mechanisms unless an anthropogenic effect is included. But this “anthropogenic effect” is an assumption with no more empirical evidence to support its existence than the empirical evidence for ancient Gods and witches.
My comments tried to say that the final part of Illis’s analysis adopts the same ‘logical fallacy’ that is used by AGW advocates. It is an assumption – not a fact – that increased atmospheric carbon dioxide is the cause of the residual temperature trend his analysis reveals. Indeed, his demonstration that natural oscillations cause some of the temperature trend adds credence to the possibility that other observed natural oscillations may also be significantly contributing to the trend.
But, as I also said, increased atmospheric carbon dioxide may be the cause of the residual temperature trend Illis’s analysis reveals.
I hope what I intended to say is now clear.
Richard
davidsmith1 (20:37:23) :
Will this or this help you track it down?
two rapid comments on the reply just above:
“the computer models have never yielded a single result that matches observations”
This looks to me like a very strong statement. Too bad it is completely false… in particular when you consider that many models build their internal parameter sets by reproducing observations over the last century.
“There is only a single vibration mode of CO2 that resonates within the thermal spectrum radiated by the Earth. This bend vibration resonates with a band of energy centred on a wavelength of 14.77microns (wavenumber 677cm-1)”
again, a very convincing argument, except that it is obviously incorrect:
There is more than just one asymmetric vibration mode in CO2.
Bill
This analysis is very promising. It is similar to, but extends, the work of Douglass and Christy (Limits on CO2 Forcing From Recent Temperature Data of Earth; Energy and Environment, to be published), which considers only UAH data (1979+). Both your analysis and Douglass/Christy attempt to remove “unforced” natural effects from the climate signal and ascribe the residual to “the real global warming signal.” However, solar variability remains an untreated “forced” natural effect. You may wish to consider incorporating the work of N. Scafetta and B. West (J. Geophysical Research, Vol 112 D24S03, Nov. 2007, and their previous papers cited therein), who treat solar variability phenomenologically via a simple thermodynamic model using various reconstructions of Total Solar Irradiance and the historical temperature over longer time periods. They find a major fraction, but not nearly all, of 20th century warming ascribable to solar variability. It should be possible to incorporate their approach in your regressions.
The point is that no one that I know of has included both “unforced” terrestrial cycles and solar variability in an integrated analysis.
Now that’s a good post. Keep ’em coming!
0.4°C more warming with a doubling of CO2 – hardly catastrophic. Certainly does not warrant expensive mitigation programs demanded by many.
“…adjustments of old temperature records by GISS and the Hadley Centre and others have artificially increased the temperature trend… ”
Now why doesn’t that surprise me? I’m curious how the other side will respond to this.
To Bob Tisdale,
I tried just plugging a higher coefficient for the Nino 3.4 region because it does seem like there are periods when its impact is greater than the reconstruction allows.
However, there are many other time periods when the increased coefficient just puts the reconstruction far off the actual temperature trend. These time periods then extend out over many years, decades even, versus the very short periods when the reconstruction is off by a small amount.
So, I just decided to trust the regression and go with it. I wanted this to just be a straight-up, simple model with no plugging or smoothing in any event because there is a danger in playing around with the data too much, as Mann’s hockey sticks show.
You are the expert on analyzing ocean temps and circulation of course and I have been to your site quite often before.
In terms of the Trenberth data being questionable in earlier periods, I just decided to just go with what’s available. If we are to build a model of temperatures, we have to use what is available.
What I would like to see, however, is whether the raw Trenberth data would provide a better fit, as this index is a five month smoothing. It is probably too variable, hence the need for the smoothing, but what I’ve seen in building this reconstruction is that information is lost as the data is smoothed or averaged. Just look at the spikes in the temperature data; the climate can move very fast.
I just returned from a climate change conference in Amsterdam, organised by the Royal Geological and Mining Society of the Netherlands, on the 20th of November.
Two speakers: Prof Jurg Beer (physicist) of Zurich and Prof Kees de Jager of Utrecht University and founder and first director of the Utrecht Space Research Laboratory showed some remarkable correlations between temperature variations and solar activity over the last 400 years or so. They concluded that no anthropogenic signature could be detected. They purely concentrated on the statistical significance of the observations. Yet they realized that the changes in solar forcing are not enough to cause the temperature fluctuations, but they didn’t want to speculate about the possible causes. Obviously some other mechanism must explain the amplified effect on temperature. They knew about Svensmark et al but didn’t want to comment on it as they felt not qualified to do so, but they were eagerly awaiting the outcome of the forthcoming CERN experiments.
If ENSO and AMO are also correlating with the past temperature fluctuations, I can only conclude that the sun ( via clouds? ) is responsible for the frequency and strength of the various oceanic oscillations.
De Jager’s paper has just appeared in the September issue (Volume 87, no 3) of the Netherlands Journal of Geosciences.
In thinking about it overnight, I came to the same conclusion that some of the other posters did. What you have done is residualized the long wavelength trend not directly related to ENSO/AMO out of the signal. Over the time scale of investigation, a linear trend or logarithmic trend could be fit – I am guessing with similar r^2’s. I would agree that temps & CO2 have a logarithmic trend, but there isn’t enough spread in the data to say the residual has a logarithmic trend. So, with that being said, if you make the ASSUMPTION that the residual trend is purely due to CO2, the 1.85C per doubling of CO2 is a MAXIMUM-effect end member – assuming any other forcing mechanisms at play are positive & not negative.
Question for the group to consider : What other long wavelength forcings are out there that could drive this residual (ie positive forcings, with CO2 being an even smaller positive forcing) & what other possible long wavelength negative forcings are out there that would make this an underestimate of CO2 forcing?
One more IMPORTANT comment for the group: This little exercise here is a good example of collaborative science – not unlike the concept behind Linux. As a community, there should be some consideration of a way to formalize this concept (not that I have time to do this, but someone reading might). I think the overriding concern within the “skeptics” community is that we want the science done right – science as science, with everything considered, not as dogmatic political science. I would bet that if a web-based mechanism was set up for collaborative research, scientifically sound progress could be made on many different aspects of climate change by the skeptic community. It could have different threads investigating specific questions, a compilation of all important publicly available datasets, a compilation of pertinent publications, as well as all research done by the group to date for others to build upon. Bill’s paper above would be a good example of a starting point for a thread of research. Questions brought up by posters could be investigated further, and the model / hypothesis refined with those answers. New questions, such as the causation behind the residual, could start as a new thread of research. As long as no one lets their egos get in the way (looking for glory) & the goal is simply getting the right answer, it could be a powerful tool.
Anthony, still having problems opening the excel sheets, even in zip format. Says they are corrupted.
REPLY: I tried a download from another machine that I didn’t write the post from and had the same result. I’ll see if I can figure out this nuance. – Anthony
Maybe we just have warm or cold air blowing on our temp gauges. What patterns does the jet stream take on with oceanic oscillations? At least in the upper part of the US we either freeze or save on fuel when the jet stream dips or not. The winter of 07/08 experienced plenty of cold temps because the jet stream looped down into our territory many times. The more times in a season it stays up above the 45 parallel the more fuel we save. Is there an oscillation to the predominant jet stream pattern that coincides with oceanic cycles? And what do we know about the jet stream in relation with solar cycles? Plus, wouldn’t the jet stream move water vapor along to somewhere else?
Chris S. (06:44:32)
Whose reconstruction of solar activity did your speakers use? Where’s Leif to comment on this latest?
========================================
Uh, that’s (06:32:44) for Chris S.
===========================?
Damn it, everybody! I am in the middle of writing several articles, a policy paper and a book, and I can’t make time for this post……but will have to! Thank you Bill and all other contributors – it is one of the most informative and challenging posts yet – and very hard for me to digest because I am really bad at stats.
Others have picked up the issue of long term trends and recovery from the Little Ice Age, PDO cycles, and even longer ones – and not assuming the divergence is due to CO2 until these are factored in – and the post on CO2 saturation is great stuff – I need to revisit that… and so little time!
One request – of the IPCC’s 0.6C, how much are you knocking off? Bill – you said it was a large amount, but I can’t see it on the graphs. What simple proportion do you ascribe to carbon dioxide?
Bill Illis, one last thing. With respect to your statement and graph about the NINO3.4 anomaly data trend or lack thereof, there is a significant difference between the Trenberth NINO3.4 SST anomaly data (The ultimate source is HADSST) and the Smith and Reynolds (ERSST.v2) version. Here’s a comparative graph of the monthly data:
Here’s a graph of the difference with a linear trend line:
That trend is substantial and the dip in the early 20th century is consistent with the ERSST.v2 version of the Pacific Ocean. I don’t think the Trenberth (HADSST) NINO3.4 data has been detrended, though it looks like it might have been, because the difference also shows up in the annual NINO3.4 SST data of the two data sets:
And the difference in the annual SST data:
I discussed it here:
Who knows, maybe the guys at the Hadley Centre didn’t like the pesky SST dip in the early 20th century, so they smoothed it out. Stranger things have happened.
Regards
Chris Shoeneveld: “Yet they realized that the changes in solar forcing are not enough to cause the temperature fluctuations, but they didn’t want to speculate about the possible causes. Obviously some other mechanism must explain the amplified effect on temperature.”
I’m not convinced there even has been an “amplified effect on temperature” any more than normal.
Thank you, Bill Illis, for this very nice post.
May I, nevertheless, ask the following questions:
1) the weight of ENSO is considerably less than that of AMO (0.06 or +/- 0.2 Celsius variation vs 0.5-0.7 or +/- 0.3-0.4 C variation).
ENSO represents tropical pacific, while AMO represents northern atlantic – not including tropical atlantic. The total area of pacific ocean is approx. 180 million km2, thereof tropical/subtropical area approx. 90 million km2. The atlantic ocean has 80 million km2, thereof the AMO area is approx. 30 million km2. The ratio is 3:1 while the weight factors ENSO vs AMO have roughly 1:2.
Why do you not use pacific decadal oscillation instead of AMO? PDO represents the total pacific, with, maybe, too little weight for the tropical pacific, which then would be compensated by ENSO.
2) The logarithmic dependence on CO2 concentration. Could it be replaced by just a linear dependence on time, without spoiling the agreement? After all, it can be claimed according to IPCC that 1/3 of the global warming arises from solar influence, which to a very first approximation has been linear in time (probably no longer).
3) The Hadcrut global data CO2 logarithmic prefactor of 2.73 yields 0.1 C per decade warming (since 1958), while the RSS global data give a CO2 logarithmic prefactor of 1 (or 0.046 C/decade, since 1978). The difference to some extent seems to arise from the heavier weight of the last decade in the RSS analysis, but mainly may arise from inadequate land surface data (see UHI etc discussion a few days ago). Do you share this view?
A logarithmic ansatz for the CO2 dependence ignores any negative feedback due to enhanced latent heat transport at higher temperatures (the Lindzen argument).
Bill: As soon as the glitch in the download is fixed, I’ll try to replace the Trenberth data with ERSST.v2 and see what happens with your model. I’ll post the results.
BTW, how’d you answer my comment about the differences in the data sets before I posted it?
Pamela Gray (07:14:29) :
“Scientists need to feed their families too. Who will fund such an effort?” Anthony puts this blog together, which furthers the science, and doesn’t get paid. It is not unrealistic to think that it could work. We clearly have a large group of technically competent people out here with the skills & the knowledge to address many of these problems. Just as with Linux, one person didn’t write the whole OS, the community did it together. Same could apply here – those who have the skills to contribute can contribute as they are able. As a side benefit, no one could say it is an effort funded by an agenda – no grant money funding the AGW’s, no “big oil” money (as the AGW’s like to think the “skeptics” are funded by) – just people motivated by finding the truth.
Bill Illis,
Very nice article. On my first reading I am impressed that the climate signal at least occurs in the visually correct time frame for CO2.
Please be careful not to over-conclude that because you don’t have other explanations for the temp rise it must be CO2. IMHO you should strike those comments from the article or make strong caveats. Your graph makes a convincing enough argument for it by itself, I think.
Also, although you were careful to point out data quality issues in your comments I need to mention that the corrections in the GISS data are nearly as large as your signal. While the corrections may (or may not) be entirely reasonable, errors in the data will have a large effect on the total rate of warming in your results. The same is true for the ENSO and AMO measurements.
I will spend some time over the next two days reviewing the rest of your work. It really is an interesting calculation. Tamino did something similar but didn’t publish any of his calculation methods so it was impossible to review. Also, he is stuck on the idea that climate change was linear for a hundred years and that makes it hard to take him seriously.
Great stuff though.
Have you, or anyone, attempted to apply a Fourier transform to either the raw data or the residual error data?
“Eyeballing” residual frequencies in a graph like that is fraught with observation errors. If there are regularities in the data, a Fourier transform will exhibit spikes at various frequencies, and you can then go looking for things that happen on those time scales.
Regards,
Ric
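For anyone who wants to try Ric’s suggestion, a minimal numpy sketch of an amplitude spectrum of the residuals might look like the following; the residual series here is a random stand-in, since the actual model residuals live in the spreadsheets.

import numpy as np

# Stand-in for the monthly residuals (observed minus reconstruction);
# replace with the actual residual series from the spreadsheet.
rng = np.random.default_rng(0)
resid = rng.normal(size=1200)

resid = resid - resid.mean()                       # remove the mean so bin 0 doesn't dominate
spectrum = np.abs(np.fft.rfft(resid))              # amplitude spectrum
freqs = np.fft.rfftfreq(resid.size, d=1.0 / 12.0)  # cycles per year for monthly data

# Periods (in years) of the five strongest peaks; real periodicities
# show up as spikes well above the surrounding noise floor.
top = np.argsort(spectrum[1:])[-5:] + 1
for k in sorted(top):
    print(f"period ~ {1.0 / freqs[k]:6.1f} yr, amplitude {spectrum[k]:.1f}")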
As I have stated before, if it is known what forces cause change, then change can be predicted. If the forces which cause change are not known, then projecting future changes from historical data is nothing more than extrapolation, and extrapolation is sure to be wrong, and sometimes far wrong.
It is clear from historical data that the equation (increased CO2 levels) = (global warming) is simply not true. What these people doing the extrapolation of historical data keep saying is, “If it weren’t for all of these factors which happen quite often and naturally, it would have gotten warmer.” But these events, the oscillation of the Atlantic and Pacific Oceans, volcanoes erupting, sunspots changing, varying amounts of water vapor in the atmosphere and the like occur frequently and are quite natural events.
It is much like the old sort of joke, “If the dog wouldn’t have stopped to take a crap, the dog would have caught the rabbit.” But if the dog always stops to take a crap, the dog will never catch the rabbit.
We humans simply can’t affect the volcanic eruptions, nor the oscillations of the oceans, nor the appearance (or not) of sunspots, nor the amount of water vapor in the atmosphere. Thus, we humans can’t have any sort of control over the “climate”. The climate has never been under any sort of human control, and isn’t going to be under control of humans in the future.
All this “smoothing” of historical temperature data is simply ignoring what actually happened in history. All this “adjusting” of historical temperature data is nothing more than changing actual data to fit a hypothesis which clearly has failed the test of verification by actual observation.
It is long since time to toss out this hypothesis which has clearly failed, stop using extrapolation of historical temperature data (which itself has been adjusted, that is, finagled to fit a false hypothesis), stop using this pseudo science and begin using the scientific method again. If one doesn’t know the actual causes, and the proportion of effect of each cause, then there are no such things as “trends”. These “trends” exist only in the minds of imaginative people.
Pamela Gray wrote: “Scientists need to feed their families too. Who will fund such an effort?”
Of course the funding has to be from “proper” sources, otherwise the results are suspect for some unknown reason.
Pamala, re # 26,
And I thought they were all being funded by the big oil companies!
Can that be an error?
Willi
Kim,
De Jager considered both the equatorial and the polar activities. The latter is, apparently, often neglected.
He referred to the “smoothed maximum sunspot numbers; a proxy for the maximum toroidal field strength over the centuries” and the “smoothed values of the geomagnetic aa index at sunspot minimum; a proxy for the maximum poloidal field strength” (Duhai & De Jager, 2008). Since 1000 AD they named the following minima: Oort, Wolf, Sporer, Maunder and the Dalton.
By the way, they predict the next solar cycle, #24 to have a maximum strength of 68 +/-17 sunspots to be reached in 2014.
Kim,
Sorry, it is Duhau & de Jager
Title:
The Solar Dynamo and Its Phase Transitions during the Last Millennium
in: Solar Physics, Volume 250, Issue 1, pp.1-15.
Duhau, S.; de Jager, C.
Equally important is: what temperature proxy data did they use?
For the 400 years tropospheric temperature oscillations they used Moberg et al. 2005. Nature, 433: 613-617. Obviously, they didn’t use the hockey stick.
To the posters who are suggesting trying other series etc., that is why I have done this. Just to show that it can be done. Like I wrote, it could certainly be improved on.
When we get the spreadsheets up and running, they are set-up so that one could try just about any other variation. All the charts etc. are in there as well.
I really did this so we could actually start adjusting for these things rather than just noting “if you adjust for …”
That and it was clear to me that the increased temperature trend of the 1980s, 1990s and 2000s was in part just a reflection of the ENSO and AMO and was not global warming. realclimate even has a GISS temp chart up right now showing temps going straight up like it will reach the moon. This analysis method says the GISS trend since 1979 is only 0.058C per decade, far, far less than the 0.2C per decade it is projected to be.
The coefficient for the Nino 3.4 region at 0.058 means it is capable of explaining changes in temps of as much as +/- 0.2C.
Is it 0.058 or a typo for 0.58?
Great post, Bill, and some great comments, too. I agree with some others that a natural warming since the LIA could explain the gradual increase in temperatures; it doesn’t have to be CO2-related. We know there are some very long-term cycles at play. The current lull in solar cycle 24 will probably help us understand the effects of the sun better. I’m buying more long-johns.
Some of those outside factors might include other ocean-atmospheric cycles. From 1976 – 2001 not only the PDO and AMO, but also the NAO, IPO (which overlap), and the AO and AAO went from cool phase to warm. Could that possibly fill in the gap ascribed to CO2?
Then there are McKitrick, Michaels, LaDochy, etc., who claim the recent historical trend has been exaggerated. I would sure like to know why the raw data is adjusted upwards when what common sense I can bring to bear tells me it should be adjusted downwards.
And, by definition, the sun is the primary power source. What is at issue is whether CHANGES in the sun might be affecting ocean, atmosphere, etc. My current take on that is that the small stuff may not matter, but the grand minimums probably do matter – a lot. And if they don’t, then some other natural force (quite apart from Milankovic cycles) would be able to create “little” minimums and optimums.
Nice job Bill but your treatment of the logs needs attention.
Physically the argument of the ln() should be non-dimensional i.e. the expression should have the form:
∆T=C*ln([CO2]/[CO2]o)
which can be expanded to: C*ln([CO2])-C*ln([CO2]o)
so rather than treat the constant term as a free variable in your fit it should be a constant with [CO2]o=285.
In this case at the start when [CO2]=285, ln(1)=0 therefore ∆T=0
In your case the two curve fits give ~306 and ~326 and you can see this by looking at the graph and seeing where the two lines cross 0. I would suggest that you try the fit with this model instead.
Secondly, you attach significance to the ‘intercept’ of the graphs; this is in error mathematically, since the ln(x) function approaches -∞ asymptotically as x->0.
Physically this is in error because at small values of [CO2] the dependence becomes linear.
It’s interesting to see that a simple lumped parameter model using basically the variation of the two ocean basin SST anomalies (detrended) and a greenhouse term gives such a good agreement.
I hope WordPress can handle the math symbols, apologies if it can’t.
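As an illustration of the constrained fit Phil is proposing, here is a minimal least-squares sketch in Python of ∆T = C*ln([CO2]/285). The CO2 and anomaly numbers below are made-up placeholders, not the series used in the post.

import numpy as np

# Hypothetical annual values; substitute the actual CO2 (ppm) and
# temperature anomaly (C) series used in the regression.
co2 = np.array([315.0, 325.0, 338.0, 354.0, 369.0, 385.0])
anom = np.array([0.00, 0.05, 0.15, 0.30, 0.42, 0.50])

# One-parameter fit of dT = C * ln(CO2 / 285); the intercept is pinned
# so that dT = 0 at the pre-industrial reference of 285 ppm.
x = np.log(co2 / 285.0)
C = np.linalg.lstsq(x[:, None], anom, rcond=None)[0][0]

print(f"C = {C:.2f}, i.e. {C * np.log(2):.2f} C per doubling of CO2")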
In 2005, I did similar research, but using yearly data and including a wide variety of parameters. The results were published on , where one can also find an early model-to-play. Early 2007, I updated the data and improved the model using statistical tools. The methodology and results were published on my website at .
Though these are just statistical approximations, they do provide insight into the factors influencing the temperature data, and into the weaknesses of the professional climate models. I, for one, learned that the influence of the oceans was much bigger than expected, and because these variations (AMO, …) existed long before any anthropogenic “pollution”, the current global warming, to me, seems much more due to these factors than to e.g. manmade CO2.
To Phil,.
The essay says there is no GW trend in the AMO, yet most other studies have found a
positive trend of around 0.5C over 120 years. The statement seems to be based on a graphing of this dataset …
But if we look at the
NOAA description of the dataset it tells us that the index is derived by detrending the SST data. So is the essay bringing us the startling conclusion that a detrended dataset contains no trend?
If the detrended AMO dataset was used in the regression analysis then this will tend to alias any non-linear forcings, so any conclusions based on the residuals from that regression may be simple artifacts of the detrending.
It does not seem legitimate to simply assign any residual to CO2 warming and not to any other factors. Also, as alluded to in the text but not the analysis, the GHG forcing does not act instantaneously or even on an annual timescale; there is a lag in the climate response, which means that there is estimated to be around
0.6C of warming ‘in the pipeline’, and any extrapolation needs to include this, not to mention the additional forcing from non-GHG feedbacks, which tend to be exponential …
For a peer-reviewed analysis along the same lines see
One interesting thing about Jan Janssens’ analysis compared to this one is that this one only uses 3 variables to create the fit, which I think is pretty much as good as Jan’s. Jan … do you agree? Jan, for example, modelled volcanoes directly, whereas this analysis suggests AMO/ENSO is directly impacted by volcanoes, so “we don’t need it”.
If this analysis is right, CO2 is overplayed as a climate influence, and I for one agree.
We are left with the open question of what mechanism drives AMO and ENSO. Well, perhaps volcanoes are part of the answer, but CO2 driving global warming isn’t. I haven’t found anything in the literature which suggests a plausible cause of AMO/ENSO variation; it’s a natural cycle, i.e. we don’t know.
Anybody seen a plausible explanation?
This and Jan’s model do make predictions on temperature, but we will have to wait a long time to test them.
However, Bill, you could do a hindcast by, say, using data up to 1978 and seeing how well it forecasts the last 30 years to 2008 (30 years being everybody’s favourite minimum climate interval).
I notice you say the correlations look different since ’79 (RSS keeping GISS/HADCRUT honest), so perhaps fit 1900 to 1980 and see how it predicts 1870 to 1900 and the last 30 years?
Bill Illis, a very interesting piece of work. However, what is missing to my eye is a comparison of your use of CO2 to represent the trend, and just using a straight line to represent the trend.
To make the case that CO2 is involved, you need to show that the fit using CO2 plus ENSO 3.4 plus AMO is significantly better than the corresponding fit using a linear trend plus ENSO 3.4 plus AMO.
My best to you,
w.
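A minimal sketch of the comparison Willis is asking for, done here on synthetic data purely to show the mechanics (the variable names and values are invented, not the actual series): fit the temperature anomaly once on ENSO + AMO + ln(CO2) and once on ENSO + AMO + a linear time trend, and compare the R^2 of the two multiple regressions.

import numpy as np

def r_squared(X, y):
    # Ordinary least squares of y on X (with intercept); returns R^2.
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

# Synthetic stand-ins for the annual series used in the post.
n = 120
rng = np.random.default_rng(1)
years = np.arange(1880, 1880 + n)
nino = rng.normal(0.0, 0.8, n)
amo = 0.2 * np.sin(np.linspace(0.0, 4.0 * np.pi, n))
co2 = 285.0 + 0.0075 * (years - 1880) ** 2   # non-exponential growth, so ln(CO2) is not a straight line in time
temp = 0.06 * nino + 0.6 * amo + 2.0 * np.log(co2 / 285.0) + rng.normal(0.0, 0.1, n)

r2_log = r_squared(np.column_stack([nino, amo, np.log(co2)]), temp)
r2_lin = r_squared(np.column_stack([nino, amo, years]), temp)
print(f"R^2 with ln(CO2): {r2_log:.3f}   R^2 with linear trend: {r2_lin:.3f}")

On the real data the two versions may be hard to tell apart, because historical CO2 growth has been close enough to exponential that ln(CO2) is nearly linear in time; even that result would be worth reporting.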
Is there a place to read about the current assumptions about the oceans impact on climate? How does it relate to the above article? How are underwater volcanos and rifts factored in or are the effects too small to matter?
I couldn’t find anything at realClimate but maybe I didn’t know where to look.
There’s a lot of interesting stuff here. I am glad that Norm Kalmanovitch dropped in with some information on CO2 IR activity.
One thing I see constantly in papers, without anybody ever justifying the mechanism, is the claim that the warming due to CO2 is a logarithmic function. This then leads to talk of a “climate sensitivity” parameter, being the mean global temperature rise due to a doubling of CO2.
Now I don’t doubt that you can take some sets of data and curve fit them to a logarithmic curve. Everybody knows you can hide all kinds of pestilence by simply plotting data on a log-log plot. The ability to curve fit data, and to calculate correlation coefficients between sets of data, doesn’t prove that there is any cause and effect relationship whatsoever.
You wouldn’t believe the total mayhem that scientists have wreaked by simply messing around with numbers, in the belief that you couldn’t possibly closely match real data to the results of just messing around with numbers.
Well if you believe that, you need to review the history of the “Fine Structure Constant”, which has the value e^2 / 2 h c e0 where e is the electron charge, h is Planck’s constant, c is the velocity of light, and e0 (epsilon zero) is the permittivity of free space, and is approximately 1/137. The 1/137 form is intimately linked to that sordid history. The first chapter of the great fine structure constant scandal involved Sir Arthur Eddington, who once proved that alpha (TFSC) was EXACTLY 1/136; but when nature didn’t comply with his thesis, and the measured value became much closer to 1/137, the good Professor Eddington thereupon proved that alpha was indeed EXACTLY 1/137. Well it isn’t, it’s about 1/137.0359895 and has been measured so accurately that it was used as a method for measuring the velocity of light (which IS now specified as an EXACT number (2.99792458E8 m/s)).
Dear deluded Professor Eddington, became known as Professor Adding one !
Well that was only the first episode of the FSC scandal. In the mid 60s someone derived the 1/FSC number as the fourth root of (pi to some low integer power times the product of about four other low integer numbers raised to low integer powers). I’ll let you university types search the literature for the paper. It computed 1/FSC to within 65% of the standard deviation of the very best experimentally measured value of 1/FSC, which is 8 significant digits. And the paper included ZERO input from the physical real world universe; it was a purely mathematical calculation. But of course it had to be correct, because everybody knows you can’t get the right answer just by mucking around with numbers. The lack of observational data input fazed nobody in the science community, who embraced this nonsense; well, for about a month. That’s how long it took some computer geek to do a search for all numbers that were of the same form: fourth root of the products of low integers to low integer powers and pi to a low integer power.
The geek turned up about a dozen numbers that were equal to 1/FSC within the standard deviation of the best experimental measurements, and one of those numbers was actually within about 30% of the standard deviation; twice as accurate as the original paper. A more sophisticated mathematician developed a multidimensional sphere thesis where the radius of the sphere was the 1/FSC number and a thin shell that was +/- one standard deviation from that radius contained a number of lattice points that were solutions to the set of integers in the puzzle. So he computed the complete set of answers that fit the prescription; the result of doing nothing more than mucking around with numbers that was accepted as correct because it so accurately fitted the observed data.
So watch out what you go for just because some fancy manipulation fits your data, particularly with noisy data that can hide real errors in the predictions.
One can model the optical transmission of absorptive of materials as a logarithmic function. If a certain thickness transmits 10% of a given spectrum, twice the thickness will transmit 1%, and so on; BUT such materials absorb the radiation and convert it entirely to thermal energy.
Not so with water vapor or CO2 or any other GHG. Some absorption processes may convert some of the energy to heat energy; but mostly the absorbed IR photon is simply re-emitted, perhaps with a frequency shift due to Doppler effects, or even Heisenberg uncertainty. Subsequent re-absorption by other GHG molecules may face totally different results due to temperature and pressure changes in between successive absorption/re-emission events. The likelihood that such processes follow any simple logarithmic function is rather remote, and the possibility that the global mean surface temperature change due to such processes also follows a logarithmic function, even more so.
I know all the books and papers say it’s logarithmic; how many of them derive the specific logarithmic function based on the molecular spectroscopy physics ?
How (if at all) does this dovetail with Spencer’s hypothesis regarding the PDO?
DaveE
John Philip says … “the GHG forcing does not act instantaneously or even on an annual timescale -”
What is the physical/physics basis for saying the GHG forcing does not act instantaneously, or certainly within a year? We are talking about photons of light here. Where is the energy going?
and “… there is a lag in the climate response which means that there is estimated to be around 0.6C warming in the pipeline”
I noted the theory is now that the deep oceans are absorbing some of the increase and it might then take us longer to reach the doubling temperature. How long then? Because I think global warming researchers have a duty to tell us that now. Does the temperature reach the doubling level 35 years, 75 years or 100s of years after CO2 reaches the doubling plateau?
The points about the construction of the indices is well-taken. Where can we find the raw data before it is detrended?
Bill Illis (10:46:40):
No the formula is: ∆T=C*ln([CO2]/[CO2]o)= C*ln([CO2])-C*ln([CO2]o)
So if you fit a function of the form ∆T=C*ln([CO2])-B
B=C*ln([CO2]o) in your one case C=2.73 and B=15.8
so in your fit ln([CO2]o)=B/C=5.79, therefore [CO2]o=326 ppm. (Which if you look at your zoom-in graph is exactly where the red line crosses zero)
My suggestion is that you should do the regression on ∆T=C*ln([CO2]/285)
I expect that would give you a slightly lower C with the line crossing zero at 285 ppm.
REPLY: Unfortunately I can’t install LaTeX for symbol translation here on this blog, but if you want to display the formula, you could spell it out with appropriate symbols, do a screen cap, and post it up to a picture website like flickr etc and link to it here. Just trying to help. – Anthony
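For readers following the arithmetic, a two-line check of the implied reference concentration, assuming the quoted coefficients C = 2.73 and B = 15.8 from the unconstrained fit, might be:

import math

C, B = 2.73, 15.8                      # coefficients quoted above for the HadCRUT fit
co2_ref = math.exp(B / C)              # dT = 0 where ln([CO2]o) = B / C
print(f"[CO2]o = {co2_ref:.0f} ppm")   # about 326 ppm, where the fitted line crosses zero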
Things like that generally get started by someone who is curious and wanting to see if they can make something themselves. That was how Linux got started, when Linus Torvalds sat down to play with his 386 and decided out of simple curiosity to see if he could make a unix-like OS. Others got interested and began adding pieces, so initially it was sort of a “stone soup” effort.
But then things changed and in a very important way. To give an example, we used Linux at work. We made some changes to some programs to better support our particular environment. Over time as the “upstream” versions of these programs were released, we would have to fit our changes into the new code release. Sometimes it was easy, other times it was more difficult depending on what changed in that upstream release. One day I saw someone asking about a feature on a mailing list for one of these programs and it was a feature we had actually implemented in our environment. I made a decision to provide our changes to the software developer as a “contribution” and they were adopted and incorporated into the standard package. We never had to experience that pain of patching our changes into the code after that. The “upstream” maintainer adopted the maintenance of those changes and a lot of other people benefited from the new features we added. But overall the motivation was self-interest. It was to our benefit to have someone else maintain that code and offload the job of having to merge our changes with every new code release.
So while things often get started out of curiosity, and people will often take an interest in almost a “hobby” sense, what really gets something rolling is when it actually becomes useful in a “real world” sense. And while people will sometimes gift some work out of the goodness of their hearts, often the biggest returns are from people in whose interest it is to get their changes into the broader code base than to have to hack at it every time something changes. It becomes more efficient to open the source than to keep it closed.
Same with projects here. People whose livelihood depends on accurate weather data might find it in their interest to help with the surface stations project or to share what information they have more generally. A firm in the agricultural industry, for example, might make better long term decisions if they knew that growing seasons were actually shortening or flat and not lengthening. If there is no warming, then making economic decisions based on the assumption that growing seasons will get longer in the future can cost someone a fortune. And if I am selling something and if I have the right information, nobody is going to buy it unless they also have the right information so it pays both parties (it is in their self-interest) to get accurate information out there so the producer provided the right thing and the market demands the right thing.
What is going on now with our government data is practically criminal in the economic sense. Because of politically-based bias, real economic damage is potentially being done. I am all for “open source” science models.
That’s because Norm’s exposition falls way short of what really happens!
In our atmosphere the absorption spectrum of CO2 is a very closely packed series of absorption lines (so many and so closely packed together that they look like a broad band unless viewed at high resolution). At very low pressures and temperatures (like on Mars) the individual lines are very sharp and separate; as pressure and temperature are increased to the values seen in our atmosphere, the individual lines are broadened by collisional and Doppler effects and eventually overlap each other. As a consequence the absorbance dependence changes: at very low pressures and temperatures it will be linear, at very high pressures it will be √[CO2], and in between there is a transition; for CO2 in our atmosphere it’s in the intermediate region and is best described by ln (and this can be measured).
Astronomers have used this for a long time, they term it the ‘curve of growth’, usually applied to atomic spectra in interstellar space.
BUT such materials absorb the radiation and convert it entirely to thermal energy.
Not so with water vapor or CO2 or any other GHG. Some absorption processes may convert some of the energy to heat energy; but mostly the absorbed IR photon is simply re-emitted. Lower in the atmosphere, however, the excited CO2 is usually de-excited by collisions before it can re-emit, so the energy ends up as heat in the surrounding air; as you get up into the stratosphere the situation changes and the CO2 has time to emit.
To Phil,
okay now I see what you are saying.
I was modeling the ln(280ppm or 285ppm) to be at -0.4C rather than Zero.
This whole model is based on the anomaly (from the baseline) so it crosses Zero when the particular temperature series baseline anomaly passes Zero. Now each series has a slightly different baseline and when you are comparing series you have to match up the baselines but I wanted ln(280ppm) to be at -0.4C.
Phil,
George
Bill – Thermal inertia of the climate system is pretty uncontroversial, see for example this
write-up of a paper by Meehl et al.
So the ‘physics’ explanation is that the heat largely goes into the oceans which take years to decades to warm in response, 70% of the surface is ocean and it takes around a decade for the surface layer to mix with the deep ocean ….
There is a discussion of the length of time to equilibrium in
this paper [may need a free subscription to access].
The AMO data before linear detrending is here …
but see the caveats on the NOAA page linked earlier. Hope this helps.
The logarithmic response to C02 is straightforward physics and nothing to do with any climate theory.
Put simply, CO2 absorbs light at specific wavelengths. As CO2 levels increase, more and more of the light at these wavelengths is absorbed. However, the law of diminishing returns applies in that once CO2 has absorbed some there is less left to absorb, so an increase from, say, 380 to 400 ppm has less effect than an increase from 280 to 300 ppm. It turns out this “law of diminishing returns” follows a log function.
This was covered on this website on 4/9/2008 in an article on this topic
Fine lines and CO2 absorption spectra
As I recollect, a photon interacts with a CO2 molecule by changing its excitation state, either by changing the vibration mode of the atoms or by raising an electron around one of the atoms to a higher energy level. I think IR absorption is in the latter category. Anyway, there are a very large number of possible vibration modes, and it requires different amounts of energy to move from one to the other. Each possible mode change creates an absorption line, so there are lots of them.
John Philip can you just tell us what Hansen said in 1985 in the Science article. Most of us do not have a subscription.
I note he published a temperature forecast just a few years later which did not include any ocean absorption that we can tell of since his Scenario B forecast temps are about twice as high as they are currently.
The temperature trend since 1979 indicates we can never reach the 3.25C doubling level no matter how much the oceans absorbs or how much lag time there is. It would take a thousand years.
Great analysis, Bill. Just one question. given that the temperature response to increased CO2 is logarithmic, why do we only see a warming signal post 1970 or so? One would expect a greater warming signal early in the rise of CO2, rather than later. Or am I missing something?
Well according to the official NOAA global energy budget, of the 390 W/m^2 emitted from the earth’s surface, only 40 W/m^2 escapes to space, so that means that GHGs are already absorbing 90% of the total available IR, so only 10% is left to capture no matter how much GHG gets up there.
By the way, if CO2 has such a long lifetime in the atmosphere (200 years they say), how come NOAA has plots showing that at the north pole the CO2 in the atmosphere drops 18 ppm in just five months? That doesn’t sound like it would take 200 years, or even 10 years, to remove all of it.
One other quick question; if we are already committed to gross sea level rise and temperature rise because of GHG already emitted, and the ML data certainly shows that CO2 keeps on going up unabated despite everybody’s Kyoto commitments; then.
Some people may buy Meehl’s thesis (I do agree there are thermal lags), but why would the temperature go the wrong way, when the “forcing” continues to climb in the same direction?
As to the multiple fine lines in the CO2 IR spectrum; I’m familiar with the so-called symmetrical stretch mode which is not IR active, the asymmetrical stretch mode which is IR active around 4 microns or so, and the degenerate bending mode which I believe is the 14.77 or 15 micron mode that everyone talks about (I haven’t been able to get a definitive value for what wavelength that is).
But anything involving electron levels in the atoms, as distinct from molecular vibrations, would seem to involve much higher photon energies than required for the molecular effects, so one would expect them to be visible light or shorter wavelengths.
Since CO2 is a linear molecule with no dipole moment, it would not be too active in rotation about the molecular axis, and other rotation modes say about the carbon atom and other axes, would seem to be much longer wavelengths.
But I’m eager to learn, so if someone can explain the energy level foundation for the many fine lines in the IR spectrum of CO2, I’m all ears.
To Don Keiller,
Really good question, I’m going to have to look at the rate of change here too, something I missed. I’ll post back when I can go through it all. Going by experience with this model, I’ll need to double-check everything before responding.
Bill Illis: The following link is to a google spreadsheet with the ERSST.v2 version of NINO3.4 SST and SST anomaly data. It’s the monthly data fresh out of NOAA’s NOMADS system from January 1854 to October 2008. I tried to replace the Trenberth data with it, but ran into the following problem.
I entered annualized ERSST.v2 NINO3.4 data into the Annual Temperature Anomaly Model, starting at 1871, at column C, row 44. It created a host of #VALUE errors. I thought the difference in climatology (the ERSST.v2 base years are 1971 to 2000) was putting the data out of a working range for the model, so I recalculated the anomalies based on the same base years as the Trenberth NINO3.4 data (1950-1979). That didn’t help.
Please try the ERSST.v2 data and see if it works for you.
Regards
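As an aside, re-baselining anomalies to a common climatology (here 1950-1979, to match the Trenberth NINO3.4 data) is straightforward in Python/pandas; the sketch below uses a hypothetical file and column names rather than the actual NOMADS output.

import pandas as pd

# Hypothetical monthly absolute-SST file with "date" and "sst" columns.
df = pd.read_csv("nino34_ersst_v2.csv", parse_dates=["date"])
df["month"] = df["date"].dt.month

# Monthly climatology over the 1950-1979 base period.
base = df[(df["date"].dt.year >= 1950) & (df["date"].dt.year <= 1979)]
clim = base.groupby("month")["sst"].mean()

# Anomaly = absolute SST minus the climatological mean for that calendar month.
df["anom"] = df["sst"] - df["month"].map(clim)

# Annual means for the yearly version of the model.
annual = df.groupby(df["date"].dt.year)["anom"].mean()
print(annual.tail())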
John Philip:
I would be grateful for an explanation of your statement saying;
“So the ‘physics’ explanation is that the heat largely goes into the oceans which take years to decades to warm in response”.
Please explain how liquid water absorbs heat but does not warm until some time later. My understanding is that – in the absence of latent heat exchange – any warming would be instantaneous.
This is important for two reasons.
Firstly, the oceans have been measured to be cooling over recent years and, if my understanding is correct, then the ‘thermal inertia’ you espouse is not happening.
Secondly, there is a need for large energy storage capacity to assist smoothing of electricity grid supplies. So, a mechanism that allows water to absorb heat but not warm until later would solve a major industrial problem.
Richard
evanjones (10:09:47):
First of all, don’t use ‘old’ stuff [and note: no please here]. Second, there has been some doubt as to what degree my ideas are met with general acceptance. A very new book [Sunspots and Starspots (Cambridge Astrophysics Series) by Thomas and Weiss, 2008, ISBN 978-0-521-86003-1] summarizes the ‘textbook’ consensus as follows [page 214]:
“Reliable measurements of solar irradiance extend only over the past 30 years. The success of models involving only sunspots and faculae in reproducing these measurements has encouraged researchers to attempt to reconstruct the variations in TSI over a much longer period based either on the historical sunspot record or on the proxy record from abundances of cosmogenic isotopes, or even on models of cyclic activity in the solar photosphere (e.g. Lean 2000, Froehlich and Lean 2004, Wang, Lean, and Sheeley 2005, Krivova, Balmaceda, and Solanki 2007). The upper panel of Figure 12.3 [Figure 3 of ] shows the most straightforward reconstruction, relying only on the measured correlations between sunspot numbers and irradiance since 1978 (Froehlich and Lean 2005). Other reconstructions (e.g. Lean 2000, Wang, Lean, and Sheeley 2005) differ in the inclusion or omission of an arbitrarily varying contribution from ephemeral active regions, or on the basis of a questionable difference in Ca II emission between active and inactive stars, or in assuming [emphasis added – me] that there was a long-term increase in TSI, as shown in the lower panel of Figure 12.3. In reality, since we know that cycles persisted through the Maunder Minimum (Beer, Tobias, and Weiss 1998), it seems unlikely that the average value of TSI could have dropped significantly below its level at a normal sunspot minimum.”
Third, because of TSI = a T^4, we have dTSI/TSI = 4 dT/T, so the percentage increase in temperature will be only 1/4 of the percentage increase of TSI, not twice as you have it.
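To put a rough number on that scaling, here is a back-of-the-envelope check (treating the Earth as a simple blackbody purely to illustrate the factor of 1/4, not as a full forcing calculation; the TSI and temperature values are round approximations).

# Rough illustration of dT/T = (1/4) * dTSI/TSI from TSI proportional to T^4.
TSI = 1361.0          # W/m^2, approximate total solar irradiance
T = 288.0             # K, approximate global mean surface temperature
frac = 0.001          # a 0.1% change in TSI, roughly a solar-cycle-sized variation

dT = T * frac / 4.0   # the one-quarter scaling described above
print(f"dT ~ {dT:.2f} K for a 0.1% change in TSI")   # about 0.07 K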
I notice the article included ENSO and AMO, representing effects from the Pacific and Atlantic Oceans. No factor from the Indian Ocean. Living in Melbourne, in South East Australia the Indian Ocean Dipole (IOD) has a major effect for temp and rainfall over SE Australia (on the other side of the continent to the Indian Ocean). I wonder if a complete model for oceanic effect on temperature can be completed without considering the Indian Ocean. (Cynically I could say that the AMO may have the greatest impact because it effects areas where the greatest amount of temp measurement is done.)
George E. Smith (14:57:31):
See the animation on:
I have the same questions.
Perhaps, I need to understand the interaction
H2O + CO2 → H2CO3
in the atmosphere.
This is an interesting study.
Looking at your charts, I suspect that you may have stationarity issues with your temperature data, which is common when working with time-series data. You should conduct some stationarity tests (such as Augmented Dickey-Fuller or Phillips-Perron) and make the necessary transformations to your data if required.
Regressions using non-stationary data can be very misleading, and sometimes worthless. That’s why you should do these tests to be safe.
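For what it’s worth, a minimal sketch of such a test in Python (assuming the statsmodels package is available; the series below is a random-walk stand-in for the actual anomaly data) would be:

import numpy as np
from statsmodels.tsa.stattools import adfuller

# Random-walk stand-in for an annual temperature anomaly series;
# replace with the actual data used in the regressions.
rng = np.random.default_rng(2)
temp_anom = np.cumsum(rng.normal(0.0, 0.05, 150))

# Augmented Dickey-Fuller test: a p-value above ~0.05 means a unit root
# cannot be rejected, and the series should be transformed (e.g. differenced)
# before regression to avoid spurious results.
stat, pvalue, *_ = adfuller(temp_anom, autolag="AIC")
print(f"levels:      ADF = {stat:.2f}, p-value = {pvalue:.3f}")

stat_d, pvalue_d, *_ = adfuller(np.diff(temp_anom), autolag="AIC")
print(f"differences: ADF = {stat_d:.2f}, p-value = {pvalue_d:.3f}")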
To Bob Tisdale,
I got your ERSST.v2 data in and working. I have to say google docs does make a mess of things, so we probably shouldn’t use it again until they get the bugs out.
On the good side, this data certainly works and produces a greater coefficient for the Nino 3.4 region (lagging it by 3 months seemed to work better again). The warming signal against CO2 also drops to about 1.65C per doubling.
On the downside, the r^2 falls from 0.783 to 0.745, and the errors are a little larger and/or run more consistently above or below the actuals.
But it does not look bad at all. This dataset more consistently catches the spikes for example. Here are the two charts you would want to see.
George E. Smith (13:15:16):
OK George, here goes.
The IR absorption arises because of transitions between vibrational and rotational energy levels. ( I’ve linked to some webpages below).
A molecule can vibrate and rotate but can only exist at certain energy levels; the separation between vibrational levels is much larger than between rotational levels: in the case of CO2, 667 cm-1 vs less than 1 cm-1. A CO2 molecule in the ground state can absorb radiation by jumping up one energy level in vibration (∆v=1) while at the same time staying at the same relative rotational level (Q-branch, ∆j=0), increasing one level (R-branch, ∆j=+1) or decreasing one level (P-branch, ∆j=-1). Since there are a great many rotational energy levels, there are a great many possible lines.
See here for example, down as far as isotope effects:
In the case of CO2 bending, the Q-branch is allowed.
Here’s one specifically for CO2:
I hope that helps? Also, would you please explain why you claim a sizeable cooling?
Bill: Thanks for the update. I forgot to note earlier that the output of your model appears to generate a global temperature anomaly curve that comes much closer to the instrument data than at least one high-priced GCM. It will remain nameless so not to start a battle on this thread.
Question: At what cell did you paste in the ERSST NINO data and did you use the revised NINO data starting at Jan 1871? I want to make sure I’m looking at the same spreadsheet that you are when I include it.
Thanks again.
John Philip said:
Well, that’s one possible conclusion — if you cherry pick your beginning/ending dates.
Instead, let’s look at the big picture, from the very same UAH: click
Your claim that we’re headed for a ‘highly dangerous’ rise in temps looks silly. Why continue digging, when the planet is laughing at your hubris?
Smokey – the figure I gave is a simple linear fit to all the available UAH data, from Spencer and Christy, so there can really be no question of cherry picking. Your graph, OTOH, is a highly dubious polynomial fit widely condemned as misleading on this, of all websites.
I would like to know the algorithm used to plot ‘average’ temperature vs. ‘heat’.
I can understand the logarithmic relationship between absorbance of photons and increasing concentration; but for the life of me I have never seen an expression which explains how this energy gives rise to temperature, given that we are dealing with a three phase system (ice, liquid water and vapor) and the fact that changes in energy input could manifest themselves in pressure and in expansion of the atmosphere. You could, for instance, fill a balloon or a glass sphere with CO2 and irradiate it; the steady state temperature of the balloon would be lower than the glass sphere.
Don Keiller had asked earlier if there was any change in the rate of warming over time.
The changes are actually very hard to decipher.
My model based on ln(CO2) shows a gradual increase over time (not dissimilar to the very slightly exponential growth in CO2 levels) to where it is 0.15C per decade in the 2000s.
However, the actual observation data (after adjusting for the ENSO and the AMO) shows much more variation.
There is a very slight cooling trend from about 1890 to 1915. In the 1920s, warming jumps to about 0.2C per decade, then it falls to 0. From 1933 to 1945, it jumps to about 0.25C per decade and then falls rapidly to a negative value of -0.2C per decade from 1946 to 1955. From 1955 to 1975, warming is about 0.1C per decade. But from 1975 on, there is a gradual deceleration in the warming rate so that it is very close to 0.0C per decade right now.
Complicated.
There is obviously more going on here than the model shows.
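A minimal sketch of how those sub-period rates can be computed (on an invented stand-in series; the real input would be the ENSO/AMO-adjusted anomalies from the spreadsheet) is below.

import numpy as np

# Invented stand-in for the ENSO/AMO-adjusted annual anomalies.
years = np.arange(1880, 2009)
rng = np.random.default_rng(3)
adjusted = 0.004 * (years - 1880) + rng.normal(0.0, 0.08, years.size)

def trend_per_decade(y0, y1):
    # Least-squares slope over [y0, y1], expressed in C per decade.
    mask = (years >= y0) & (years <= y1)
    return 10.0 * np.polyfit(years[mask], adjusted[mask], 1)[0]

for y0, y1 in [(1890, 1915), (1920, 1932), (1933, 1945), (1946, 1955), (1955, 1975), (1975, 2008)]:
    print(f"{y0}-{y1}: {trend_per_decade(y0, y1):+.2f} C per decade")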
To Bob Tisdale,
The data showed it started in Jan 1854 so I pasted it into April 1854 (and Jan 1854 originally which didn’t produce as good a fit.)
I did the regression model starting in 1856 and also starting in 1871. It didn’t make much difference on the starting point.
To John Philip
The UAH unadjusted temperature trend is just 0.13C per decade (not 0.17C).
The UAH data adjusted for the ENSO and the AMO produces a warming trend of just 0.03C per decade which is probably a little low but would produce no warming to worry about at all.
John Phillip:
You ask, and I answer:
Yes, I am saying that water (in the kettle or in the ocean) increases its temperature as – not after – heat is added. If you don’t believe me then try turning off the kettle before the water boils and see if it does boil.
And the water cools as – not after – it loses heat. The oceans have been cooling in recent years and, therefore, the ‘thermal inertia’ you espouse is not happening.
I repeat my question to you that you have not answered: i.e.
Please explain how liquid water absorbs heat but does not warm until some time later.
And I repeat that you could make a fortune from a mechanism that would permit water to store heat without warming because it would solve the problem of needed large energy storage capacity to assist smoothing of electricity grid supplies.
I will not answer any response to this from you other than an exposition from you of the mechanism that you suggest would permit water to store heat without warming.
Richard
John Philip:
You specifically referred to UAH:
Are you actually claiming that global temperatures are continuing to rise? Is that what’s happening on your planet?
On Earth, temperatures have fallen. Unless, of course, you still believe the “adjusted” temperatures provided by the science fiction writers at GISS.
If GISS was prepared to stand behind its clearly fictional press releases, it wouldn’t be afraid to publicly archive the raw data. Would it? But the fact that GISS adamantly refuses to disclose their taxpayer-funded raw data, or the methodology they use to ‘adjust’ the temperature record ever upward, tells people all they need to know about GISS’ probity.
Bill Illis is correct: we don’t know enough about the climate. Readjusting raw data isn’t helping the science; it’s a deliberately deceptive agenda.
@ John Philip (17:23:37) :
Just made some coffee in a kettle. Here are the temperatures:
start: 78°F
1 min: 94°F
2 min: 116°F
3 min: 133°F
4 min: 151°F
5 min: 170°F
6 min: 193°F (boiling at 6,200 feet above sea level)
7 min: 194°F (still boiling)
Made coffee.
From this experiment it would appear that warming was instantaneous (and measurable) after applying a “positive energy transfer.” Since the issue isn’t how long it takes the oceans to boil, but rather to warm up measurably, it would appear that warming of the oceans would be measurable without a significant lag. Considering the mass of the oceans, I would agree that it would take a considerable amount of time to warm them a given number of degrees, but such warming would be detectable as it took place as with the water to make my coffee. Although I can understand mathematically how there could be a lag in a given output with respect to a given input, I do not understand what real life physical process could take place (i.e. the “pipeline”) that could result in significant warming of the oceans that would not be measurable as it happened, as in making my cup of coffee. Please explain.
Phil. (19:44:42)
It seems that each instant after heat was applied the temperature rose. Looks pretty instantaneous to me.
====================================
kim (20:30:16)
Phil., on further reflection, it looks like I got your point.
===================================
To Phil and John Philip
Note we are talking about deep ocean temperatures here versus the sea surface (since the sea surface temps are already captured in the global temperature series.)
The data does show there is some warming in the deep oceans. There is a 0.1C increase in some latitudes (about 30% of the oceans) down to 1,000 metres and about 0.05C in some latitudes (about 15% of the oceans) down to 2,000 metres. So it appears there is, indeed, some warming of the deep oceans.
During the ice ages when surface temps fell by 5C, the deep oceans appear to have decreased in temp from their current 3C to about 0C. So the deep oceans are affected by the surface temps.
Water, however, is one of the strangest chemicals around. When it solidifies as ice, it gets less dense and floats. Warmer water rises to the top and colder water sinks to the bottom; if water freezes, it rises to the surface. Hence, any change in the surface sea temps takes a long, long time to influence the oceans at deeper levels. The very cold water stays at the bottom; the warmer water rises to the surface.
It almost takes a complete circulation of the oceans to make much difference at all, which can be a thousand years or more. The lag between surface temps and CO2 in the ice ages indicates we could expect 800 years for the deep ocean to be really influenced by the surface.
My issue with this is that global warmers refuse to say how much lag there is. None of Hansen’s papers that I could find (including the one linked to by John Philips) say anything about it.
More importantly, in a physical sense, the deep ocean absorption just means that more mass needs to be heated up by the increased greenhouse effect provided by increased CO2. We don’t warm to +3.25C; we only get to +1.85C, and then an equilibrium is established. The deep oceans will continue to absorb energy from the surface, and the surface will start cooling again if that energy input doesn’t continue rising. Once we get to a doubled CO2 level (and let’s say we stay there), there is no more warming to come once the deep oceans catch up.
That is my perspective on it. And I firmly believe the warmers need to be clear about this for once.
@ Bill Illis (20:36:25) :
I think an important distinction needs to be made between temperatures and heat content and which of the two is being referred to by the term “warming.”
From:
“Unlike surface air temperature by itself (that has been the main climate metric used to assess global warming), in which there is a lag between a radiative imbalance and an equilibrium temperature …, there is no lag between a radiative imbalance and the amount of Joules in the climate system.”
From:
“The concept encapsulated by the term “unrealized heating” more appropriately refers to storage of heat in a nonatmospheric reservoir (i.e., primarily the ocean), with the “realization” of the warming only occurring when heat is transferred into the atmosphere.”
and:
“Unlike temperature, at some specific level of the ocean, land, or the atmosphere, in which there is a time lag in its response to radiative forcing, there are no time lags associated with heat changes.”
The “warming in the pipeline” would seem to reflect heat storage in the oceans that later is or can be transferred to the atmosphere, thus affecting weather at the surface where human beings live. However, it does not seem as if heat from the Sun could be stored on Earth in a place where that heat cannot be measured as it is accumulated. Nor would it seem possible that sea levels could rise due to thermal expansion of the oceans with a lag of decades, if heat is being stored in the oceans themselves and later transferred to the atmosphere.
Bill – you’re right, 0.13C/decade for UAH since 1979 is correct; I was confusing UAH with the RSS analysis, apologies for the confusion. However, taking the regression analysis as published and extrapolating it forward suffers from these flaws:
– It ignores the thermal inertia of the climate system, assuming that all the forcing applied is reflected in the temperature rise already observed. This is not the case; it is uncontroversial that the heat stored in the ocean will continue to cause a rise in surface temperatures for several decades.
– It uses detrended data for the AMO regression but (nearly) raw data for the ENSO regression. The analysis should be repeated using the same source data for both oscillations, and the incorrect statement that the AMO data shows no trend should be addressed.
– It does not handle feedbacks correctly. More sophisticated models find that as water vapour increases (to choose just the most significant single feedback), the resulting greenhouse warming increases exponentially, acting to offset the logarithmically declining effects of additional CO2.
The question of the size of the lag between a radiative imbalance and the corresponding temperature increase does not have a simple single answer – some feedbacks, for example the disintegration of large ice sheets, operate on a scale of centuries. The IPCC’s figures for a doubling of CO2 use a definition of climate sensitivity that includes ‘fast’ feedbacks only, sometimes called the Charney sensitivity, which assumes that the land surface, ice sheets and atmospheric composition stay the same. See Chapter 10 of AR4 WG1.
Richard, later on in our thought experiment the kettle is switched off. The temperature as measured some distance from the heat source continues to rise for some time as the heat is distributed through the body of water. Same thing, but on a planetary scale.
To Bill Illis and to Phil:
Thank you for your points. I agree with both of you, and add the following.
Bill Illis, you suggest the stored heat that may induce ‘delayed’ AGW may be in the deep ocean. In that case,
(a) As Phil says, very little of the stored heat could return to the atmosphere until the deep ocean water returned to the surface (i.e. ~800 years after the heat entered the ocean) because there is little thermal exchange across the thermocline (and please see my comment on magnitude below). This could not be a problem worthy of consideration (at least, not worthy of consideration by our civilisation). And – in the context of this debate – it is not relevant to your model (but please see my final comment below).
(b) The lack of accelerated sea level rise in recent decades suggests that such deep ocean storage of AGW heating has not been significant.
(c) The heating from AGW of the ocean is (i) mostly direct radiant IR input that is absorbed within the top few meters of the ocean, (ii) conductive heating of the ocean surface by contact with warm air, and (iii) addition of warmer water to the ocean surface layer from precipitation, rivers and runoff from the land.
Phil, I agree with you that the list of heat transfer mechanisms of AGW into the oceans that I provide in (c) indicates transfer of heat (from AGW) to the deep ocean occurs via the ocean surface layer. Therefore, in the absence of any other known thermal transfer mechanism from the surface to the deep ocean, it has to be agreed that recent cooling of the ocean surface layer indicates the transfer of heat has not occurred recently.
However, there is a possibility that heat from AGW has been conveyed to the deep ocean because warming of the ocean surface layer occurred in previous decades. This raises the issue of how much AGW will be returned to the atmosphere when that deep ocean water returns to the surface (~800 years in the future, see above).
Richard
“Would you please explain why you claim a sizeable cooling?”
That’s all only a play with the starting point. I just did the same with the last 80 months and here we go with a sizable cooling:
Bill – another little feature of the analysis: in the spreadsheet you are using the natural log of the CO2 concentration, whereas the forcing due to an increase in CO2 is proportional to the log of the ratio of the ending to the starting concentration; it is estimated to be
Delta F = 5.3 x ln(C1/C0), where C1 is the end concentration and C0 the start concentration.
By taking the log of the total concentration you will get a curve that corresponds to the theoretical forcing measured from a baseline concentration of 1, rather than from the concentration at the start of the period!
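For anyone who wants to see what that formula gives in practice, here is a minimal sketch in Python (the 278 ppm pre-industrial baseline and the variable names are my own illustrative assumptions, not values from the spreadsheet being discussed):

import numpy as np

C0 = 278.0      # assumed starting CO2 concentration, ppm
C1 = 2.0 * C0   # a doubling

# Forcing formula quoted above: Delta F = 5.3 * ln(C1 / C0)
delta_F = 5.3 * np.log(C1 / C0)
print(round(delta_F, 2))   # about 3.67 W/m^2 for a doubling

# The raw log of the total concentration differs from the ratio form by the
# constant term 5.3 * ln(C0), i.e. the zero point of the curve is shifted.
print(round(5.3 * np.log(C1) - 5.3 * np.log(C0), 2))   # same 3.67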
Phil (22:58:19): That’s an excellent explication of a nice nuance. This is why the Jason measurements of sea level are so important. Since Argo buoys only go down about two miles, if Trenberth’s ‘extra heat’ is being stored deep in the oceans, there should still be thermal expansion of the volume of the oceans.
I particularly worry about the halt in the reported Jason data stream until very lately, and the attempt to jigger the Argo readings of the last four years back to warming. The evidence that the oceans are cooling is truly of more import than the recent atmospheric cooling.
===============================================
If I may offer a critique – I don’t know who chose it, but the image at the top of the page is, in my opinion, a poor choice. Yes, it does show an El Nino and a La Nina; however, it shows the El Nino during a PDO warm phase and the La Nina during a PDO cool phase, so apples and oranges are getting mixed together. The ENSO cycle (12-18 months or so) does not switch with the PDO cycle (20-30 years), and the image is not an accurate representation of the Pacific-wide basin during an ENSO cycle…correct?
*very* interesting write-up though.
Regards,
Jeff
Earlier John Philips had linked to the unadjusted untrended AMO data. I have put this data into the model now. It is exactly the same data I used before except it has very slight trend in it, about 0.002C per year or about 0.02C per decade GW impact potential.
It produces some interesting results.
The coefficient for the AMO goes back up to 0.75, as we have seen at other times. The r^2 falls somewhat to 0.74, but the F-statistic jumps to 738, by far the highest number I have seen.
More importantly, the warming residual left over falls to the infamous 1.24C per doubling (which is what the actual physics calculations say the number should be). Interesting.
It also allows one to see better some of the time-varying warming signals I was seeing before. There is certainly a peak and then a fall-off starting in the late 1970s, for example, and other variations at other times.
Here is the Warming modeled chart.
I will have to think about whether it is valid to use an AMO index which has a slight trend in it. The point of this regression method is to remove the natural variation from the climate. If the AMO is rising due to warming, then it can’t really be used for this purpose (although the untrended data could be). Other studies have shown that the AMO is a natural climate cycle that even has longer cycles lasting hundreds of years.
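For anyone who wants to experiment with a fit of this general kind, here is a minimal sketch in Python (the function and variable names, the use of statsmodels, and the choice of a 3-month ENSO lag are my own illustrative assumptions, not the actual spreadsheet setup):

import numpy as np
import statsmodels.api as sm

def fit_reconstruction(temp, nino34, amo, log_co2_proxy, lag_months=3):
    # Regress the monthly temperature anomaly on a lagged ENSO index, the AMO
    # index and a log-CO2 term; returns the fitted OLS results object.
    n = len(temp)
    y = temp[lag_months:]
    X = np.column_stack([
        nino34[: n - lag_months],    # ENSO index shifted forward by lag_months
        amo[lag_months:],            # contemporaneous AMO index
        log_co2_proxy[lag_months:],  # e.g. natural log of the CO2 series
    ])
    X = sm.add_constant(X)
    return sm.OLS(y, X).fit()

The coefficients, r-squared and F-statistic quoted in this thread would correspond to the fitted object’s params, rsquared and fvalue attributes for whichever input series are actually used.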
Any thoughts?
John Philip,
“In fact the warming is more likely to be exponential, due to thermal inertia and the impact of positive feedbacks, notably absent from this analysis.”
Positive feedback is a bit of a pet peeve of mine; it is a regular theme of the AGW guys. The earth clearly has a large number of negative feedback mechanisms as well, which are poorly understood. I can make that statement because, according to the ice core data, our climate would have gone over this huge evil tipping point dozens of times, and it hasn’t happened. Also, there is endless data showing that the ice has melted well beyond today’s levels in the last 6000 years, and yet there was still no massive overwhelming flood. The tipping point is a theory with no foundation in the data.
Most of the tipping point arguments are based on massive releases of CO2, melting ice, and increased CO2 release until the earth turns into whatever horror story they can make up. What Bill Illis has shown shifts the equations for the limit to the amount of warming created by CO2 to a much lower level, so even if there is a massive amount of CO2 released, the oceans are less likely to flood the earth.
When you say warming is more likely to be exponential, you are going way too far.
The quote from the Hansen 1985 paper includes “Evidence from Earth’s history (3–6) and climate models“. Since when have climate models been able to produce evidence? I always thought evidence came from measurements, not from predictions!
Phil.
That is only half of the story and therefore wrong.
While vibrationally excited CO2 transfers energy to N2 and O2 by collisions, N2 and O2 likewise transfer energy to CO2 by collisions, exciting its vibrational levels.
As we are in LTE, the rates are equal.
The Planck distribution of the excited vibrational levels of CO2 demands that the proportion of excited levels stays CONSTANT for a given temperature.
Consequence is that CO2 simply must radiate away what it absorbs because else we have no more LTE.
The proof is trivial: look at a CO2 spectrum anywhere in the troposphere.
The CO2 radiates as expected.
So it is not because the relaxation time is much longer than the mean time between collisions that CO2 only absorbs, collides and never radiates.
That’s why the quenching, which works in both directions, does NOT mean that the CO2 is “heating the atmosphere”, because it is simultaneously doing both – cooling by radiation AND heating by absorption.
In equilibrium, because of the already mentioned Planck distribution constraint, both actions are equal.
Remember, in equilibrium one always has: “Anything that is absorbed must be emitted, but not necessarily by the same molecule.”
On the other hand, the original poster you commented on got most of the QM processes wrong (like the dipole moment, energy “storage”, “saturation”, etc.).
To John Philip
There is no “pipeline”.
Thermal inertia indeed exists, but I am afraid that you did not understand what it means.
As R. Courtney says, any heating is instantaneous by definition.
Any molecule that increases its energy by absorption or collision (aka it “heats”) does so instantaneously.
And it doesn’t matter if you take them by trillions of trillions. The bulk will still heat as the sum of heated molecules, which do so instantaneously.
Those points about the interplay between sun, atmosphere, ocean surface and ocean depths are critical to the whole issue.
I have heard it said that the ocean height continues to rise despite the ocean surfaces cooling and so the global system is still warming even though SST and atmosphere seem to be cooling.
My explanation would be that when the ocean oscillations are negative, less heat energy is being released to the atmosphere, but solar input to the oceans continues, so there is an increase of energy in the oceans despite a cooling of the atmosphere, the land and so the climate. During such periods solar energy gets tucked away in the depths and is denied to the atmosphere, so that energy radiates to space faster than it is replenished by energy released from the oceans.
Conversely a period of positive oscillations causing atmospheric warming would normally involve enough release of energy from the oceans to warm the atmosphere but reduce the total energy in the system. That should result in a fall in ocean height and normally would.
However it appears that even during the 25 year warming spell from 1975 to 2000 there was nevertheless a slow increase in ocean height which appears to falsify the above BUT at the same time there was a so called grand solar maximum. Thus it is possible that the energy in the system continued to increase during the warming spell despite a full set of positive ocean oscillations. All that is needed for that to happen is for the historically high solar input (not necessarily reflected solely by TSI) to be putting more into the system than the positive oscillations are releasing.
Any consideration of multidecadal movements of energy between ocean surfaces and depths and variable and intermittent releases of that energy to the atmosphere is currently missing from the whole debate but in my view it is critical.
Combine those energy flows with solar changes over several cycles and I suspect that all the changes we have observed so far will be fully explained without involving CO2 at all.
The strong point of Illis’ study is the demonstration that a linear combination of AMO and ENSO indices can account for much of the variance seen in the standard “global temperature anomaly” compilations. The weak point is the naked presumption that the remnant is a physical “climate signal,” to which the logarithmic increments of temperature seen in vitro with rising CO2 concentrations can be applied . Given the multiple “adjustments” of actual data made by compilers such as GISS and Hadley and their failure to avoid UHI effects, any trend in their anomaly series is of dubious validity. And the surface temperature control in actual climate is provided by moist convection adjusting the vertical lapse rate in the atmosphere, not by the marginal radiative effect of trace gas concentrations.
John S says:
This brings to mind something I cannot understand with respect to AGW claims about the marginal effect of CO2.
That is, what mechanism do they propose will actually cause thermal runaway as CO2 levels increase?
They seem to hint that increasing CO2 will cause more of the outgoing LWR to be absorbed, thus heating up the atmosphere, and then? Causing more water to be evaporated and thus heating up the atmosphere more?
However, for that to occur, the extra heat/energy in the atmosphere has to be transferred to water, ie the oceans. But this would depend very much on timing, since hot air rises, and only the small portion in contact with the sea can transfer that energy by conduction. Of course, radiative transfer could also occur, but that would depend on the mean free path at the frequencies involved, and the probability that a molecule of CO2 transfers its energy to another molecule in the atmosphere before it radiates that extra energy away.
So, it would seem to me that the effect of an increase in CO2 would be to increase the convection effect that John S refers to, although only marginally.
I still ask:
–What about the other cycles? NAO, IPO, AO, AAO, and the Indian Ocean temperatures (which follow 20th century variations fairly well).
–What about the McKitrick and LaDochy papers which indicate that global temperature trends are exaggerated?
Leif: Thanks. I’ll have to be content with the 30-year measures, then.
To John Philip,
Regarding the proper log formulas, I am really modeling all the GHGs here (using CO2 as a proxy for all of them). And I am modeling the temperature response.
The 5.35 ln(CO2/CO2o) expression is the formula for the “Forcing” of CO2 only (and there are a bunch of other formulas for the other GHGs and for other forcings such as aerosols as well). In the most recent IPCC report, the coefficient for CO2 was changed from 5.35 to 5.0.
With the 5.0 or 5.35 formula, one then gets roughly a 3.45 Watts/m^2 impact from the CO2 forcing only, which, when multiplied by the estimated temperature response to a forcing of 0.75C per W/m^2, results in 2.6C per doubling of CO2 only.
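As a quick check of that arithmetic (a sketch only, using the figures quoted in this comment):

import math

forcing_2x = 5.0 * math.log(2.0)    # W/m^2 for a doubling, with the 5.0 coefficient
response = 0.75                     # C per W/m^2, the estimated temperature response
print(round(forcing_2x, 2))              # about 3.47 W/m^2
print(round(forcing_2x * response, 2))   # about 2.6 C per doubling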
I was hoping I wouldn’t have to get into this discussion because it is very messy. I fell for this trap too a couple of times when I was designing it (as Anthony Watts can attest to). There are a lot of coincidences in these numbers.
And perhaps the modelers, in trying to nail down every little impact in “forcings” (which may be based on calculations that are not correct to start with: where did the 5.35 come from, anyway?), have lost a little perspective on what they are trying to do, which is to model a Temperature response. Is the climate actually responding in “Temperatures C” rather than in “minute forcings in watts per metre squared”, the way it is supposed to?
I got quite a few “Watts” into that post.
Fernando:
“Perhaps, I need to understand the interaction
H2O + CO2 -> H2CO3 in the atmosphere”
That is a key question: that reaction is endothermic.
Tom Vonk says:?
And I ask yet again, what about the influence of other cycles?
Here are my links:
PDO:
AMO:
Arctic Oscillation:
Antarctic Oscillation:
North Atlantic Oscillation:
Indian Ocean Temp.Anomalies:
And what about the possibly spurious adjustments to the historical record?
All that might make the curve fit even better.
Richard Sharpe (10:02:53) :
Tom Vonk says: “Consequence is that CO2 simply must radiate away what it absorbs because else we have no more LTE.”
I don’t know in what context this was said, but if it’s intended to describe the situation in the earth’s troposphere it’s wrong. It should read:
Consequence is that CO2 simply must radiate away or lose by collisional transfer what it absorbs because else we have no more LTE.
In the stratosphere where CO2 is responsible for cooling and collisions are few then CO2 does predominantly lose energy by radiation.
To evanjones
I’ll try these out tonight. It would be preferable to have a series which is continuously updated with monthly data, a series which is a natural cycle unrelated to global warming and one that has a long enough time series (back to at least 1900 for example.)
Bill,
I’m impressed. I also haven’t had the time to look into it carefully. Before I attributed the gap you see in your 4th figure to rising CO2, I’d want to see what kind of gap exists just in the tropics. Then, if it is indeed localized primarily to the northern extratropics, over the Asian land mass, I cannot help but wonder how much of that is due to quality issues with respect to the surface stations in that part of the globe.
I have raised the question several times, but have never received a cogent answer: why would warming from CO2 be localized in the northern extratropics, and over land? (And, for that matter, primarily over Asia; the USA has not been warming.)
While your results are intriguing, I think you should see how robust they are to hemispheric/tropical differences in long term temperature trends. If the “warming gap” diminishes, or goes away, if you use only tropical temperature trends, how would that affect your analysis?
I’m sure you know, but you can get the HadCRUT series specifically for tropics (it is 30N-30S, rather than the more common 20N-20S, but it will do for what I’m thinking about) here:
Basil
@ kim (04:03:59) :
Jason 2 has now been handed over to NOAA:
“The handover is a major step in Jason-2 operations. NOAA will now carry out routine operations on the satellite and, by the end of November, process the operational data received by its ground stations and interface with users.”
See
To Basil,
I did do the RSS tropics and the RSS northern hemisphere. (There is a nice fit to the Tropics).
The warming in the Tropics for RSS is about 0.045C per decade which is basically the same as the global number. The Northern Hemisphere is higher at about 0.07C per decade.
I, as well, have not seen a very good explanation for why the temperature in the northern hemisphere is rising faster.
One little explanation might be that the AMO has been cycling upward since 1975. The reconstruction shows the AMO is much more important in the northern hemisphere than it is for the global temp series or for the Tropics. In the Tropics, the ENSO becomes dominant and the ENSO doesn’t really have long cycles of consistently up or down – just a lot of short sharp swings.
And thanks for the link to the regional Hadcrut3 data series. I have not used these specific breakdowns yet and will try them out.
Can someone help me understand something?
How is it that the global warming issue became a liberal vs conservative issue? I am blown away by how it sometimes seems more a political issue than a scientific one. Everyone with a political ideology has appointed himself a scientific expert on the topic. Does anyone really think he is smarter or knows more than the real experts? I say leave the scientific debate to the scientists and out of politics.
Personally I am more concerned with the truth than my beliefs. It seems that (other than people who don’t know or care about the issue) everyone has a prejudiced belief and will only pursue evidence of their viewpoint.
I have liberal leanings (though I am much too independently minded to agree with most Democratic or American liberal perspectives). I believe global warming is occurring, but not because of anything any pundit or politician has said, but because of what scientists have written. That being said, I am not afraid to look at evidence from the “opposition”, because I am so much more interested in the truth than in being right.
I am busy, so I don’t have time to read everything that interests me, but I plan on reading “Red Hot Lies” soon. I only hope that everyone, liberal or conservative, will keep an open mind and not be afraid of looking at evidence from both sides.
Moderator:
Sorry, I clicked on the wrong page before I submitted this. I reposted on the correct page. Please remove the above comment as it is not on topic here.
REPLY: It’s the holiday, not to worry – Anthony
“How is it that the global warming issue became a liberal vs conservative issue? ”
The scientists who were/are pushing the theory of global warming started pushing liberal ideology and solutions, while talking about global warming.
“I believe global warming is occurring, but not because of anything any pundit or politician has said, but because of what scientists have written.”
I don’t think I have ever read a post here that stated global warming is non-existent.
Also, your statement is vague. Do you believe man-made global warming exists? That would be one of the points of discussion.
What about the scientists who do not think man has caused the planet to warm, or who think man’s effect is small? Being scientists, does their opinion influence you?
“I only hope that everyone, liberal or conservative, will keep an open mind and not be afraid or looking at evidence from both sides.”
That would be my hope also. I would argue that you have just described the vast majority of people at this site. I don’t think anyone here believes others should be prosecuted or jailed for disagreeing with another over a scientific issue as complex as climate change.
I trust the Moderator will not post this if it is straying too far from the important point of Bill Illis’s analysis, but it concerns the natures of ‘evidence’ and ‘modelling’ (the analysis is a model).
Phillip Bratby says:
“The quote from the Hansen 1985 paper includes “Evidence from Earth’s history (3–6) and climate models“. Since when have climate models been able to produce evidence? I always thought evidence came from measurements, not from predictions!”
I agree. One of my peer review comments for the IPCC AR4 (which, not surprisingly, was ignored) said the following:
Richard
From Basil (12:16:49) :
. ”
In some ways, seeing only the Arctic heat up makes sense with respect to the liberals’ “rising-CO2-causing-rising-global-temperature” theory.
One of Hansen’s premises is that a 1x increase in CO2’s GHG function is “forcing” a 10x increase in water vapor’s GHG function. He expects to see this effect most strongly where water vapor is limited (colder climates) and where the air itself is very dry (again, colder, arctic climates). Where water vapor is higher (warmer areas and ocean/island areas) he doesn’t expect to see much GHG increase.
Thus, Hansen MUST show an increase in Arctic temperatures, even though he cannot explain why only the ten Siberian thermometers are going up, when the more numerous Canadian, Alaskan and Swedish temperatures are NOT rising.
Okay, here is the problem with the reconstruction and why it doesn’t work really, really well.
The Southern Hemisphere temps (as linked to by Basil) are not matched up by the ENSO or AMO indices. I tried the Antarctic oscillation linked to by evanjones and this gets one closer but it is still off by quite a bit.
Does anyone know of a good Southern Ocean Index to try?
The Southern Oscillation Index is more related to the El Nino and La Nina phenomenon than to southern ocean temps, despite what the name would indicate, so that one doesn’t work. In fact, the ENSO is negatively correlated with southern hemisphere temps, which is a little strange, I guess, except when one considers that the ENSO sometimes develops out of the ocean circulations from the southern oceans and, in fact, lags the southern ocean temps rather than leading them as it does global temperatures. A little counter-intuitive, but nonetheless.
Here is the problem.
Any ideas for a southern ocean index which matches this?
And Richard, I agree with you completely. Empirical evidence is king and should always overrule any theory. It is the basis for science and medicine and the reason why civilization has advanced in recent centuries. I don’t know why theory overrules evidence in the global warming field however.
John Philip (17:20:45)
Oh dear, the cherry-pick starting at the 1998 El Nino has passed the decade mark.
If you add one more year, to 132 months, then only good ol’ GISSTEMP is
going up. Of course, that El Nino was followed by a La Nina as often happens,
and that forces the trend positive.
Of course, the trend over the past two years can’t continue, but see
That period has the most recent El Nino and the PDO flip. It makes for a good cherry pick, but it does have useful information you can read between the lines.
Bill,
It would be very significant, I think, if you separate out the temperature of the tropics and the temperature of the northern extratropics, and can show that the AMO is more important to the latter, and can explain some (a lot?) of the anomalous behavior in temperature trends in the northern extratropics since the 1970’s.
Basil
Basil,
I have the tropics and the northern hemisphere pretty much nailed. The tropics is driven by the ENSO (but has an AMO influence as well) and the northern hemisphere is driven mainly by the AMO (but has an ENSO influence as well.)
Which itself is a significant development I think.
But I’m really looking to pull the natural variation out of the total global temp anomaly so it appears I need something for the southern hemisphere.
The reconstruction was already very close so I don’t need a perfect correlation, just something that is reasonably close.
Just catching up with all the discussions on this thread so I could easily have missed or misunderstood something but, with regards to the “ocean warming” ‘skin’ of liquid water. I’ve seen Doug Hoyt demonstrate this using transmission formula with appropriate absorption coefficients for typical IR wavelengths.
This raises the question (for me at least) as to how increased GHGs warm the ocean. Some AGWers seem to be arguing that, rather than warming from the direct downward IR effect, increased GHGs actually slow the rate of ocean cooling.
R. Sharpe:
c) From b) above it follows that, as the collisional reaction is an equilibrium, the CO2 molecules must radiate approximately the same energy as they absorb, because N2 can radiate only a small amount.
This is a trivial consequence of energy conservation.
I forgot the problems with the “arrow” symbols.
In the above collisional reaction it should read:
CO2* + N2 <-> CO2 + N2*, i.e. with arrows pointing in both directions.
Re Colin Aldridge (11:21:51) and Richard M (11:37:18)
Anyone seen a plausible explanation for ENSO?
Try
All the discussion of carbon dioxide and its supposed effect on surface temperature should be tempered by observation of the reaction of ozone to the strong seasonal increase in radiation from the Earth each northern summer. This is due to the distribution of land and sea with 40% of the northern hemisphere land by comparison with only 20% in the southern hemisphere. A mid year burst of OLR (continents heat the atmosphere and cloud cover declines with relative humidity) produces an increase in temperature at the tropopause (100hPa) in the southern tropics of something like 4-5°C in August every year. The pattern of seasonal change of temperature at 150hPa or 200hPa (peaking in April – May in the southern tropics) is unaffected. Conclusion: there is no effective transfer of energy from the heated tropopause down into the atmosphere immediately below. Convection cancels the effect of down welling radiation. Full exposition at:
The convection dynamic in the troposphere where temperature falls with elevation is very strong. ‘Tropos’ is Greek for turning. It is less vigorous in the stratosphere (‘stratos’ = layered) where temperature increases with altitude. Strangely, the strong effect of OLR on atmospheric temperature via ozone falls away well short of the peak ozone concentration at 30hPa. I suggest that this is due to the increasing ease of emission to space as atmospheric density declines with elevation. Thus the pattern of evolution of temperature within the year reveals the relative strength of the forces that are in operation. Here is the observational evidence of the irrelevance of greenhouse theory in the real world of atmospheric dynamics.
Bill, congratulations on what appears to be, from a non statistician’s point of view, an advance in attributing effect to cause. However, I want to point out that all of the oceans acquire heat in tropical zones and emit more energy than they receive from the sun, pole wards of 40° of latitude. The North Atlantic has a much larger swing in temperature than the other oceans but this may have something to do with the ratio between the relatively small surface area of the Atlantic in relation to the large cloud free tropical zone where the solar energy is absorbed. Additionally, the relatively closed circulation of this ocean due to the particular arrangement of the coast of Brazil in relation to the push of the warmed waters means that little of the surface circulation of warmed water is lost to the southern hemisphere where the water volume is vast and temperatures depressed by the presence of Antarctica and its constant all season downdraft of air at minus 80°C or thereabouts. So, the AMO expresses the evolution of global temperature with a vigor that is not seen in the Pacific with its vast southern component.
The tropical oceans absorb energy from the sun. A useful index of the amount of energy received might be the temperature of the water at 0-10°N latitude where the warmest waters lie. However since, after a certain point, the energy received by the ocean is resolved via evaporation rather than increase in surface temperature and much of that energy is released as latent heat at 850hPa, a useful index could be temperature at 850hPa (over the ocean) between the equator and 10°N. That might come close to expressing the power of the solar driver that works via cloud cover change in the ozone rich high pressure zones of subsiding, relatively cloud free air in the tropics. Despite there being little low altitude cloud there is a strong flux in cirrus above 500hPa depending upon local temperature. To see this mechanism in action on an hourly basis in the Pacific east of South America see Fulldisk Satellite Image from GOES8 at This blog provides a permanent reference point to current images in the header above.
There is a key assumption here, i.e.
The ENSO and the AMO are capable of explaining almost all of the natural variation in the climate.
That is, the essay assumes that correlation demonstrates causality. This is a logical fallacy; without considering the underlying physics it is impossible to conclude whether the oscillations are driving the temperature or whether the oscillations are modified by the changing temperatures, as Trenberth and Hoar (2001) found.
If this is the case then simply subtracting the ENSO index from the observed temperatures will not give the ‘real’ residual global warming signal.
From an eyeball of the first chart it seems the good correlation starts to break down in the second half of the 20th century as GHG warming becomes dominant. The model attempts to plug this gap with a formula based on a multiple of the natural log of CO2 plus an arbitrary constant, which gives a reasonable match. The expected global warming from theory is given as:
And this is expressed as the formula:
In other words the ‘regression’ simply uses lower values for the multiplier and constant.
This is not legitimate: the 0.75C/W/m2 figure is an expression of the climate sensitivity, that is, the expected temperature rise produced by a given forcing. However, the IPCC defines climate sensitivity as
the equilibrium change in the annual mean global surface temperature following a doubling of the atmospheric equivalent carbon dioxide concentration.
The key point is that there is thermal inertia in the climate system, and so the observed temperatures are only expected to match the modelled temperatures once equilibrium is reached (this is a convenient construct; in fact equilibrium is never reached, but it is a useful concept). Hansen et al (2005) finds:
Evidence from Earth’s history and climate models suggests that climate sensitivity is 0.75° ± 0.25°C per W/m2, implying that 25 to 50 years are needed for Earth’s surface temperature to reach 60% of its equilibrium response.
In other words, in the half century following a change in forcing, while the climate system is still responding, it is unsurprising that a simple logarithmic model with reduced coefficients gives a reasonable match to observed temperatures, but it is not valid simply to extrapolate this forward.
Some of the forcing goes to increasing the ocean heat content; this figure from the same paper shows how the models’ estimates of the increased OHC in the top 750m compare with observations. Any model that assumes all the extra forcing goes purely to increasing surface temperatures must explain the origin of this extra heat.
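To make the ‘thermal inertia’ point concrete, here is a minimal one-box energy-balance sketch in Python (the heat capacity, sensitivity and forcing values are my own illustrative assumptions, chosen only so the response time is on the multi-decade scale being discussed; they are not numbers taken from Hansen et al.):

import numpy as np

S = 0.75    # assumed climate sensitivity, C per (W/m^2)
C = 40.0    # assumed effective heat capacity, W yr m^-2 K^-1 (ocean-coupled)
F = 3.7     # assumed constant forcing, W/m^2 (roughly a doubling of CO2)

dt = 0.1                    # time step, years
n_steps = 1000              # 100 years
T = np.zeros(n_steps + 1)   # surface temperature anomaly, C
for i in range(n_steps):
    # dT/dt = (F - T/S) / C : the imbalance not yet radiated away goes into the reservoir
    T[i + 1] = T[i] + dt * (F - T[i] / S) / C

equilibrium = F * S
print(round(equilibrium, 2))    # about 2.8 C at equilibrium for these assumptions
for year in (10, 25, 50):
    idx = int(round(year / dt))
    print(year, round(T[idx] / equilibrium, 2))   # roughly 0.28, 0.57, 0.81

With these assumed numbers, roughly 60 percent of the equilibrium response is realised after a few decades, which is the sense in which some warming is said to be ‘in the pipeline’; whether that picture is correct is exactly what is being argued about in this thread.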
As explained above, the statement
“As well, the AMO appears to be a natural climate cycle unrelated to global warming.” is incorrect, because the dataset chosen had already had the trend removed. The trend is described as
It is exactly the same data I used before except it has very slight trend in it, about 0.002C per year or about 0.02C per decade GW impact potential.
Actually it is slightly over 0.025C/decade. Given that the data cover 15 decades, the difference over the dataset is approx 0.375C, equivalent to more than 50% of the 20th century warming. The fact that a change of this magnitude has an apparently minor impact on the model tells us something very interesting about its robustness.
Let us take a brief look at what the IPCC models, which include the ocean physics and feedbacks, actually projected for recent decades. The figures from the TAR are here. Taking Scenario A2 as a reasonable midrange choice and a good match to the actual forcings trajectory, the temperatures were projected to increase by 0.35C from the baseline year of 1990 to 2010, a linear trend of 0.175C/decade. The actual trend in the monthly HadCRUT dataset over the period since January 1990 is, to 3dp, 0.176C/decade.
Not bad.
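For anyone who wants to reproduce a trend figure of that kind, a minimal sketch (it assumes a monthly anomaly series is already loaded as a NumPy array; the variable name is a placeholder):

import numpy as np

def trend_per_decade(monthly_anomalies):
    # Ordinary least-squares trend of a monthly series, in C per decade.
    t_years = np.arange(len(monthly_anomalies)) / 12.0
    slope_per_year, _intercept = np.polyfit(t_years, monthly_anomalies, 1)
    return 10.0 * slope_per_year

# e.g. trend_per_decade(hadcrut_monthly_since_jan_1990)   # placeholder variable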
John Finn:
Thank you for correcting me. You say:
’skin’ of liquid water. I’ve seen Doug Hoyt demonstrate this using transmission formula with appropriate absorption coefficients for typical IR wavelengths.”
Yes, of course you are right.
My error was to say “direct radiant IR input” when I should have said “direct radiant IR and visible input”. Visible wavelengths penetrate to tens of meters before being absorbed, and they provide a significant energy input (i.e. heating) to the ocean surface layer.
Again, thank you for pointing out my mistake.
Richard
Bill Illis:
You ask;
“But I’m really looking to pull the natural variation out of the total global temp anomaly so it appears I need something for the southern hemisphere.”
OK, I understand what you want and I cannot provide real help. However, there is a possibility that – I think – warrants mention if – as I hope – you publish your work.
I remind that I wrote above:
). ”
And I later wrote above:
. ”
In other words, there is an apparent natural cycle with too long a period to be analysed by your method that could be expected to provide warming which was stored in the oceans during the MWP and is now being returned to the atmosphere.
Indeed, this possibility is proclaimed by those who advocate “AGW in the pipeline”.
However, as I also said;
.”
Therefore, this postulated ‘warming from the MWP’ cannot provide a temperature rise greater than the temperature rise from the Dark Age Cool Period (DACP) to the Medieval Warm Period (MWP). But, conservatively, it could be assumed to have contributed 0.2 deg.C to the observed recent Southern Hemisphere temperature rise.
I repeat that I know this fails to answer your need, but I do think it merits at least a footnote in the publication I hope you will provide.
Richard.
John Phillips says – “In other words the ‘regression’ simply uses lower values for the multiplier and constant (for the warming reconstruction). ”
No, I’m saying they ARE lower so far. Not simply uses, they are lower.
I can perhaps adjust the global warming model’s log formula to include a “Time” component which is missing so far as follows:
— 2.7 * ln(560) – 15.8 + (Another up to 1,000 years of additional not-really-well-defined-or-explained warming in the pipeline) = 3.25C
So far, the pipeline has some leaks in it.
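For anyone checking the first two terms of that formula (a sketch only; the 2.7 and 15.8 are the coefficients quoted above):

import math

residual_at_560ppm = 2.7 * math.log(560) - 15.8
print(round(residual_at_560ppm, 2))          # about 1.29 C from the regression terms
print(round(3.25 - residual_at_560ppm, 2))   # about 1.96 C left to come from the "pipeline"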
—-
And I had read the paper you linked to earlier about the AMO when it was discussed on realclimate a week or so ago.
I note the abstract states the AMO is a natural climate cycle independent of global warming.
.”
And they did forecast (so far correctly) that it would weaken in the next decades.
Wouldn’t the IR “heated” skin just evaporate?
John Philip asserts:
.”
Really? Prove it.
The assertion assumes little mixing of the ocean surface layer. Indeed, it assumes a “surface skin layer” that is “less than 1 mm” thick and is independent of the underlying water. But the entire surface layer is turbulent and it is very turbulent near the surface.
While the possibility of the assertion cannot be rejected, it is so implausible as to be worthy of rejection until supporting evidence is provided.
As a postscript, I add that I spent the first 3 years of this decade living on a boat in an attempt to quantify energy interactions at the sea surface. My endeavour was defeated – so the project was a failure – by the effects of surface ripples (n.b. ripples, not waves) that I do not think had been previously detected. Hence, having wasted those three years of my life, I am extremely sceptical of any simplistic assertions concerning energy interactions at the sea surface.
Richard
Richard,
“While the possibility of the assertion cannot be rejected, it is so implausible as to be worthy of rejection until supporting evidence is provided.”
It seems that in climate science, any assertion supporting AGW is immediately accepted as truth. The effect is then added into the models, and if actual data is found to disagree with the models, then the data is “corrected”.
Mike
Jeff Id, re feedbacks … published just last month, Dessler et al found …
Between 2003 and 2008, the global-average surface temperature of the Earth varied by 0.6C. concluded ….
(My bold).
cheers,
JP
Thanks John Philips for linking to a public version of the Dessler paper.
I note Dessler had published a study earlier which indicated water vapour response was not keeping up with climate model’s predictions (but this study did not use all of the troposphere).
In this paper (funded by NASA – GISS I presume), water vapour response was measured across the whole troposphere as temps declined from DJF 2007 to DJF 2008 (due to the La Nina and the AMO as I have been saying all along – temps declined by 0.4C, very close to the predictions of the reconstruction).
In the very lower troposphere, relative humidity decreased by 1.5% (percentage points) and it increased in the very upper troposphere by 1.5% (the middle was constant).
Given there is much more water vapour in the very lower troposphere than at the top, the study really found there was a decline in the overall weighted-average relative humidity as temperatures declined.
So how does that support global warming? Relative humidity is supposed to stay more-or-less constant.
To be fair, the models do produce results similar to this when temps are increasing, but not when they are decreasing.
These results imply a runaway greenhouse or a runaway ice planet. The models would never work if there were a decline in relative humidity as temps fell and an increase in relative humidity as temps increased.
I think the study just shows there is still a lot of variability in relative humidity that we do not understand yet.
For a discussion of the ‘ocean skin’ issue please see my article here:
I would add that natural swings in the power of the atmospheric greenhouse effect are likely to occur routinely as a result of natural changes in the total amount of water vapour in the atmosphere.
As the Earth warms for whatever natural reason the atmosphere will hold more water vapour and if it cools for whatever natural reason then the atmosphere will hold less water vapour.
Either way the natural water vapour variations will dwarf by orders of magnitude any changes in the power of the greenhouse effect that could be attributed to human CO2.
Furthermore, those natural water vapour swings have never caused us to cross a tipping point, so why should a tiny variation induced by any effect from human CO2 do so?
…it is so implausible as to be worthy of rejection until supporting evidence is provided.
This guy (who seems to understand a thing or two about ocean surface thermodynamics, judging by his publication record) did the research and it was published in
Linking thermal skin gradients at the sea-surface to the radiative coupling of the atmosphere and ocean: a mechanism for heating of the oceans by atmospheric greenhouse gases
From the abstract … The heat flux to the atmosphere is achieved through conduction through the skin layer of the ocean, within which a temperature gradient exists, so that the interfacial temperature of the ocean is cooler than the bulk temperature below. The thickness of the conductive skin layer is of comparable size to the emission (and absorption) depth of infrared radiation in water. The differences in the skin SST and the subsurface bulk temperature are typically a few tenths of a degree, an amount that is important in terms of attempting to detect oceanic warming caused by climate change. Given that the ocean absorbs the infrared radiation emitted by the atmosphere, including by greenhouse gases, within the radiative skin layer, concern has been expressed about how the increasing levels of greenhouse gases can heat the ocean. However, the skin temperature gradient is believed to be responsive to the intensity of the incident infrared radiation at the surface, and this modulates the heat flow from ocean to atmosphere. Empirical evidence to support this hypothesis will be presented, based on measurements taken at sea using the Marine-Atmospheric Emitted Radiance Interferometer (M-AERI).
Cheers,
This is a great analysis. My only issue with it is the apparent assumption of no cross biases for unevaluated parameters.
For example, suggesting that the remaining trend may have some small but basically minimal room for influence on the temps is a bit misleading. It may well be true that there is little additional explanation in the temperature variations or trends beyond what’s been looked at, simply because ENSO and AMO are proxy measures for these other things, but that is different from suggesting a lack of solar influence. To the extent that the ENSO and AMO occur precisely from other influences, they are really a proxy measure for something else. It would be similar to saying that age as an insurance rating parameter isn’t really the risk factor, but is a proxy for experience.
This doesn’t necessarily change the analysis at all, but it may shift the question. Does the sun influence the El Nino and/or AMO cycle? And how much? Is the relative magnitude of El Nino affected by GHGs?
I am also curious as to why the PDO is not considered. Even with a good fit, I’m just uncomfortable with the assumption that global temps so easily boil down to three parameters. I personally suspect that some of the attribution currently given to GHGs in the analysis may yet be overstated if other things were considered.
It may seem like I’m being too critical, I suppose. I don’t mean to be. All in all, I find it to be a very intriguing study.
[snip] John stop that, nothing was claimed – Anthony
Moderator – Fair enough, however FYI the first line of the article linked to and authored by Stephen Wilde states
“Stephen Wilde has been a Fellow of the Royal Meteorological Society since 1968.” However, I counted half a dozen factual errors in the first few paras of the piece, which I thought odd for someone claiming to be an FRMetS.
The RMS lists those entitled to use the FRMetS title on its website
There is no Stephen Wilde listed. It also lists the requirements ….
which contrasts with Mr Wilde’s profile, which states he runs a law firm and follows meteorology in his spare time.
Now there may be some innocent explanation, misunderstanding or administrative error, but given the recent post about Karl’s phantom doctorate perhaps we could ask Mr Wilde to explain, as it appears to be a prima facie case of someone impersonating a meteorologist
;-)
REPLY: My initial issue was with claims (or lack thereof) made in comments, however it appears that you aren’t the first one to notice this regarding articles outside of this blog.
See
I agree that he is not on the list. But let’s hear why. Perhaps there is a valid explanation.
Mr. Wilde, what say you?
Richard S Courtney says:
Doesn’t that then provide a mechanism for the transfer of energy (heat) from the surface layer to lower layers, and thus reduce the opportunity for its removal via evaporation?
Tom Vonk says:
Does this mean that at any one instant, only 5% of the CO2 molecules are able to heat the atmosphere by transferring energy to other species?
I’ve made the position quite clear elsewhere.
There is a small error in that I was a student member from 1968 to 1971 but I have been a Fellow since then.
I cannot use the letters FRMetS since a rule change in 1973, but I can continue to refer to myself as a Fellow, as a courtesy title, within the rules of the RMetS.
I make my position as an amateur enthusiast perfectly clear in the Contributors section at CO2sceptics.com and in my introduction to my part of the forum there.
Hmmmm … it seems the RMS rules changed in 2003, the new stringent requirements were introduced, and Fellows elected before that date may continue to describe themselves as such purely as a courtesy title, but may not use the formal FRMetS appellation as a sign of professional competence.
Still, most people are not aware of this distinction and would be bound to conclude that someone describing themselves as a Fellow of the RMS was indeed an FRMetS and a professional meteorologist, rather than an enthusiastic amateur.
Incidentally, the Code of Conduct for Fellows (FRMetS) states they must use the name of the Society only when duly authorised.
Whoops, I meant a rule change in 2003.
As regards my article it is expressed to be a discussion piece, not a definitive exposition.
It provides a starting point from which lay readers can consider the facts and relate them to different theories.
The points I raised have never been answered to my satisfaction and the ocean skin theory remains unproven on a global scale.
Even if it happens the scale of the phenomenon may be insignificant in the face of natural forces.
The ocean skin theory is at present merely a helpful speculation for the AGW lobby which has problems convincing anyone that slightly warmer air can heat oceans on a meaningful time scale.
I take it as a compliment that some consider my article good enough to justify attacking me on personal grounds.
Mr Wilde – Thanks for clearing that up, as I said, it was quite possibly a misunderstanding – which turns out to be the case. Most people will find it a little odd that someone with no professional meteorological qualifications, publications or experience, and who is ineligible to use the title FRMetS can legitimately describe themselves as a Fellow of the Royal Meteorological Society, but I accept that this is indeed the case.
Anyway, this is drifting both off-topic and ad hominem. As it is meant as a ‘discussion piece’ I may add a few thoughts and corrections later if I get time, but probably not here. …
JP (B.Sc) ;-)
John Philip,
Of course most people would not be aware of the distinction which is why the first sentence of my first article made it clear that my Fellowship predated the requirement for a professional qualification.
Please do be more careful in jumping to conclusions.
By now my true status is widely known and unlikely to mislead anyone who is interested enough in my stuff to actually read it.
After examining the AMO index issues more thoroughly, I have decided it is still valid to use the AMO index as a natural climate variable which is not related to global warming.
It is apparent the untrended index should be used, however, given the AGW community would not accept using the raw untrended data (given it does have a trend.)
Earlier in developing this model, I had downloaded a long-term AMO reconstruction by Stephen Gray et al which goes back to 1572. I decided not to use it since it is just annual data and has greater variability than the current AMO index method. There are a few inconsistencies in the time periods when the indices overlap, but they agree on the general up and down swings.
Here it is.
This reconstruction shows that the AMO appears to be a natural climate cycle that even has greater swings in temperatures in the past than the current index shows. Some of these swings match up with the climate changes we know about in history such as the onset of the Little Ice Age for example.
Here is what the Raw Untrended AMO Index looks like.
While it does have what seems to be a pretty rapid trend upward, some of this is just a result of the scale and the time period covered. The trend upward over the past 140 years would not be inconsistent with what is seen in the longer reconstruction.
The increase is only 0.024C per decade and, in terms of the regressed coefficient for the AMO, it would have just a 0.018C per decade GW impact (the models predict about 10 times as much).
Other studies conclude the AMO is a natural cycle, so therefore, I believe it can continue to be used.
Furthermore, it clearly impacts monthly temperatures, so any reconstruction should use it. The untrended data does not contain a global warming signal. What may be a completely natural increase in the AMO over the past 140 years can be left for the global warming residual.
Oh, and by the way…
The ocean skin theory is at present merely a helpful speculation for the AGW lobby which has problems convincing anyone that slightly warmer air can heat oceans on a meaningful time scale.
is not an accurate description of the process. The heating occurs as a result of solar irradiance penetrating the surface; the IR radiation reduces the temperature gradient at the surface layer, slowing the release of that heat back to the atmosphere. In fact, on average the ocean surface is warmer than the air above, so there is no proposal that the ocean warming is a result of heating from ‘slightly warmer air’.
HTH
I kept saying “Raw Untrended” in the above and that should say “Raw Trended”.
Richard Sharpe:
You ask me:
Richard S Courtney says:
“The assertion assumes little mixing of the ocean surface layer. Indeed, it assumes a “surface skin layer” that is “less than 1 mm” thick and is independent of the underlying water. But the entire surface layer is turbulent and it is very turbulent near the surface.”
Doesn’t that then provide a mechanism for the transfer of energy (heat) from the surface layer to lower layers, and thus reduce the opportunity for its removal via evaporation?”
I agree, it must do that. However, I made no mention of evaporation because I think it is not pertinent to the assertion that was made.
I was addressing a specific assertion of John Philip; viz.
.”
My statement that you quoted says that such a “surface skin layer” is very unlikely to exist and, therefore, the hypothesis of that “surface skin layer” inhibiting upward loss of “sunlight absorbion” (sic) is very unlikely.
And I concluded from this saying:
“While the possibility of the assertion cannot be rejected, it is so implausible as to be worthy of rejection until supporting evidence is provided.”
(Since then no such supporting evidence has been provided in this discussion, although an “argument from authority” was attempted.)
We could discuss evaporation elsewhere if that were desired. (Evaporation provides a much greater thermal transport from ocean surface than IR which is involved in the GH effect). However, such discussion would be a distraction from the important assessment of Bill Illis’s analysis which is the subject of this debate.
Richard
“is not an accurate description of the process. The heating occurs as a result of solar irradiance penetrating the surface; the IR radiation reduces the temperature gradient at the surface layer, slowing the release of that heat back to the atmosphere.”
JP
what about the mixing? or if there’s no mixing – evaporation?
Though we’ve been talking about 1 mm penetration, isn’t the true value more like 1/20 mm, i.e. ~99% is absorbed within 0.05 mm?
I too find the AGW explanation for ocean warming (or lack of cooling) totally implausible – and certainly within the timeframe of decades.
The question of physical process and causality was my concern with this work; after all, you seem to be saying that global temperature can be expressed predominantly as the sum of two localised temperature measurements. I don’t think this ought to be a great surprise (given the accepted effect on weather that ENSO and AMO have).
After reflecting a little, I now wonder whether, given the presence of this coupling, any model which does not incorporate the coupling can be relied upon to demonstrate the underlying forcing effect?
Sean
To Sean
Underlying the ENSO and the AMO is the thermohaline ocean circulation.
The AMO is where the once warm ocean water sinks to the depths and becomes part of the deep ocean. The energy transfer between the ocean and the atmosphere as a result of this process creates the forcing/temperature change – sometimes warm, sometimes less warm.
The ENSO is not normally thought to be a node of the ocean circulation but has similar characteristics in that deeper and colder ocean water is sometimes brought to the surface when the Trade Winds are stronger than normal for an extended period of time. Colder water wells up and there again is an energy exchange. When the Trades slow down for an extended period of time, there is less up-welling and more surface heating from the Sun in the Nino region and again there is then a warm energy exchange rather than a cooling energy exchange.
Now if there were an ocean circulation index, we wouldn’t need the two measures.
John Finn
says,
“JP
what about the mixing? or if there’s no mixing – evaporation?
Though we’ve been talking about 1 mm penetration, isn’t the true value more like 1/20 mm, i.e. ~99% is absorbed within 0.05 mm?”
If the surface skin, which absorbs the downwelling radiation, is mixed into the ocean below, then the bulk of the ocean below is certainly being heated by the downwelling radiation.
Actual measurements have been made supporting this specific mechanism by which the downwelling radiation absorbed in the surface skin suppresses the transmission of heat from the ocean bulk to the surface. The temperature gradient between the surface and 5 cm below the surface has been shown to depend on the difference between downwelling and upwelling radiation.
You might want to revise your belief that the surface skin mechanism is not working on the basis of real data.
For those unconvinced about the existence of the ocean surface skin layer and its role in the increase in OHC, here’s another blatant appeal to authority, NASA this time. It’s got figures and charts and references and everything …
Over the surface of the ocean, there frequently exists a very thin layer called the surface skin layer in remote sensing sciences (Schluessel et al., 1990) (Figure 2). Above and below the thin skin layer, turbulent eddy fluxes enhance heat flux in the ocean and/or atmosphere across the interface. However, …
John Philip:
The discussion of a hypothetical ‘ocean skin’ is a distraction from the purpose of this debate; viz. the analysis by Bill Illis.
Believe in the existence of this mythical ‘skin’.
I have had my say in this distraction concerning the hypothetical ‘ocean skin’ and will say no more on it whatever ‘hooks’ are dangled.
Richard
Bill, I don’t have a southern Ocean index, but one of those links is to the Indian Ocean temps, monthly, going back well before 1900.
It’s not a IO dipole index, though, just temperature anomaly.
John,
The paper you referenced is AGW extremism based on pre-existing AGW work. It is an on-topic reference for your point, but it is a weak paper because of its simplified method for calculating the water feedback and its overreaching conclusion. Here’s a quote which rubbed me the wrong way.
“We use a conventional definition of the strength of the water-vapor feedback: … Soden et al. [2008] provide pre-computed values of ∂R/∂q(x, y, z). We then multiply ∂R/∂q(x, y, z) by the observed Δq(x, y, z)/ΔTs between two climate states and then sum over latitude, longitude, and altitude to obtain an estimate of λq. Soden et al. also provide ∂R/∂q(x, y, z) broken down into longwave (LW) and shortwave (SW) components, allowing us to separately compute the LW and SW water-vapor feedbacks, λq,LW and λq,SW.”
They use previous estimates to calculate the water vapor feedback. These calcs are based on further estimates of the feedback mechanisms for water. Again, I don’t make the claim that AGW is false, just that this kind of work does not help.
The paper itself references numbers from 0.94 to 2.69 W/m^2. This doesn’t do a good job supporting your claims of exponential, out-of-control warming, and I think scientists would do well to examine our real knowledge of historic temperatures before making such claims.
We know for certain that people and plants that lived only a few thousand years ago are being uncovered from glaciers, yet there is no evidence of floods. We also have the distinct possibility that temps “globally” may have spiked above today’s temps only 1000 years ago. Again, there was no major flood which would indicate exponential temperature rise!
Where I have my problem with this study is in the reliance on simplistic equations, which are fine in themselves but are followed by unreasonable conclusions. The authors give their motives away completely with this over-conclusion:
“The existence of a strong and positive water-vapor feedback means that projected business-as-usual greenhouse-gas emissions over the next century are virtually guaranteed to produce warming of several degrees Celsius. The only way that will not happen is if a strong, negative, and currently unknown feedback is discovered somewhere in our climate system.”
The feedback magnitude is clearly not correct, as demonstrated by Bill Illis’s work above amongst other things. But the real evidence should be the multiple climate reconstructions, such as Craig Loehle’s, which use reasonable methods and demonstrate significantly warmer temps only 1000 years ago with no great flood.
My final point is NOT that you cannot be right, you might be. But rather that you cannot claim you are correct with the current state of science.
We just don’t know!
It is an amazing thing in this science that so many scientists make overreaching conclusions from their calculations. Such simple stuff too: Bill removed known factors from GISS and concluded that the rest is CO2 warming (without presenting evidence). Not that you are wrong, Bill, but no evidence was presented to support this conclusion. I am very pleased with the rest of the work; you are on the right track.
Mann 08 makes a complete disaster (IMHO intentionally) of the math in their paper and makes the conclusion that we are warmer than ever.
Dessler 2008 does simplistic calcs and determines exponential growth is a done deal yet it is unsupported by any historic temperature measurement.
Will it ever stop?
How backward is this science? Here is a quote from CA, from the Esper 2003 paper:
.”
Really gorgeous quote I think.
If we don’t know history, we don’t know the future.
Richard Courtney: “4.From (3) it can be deduced that gazelles leap in response to the presence of a predator.”
What you are describing is induction, not deduction. In this instance you have argued from the particular to the general.
“Gazelles are observed to always leap when a predator is near” is a particular observation, or observations, that is, limited to a finite set.
“Gazelles leap in response to the presence of a predator” is a general conclusion. Therefore, the argument is inductive. A deductive argument would be:
– Gazelles leap in response to the presence of a predator
– This animal leaps in response to the presence of a predator
– This animal is a gazelle.
Scientific hypotheses/theories are deductive. A general case is stated, then particular observations/tests made for or against the general claim. These observations/tests are evidence. Climate models can act as evidence because they test the theory.
To Richard Courtney
Apologies for wasting your time with peer-reviewed papers and irrelevancies about Fellows of learned societies who turn out to be amateurs, and so forth.
But perhaps I could crave your indulgence for just one more moment and ask you to remind us – what was the title of your PhD thesis?
Thanks.
Brendan H says:
That would be the case if climate models faithfully implemented the theory; however, that seems to be far from the case.
They contain, as far as I can tell, all sorts of ad-hoc forcings to get them to conform to the actual temperature record.
Who was it who said, “Give me five parameters and I can model an elephant”?
Moderator –
WUWT recently saw fit to post about Tom Karl’s honorary doctorate.
[snip]
John, while I agree with you in principle, you’ve missed an important distinction. People such as Karl, public servants who abuse such titles, have no expectation of privacy by virtue of their public employment.
Private citizens do have an expectation of privacy.
I will not allow you to turn this blog into a PERSONAL LEGAL LIABILITY FOR ME by posting such things as your personal opinion. I do not have time to verify such things, for all I know the letter could be fabricated. By allowing you to post such things on my blog the liability shifts to me.
Cease and desist or be banned. Your welcome is just about worn out. No dissent, no further discussion, just stop. Not one more peep from you on this issue.
– Anthony Watts
Brendan H:
Is that why climate alarmists won’t defend their ‘runaway global warming/CO2/AGW is gonna getcha’ hypothesis in a formal, moderated debate?
Or are they, like, too busy modeling to defend their [repeatedly falsified] AGW hypothesis?
Because the challenge to formally debate AGW in a neutral setting has been out there for a lo-o-o-ng time now.
What are they afraid of?
…. 3 H2O → (H2O)3 trimer
…. 2 CO2 → (CO2)2 dimer
…. H2O + CO2 → H2CO3
…. H2CO3 → H+ + HCO3^-
…. H2CO3 → 2 H+ + CO3^2-
I can get more exotic structures:
…. 3 H2O + 2 CO2 → (H2O)3(CO2)2
I can imagine any structure at 4 ºC and a pressure of 100 atm (in the deep ocean).
FM
To evanjones,
I put the Indian Ocean SST index into the reconstruction and it certainly helps with the southern hemisphere reconstruction and it also helps with the overall global temp reconstruction.
Three problems, however. First, the data ends in 2004 and doesn’t seem to be updated anymore. Second, I tried other Indian Ocean indices, including the dipole, and these other ones don’t provide an improved reconstruction. Third, the Indian Ocean SST index covers the whole Indian Ocean, and I don’t want to use indices that cover really large sections of the oceans – the complete ocean index would be the best reconstruction of course, but then it would be like trying to reconstruct a dataset when you are already using 70% of the dataset as your independent variable. It is one of the reasons I used just the Nino 3.4 region.
So, back to the drawing board again.
I’ve been thinking about the question Sean Houlihane asked and what is the underlying forcing (or let’s say rationale) with the ENSO and the AMO which makes them good choices to reconstruct climate variations.
And the answer really is that these two regions are the most active regions where the Oceans are exchanging energy (heat and cold) with the Atmosphere. There is far, far more energy being transferred back and forth in these two regions than any other …
… Except for the third big region which is the opposite of the AMO in the southern hemisphere and that is the downwelling region in the Weddell Sea off Antarctica.
So this is the missing piece of the puzzle. No index for this region however. Any ideas by anyone? Bob Tisdale or David Smith still around?
Richard Sharpe: “They contain, as far as I can tell, all sorts of ad-hoc forcings to get them to conform to the actual temperature record.”
Sorry, that’s above my pay grade, but if you think you have a case, write it up and see how it runs.
Smokey: “Because the challenge to formally debate AGW in a neutral setting has been out there for a lo-o-o-ng time now.”
A while back I watched a television debate between warmers and sceptics. My local newspaper features pro and con AGW views when the issue arises. There’s plenty of debate across all sorts of media, from scientific journals to the MSM to internet venues such as this one.
But ultimately it’s the scientists who will decide for or against AGW, so the scientific journals and related media are the ultimate arbiter of the facts about human-induced climate change.
Richard S Courtney (15:01:43) :
Says,
” John Philip:
The discussion of a hypothetical ‘ocean skin’ is a distraction from the purpose of this debate; viz. the analysis by Bill Illis.
Believe in the existence of this mythical ’skin’.”
You are wrong about that. The basis is actual experiments. In my post there is a link to a graph,
which shows the temperature difference between the surface of the ocean, and 5cm below the surface, to demonstrate how the flow of heat toward the surface of the ocean from below responds to the IR radiation balance at the surface.
” I have had my say in this distraction concerning the hypothetical ‘ocean skin’ and will say no more on it whatever ‘hooks’ are dangled.”
That is OK. Then I have had the last word.
Richard
TomVonk (01:32:30) : .
That’s the problem, Tom: you’re discussing a physical abstraction rather than a real atmosphere. Your circular argument says “it’s in LTE (your initial assumption), therefore the atmosphere can’t heat up”. The point is that the atmosphere does heat up, therefore your assumption in your model is not valid!
Richard Sharpe
I believe that was von Neumann.
Tom Vonk, Phil
If I understand the argument, the truth is somewhere between the two sides:
If you increase the CO2 concentration, the radiation balance of Tom Vonk will be temporarily disturbed such that the air heats up, until a new, higher steady-state temperature is reached where the radiation balance is once again restored.
So, in many ways, both are correct: the radiation balance is usually valid, but can be temporarily disturbed, so as to attain a new steady-state temperature.
To all posters: This has been a great debate – it has brought out the best in the bunch as far as technical debate goes. Both sides have done a good job presenting their points …. without either side clearly delivering a death blow to the other side. ….. which is of course the basic point of the skeptic community – the science is not even close to settled, & solid scientific research & debate (vs political debate) needs to continue. Why are skeptics skeptical? Because they know enough about the science to know there is a lot we don’t know & aren’t arrogant enough to pretend that we do know.
Smokey (17:24:06) : asks : What are they afraid of?
A debate exactly like this one is what they are afraid of – because when it all comes out, what is clear is we really don’t have all the answers & the science isn’t settled – and that is a very hard position on which to sell major sacrifices to the general public.
Along those same lines, if there are any journalist types out there lurking – who are objectively looking at this – please report on this specific debate because the public deserves to know what’s really going on behind the scenes.
Norm posted the following earlier; however, because Norm is a geophysicist with some experience, his opinion was felt by some to be authoritative. Here’s some data to refute his handwaving.
Here it is for earth conditions:
Here are the spectra for a single line at 380ppm and 760ppm, note the absorbance increases (transmission decreases).
This is why the CO2 notch is virtually identical in the two spectra; the CO2 band was virtually saturated at the 325ppmv concentration level, so even nine times more CO2 has almost no appreciable effect.
Norm K.
Here’s the spectral line for Martian conditions:
A bit more than “almost no appreciable effect”!
Just one more thought on that ocean skin.
If the water skin is warmed up so as to slow down the release of heat energy from ocean to atmosphere, will that not reduce the heat energy available to the atmosphere, which will then cool down so as to terminate the process and reinstate the ‘normal’ flow of energy from ocean to atmosphere to space?
I think this is relevant to the initial article from Bill because it deals with processes that may affect the data used there.
Brendan H:
You assert:
“Scientific hypotheses/theories are deductive. A general case is stated, then particular observations/tests made for or against the general claim. These observations/tests are evidence. Climate models can act as evidence because they test the theory.”
Sorry, but No!
Climate models describe the theory: they do not “test” it.
Comparison of the model’s output with empirical data is a test of the theory. If the output and the data do not agree then this indicates
(a) the theory is incorrect,
Or
(b) the description of the theory (i.e. the model) is incorrect
Or
(c) both the theory and the description of the theory are incorrect.
These indications remain true until the data is shown to be wrong.
A major problem with climatology is modelers who seem to think ‘models can act as evidence because they test the theory’. Such a thought is tantamount to a claim that the climate does what a climate model says.
A theory is an idea, a model is a representation of an idea, and reality is something else.
Richard
I feel proud to be in the company of lay-persons here. Sorry to post so late. Bill Illis, Fellow of the Royal Society of Amateurs (those who do it for the love of it); Willis Eschenbach, another FRSA who has been demolishing the shiny new hockeystick; Jeff Id, FRSA (if I remember right) who has demolished the very mathematical basis of all hockeystick lookalikes; Stephen Wilde FRSA; who else? I’m another layperson, another blog-contributor noob; the thing I couldn’t find was an adequate primer on Real Climate Science for my needs, so I studied the science, wrote my own, and try to keep on improving its clarity for lay readers as well as its scientific adequacy.
Jeff L (26/11, 06:43:41) : “This little exercise here is a good example of collaborative science – not unlike the concept behind linux. As a community, there should be some consideration of a way to formalize this concept”.
I’ve recently had thoughts, again, along these lines, and if Anthony would like, I would be happy to draft an article for this blog. But meanwhile, to catch bright ideas now, I’ve set up a thread on our forum here. Jeff L please get in touch if this speaks to you!
Bill Illis (10:59:02) :
Thanks for your hard work, and for the trended AMO plot.
Just to confirm, is this from the link John Philip (13:53:42) posted corrected for “climo”?
Would be great to add that link to the “Resources” page. I’ll post it over on the comments of that page if you can confirm.
Thanks again.
Okay, I have solved my problem with modelling the Southern Hemisphere temperatures and have a better global temperature reconstruction now.
Based on my thought from above that there is a third active region where the Oceans are exchanging energy with the atmosphere, the Antarctic Downwelling region, about where the Weddell Sea is, I have created a new index from the Smith and Reynolds ocean SST dataset. This region is effectively the southern version of the AMO where the warmer ocean water cools and sinks to become part of the deep ocean circulation.
Bob Tisdale always uses this dataset, so I thought I would as well. It goes back to 1854 on a monthly basis and is updated to October 2008. I downloaded the monthly anomalies for this box which is the Antarctic downwelling region. It has similar characteristics to the AMO with some longer cycles but only a +/-0.6C variation.
The data then provides a pretty good reconstruction for the southern hemisphere – not perfect but certainly covering the changes. The Nino region now ceases to provide any info for this reconstruction (but it is at least not negatively correlated as it was before). The AMO stays significant (coefficient is 0.216) and the Antarctic DownWelling region is 0.545.
Southern Hemisphere
Putting this new index into the global reconstruction results in an overall better model in my opinion but the r^2 falls to 0.724. The Nino coefficient rises now to 0.07 (providing +/- 0.2C to the reconstruction), the AMO coefficient rises to 0.59 (providing +/- 0.36C to the reconstruction) and the ANT DW coefficient is 0.36 (providing up to +/- 0.2C to the reconstruction).
Global Temp Reconstruction
It’s a little hard to see what is going on here because the Red Line model is covering up the Blue Line Hadcrut3 temp anomaly for lots of the time period – which would be the goal, I guess.
Global Warming now falls to 1.35C per doubling and there is a better match to the residual over the record.
Any thoughts?
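For readers who want to try something like this themselves, here is a minimal sketch of the kind of regression being described: monthly HadCRUT3 anomalies regressed on a 3-month-lagged Nino 3.4 index, the AMO, an Antarctic downwelling index and a greenhouse term. The file name, the column names and the log2(CO2) greenhouse term are placeholders and assumptions on my part, not Bill's actual spreadsheet.

```python
# Minimal sketch of the reconstruction described above (placeholder file and column names).
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly file with columns: date, hadcrut3, nino34, amo, ant_dw, co2_ppm
df = pd.read_csv("monthly_indices.csv", parse_dates=["date"], index_col="date")

X = pd.DataFrame({
    "nino34_lag3": df["nino34"].shift(3),        # ENSO enters with a 3-month lag
    "amo": df["amo"],                            # raw trended AMO index
    "ant_dw": df["ant_dw"],                      # Antarctic downwelling (Weddell Sea) index
    "log2_co2": np.log2(df["co2_ppm"] / 280.0),  # assumed greenhouse term: doublings of CO2
})
X = sm.add_constant(X).dropna()
y = df["hadcrut3"].loc[X.index]

model = sm.OLS(y, X).fit()
print(model.params)       # compare with the 0.07 / 0.59 / 0.36 coefficients quoted above
print(model.rsquared)     # compare with the r^2 of about 0.72 quoted above
```

On this setup the coefficient on the log2_co2 column can be read directly as the implied warming per CO2 doubling, which is how a figure like 1.35C per doubling would fall out of such a fit.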
To John M,
Yes, the raw trended AMO data is from John Philip’s link. It comes from the exact same dataset and page I was using.
The description of the data on that page is a little clumsy and you can’t really tell it is the raw data, which is why I didn’t look at it before.
Phil,
I was really keen to try to understand what you are saying, but I found two problems:
1. I found it hard to differentiate between what you wrote and what you were quoting,
2. The links you provided do not work.
Can you try to use <blockquote> and </blockquote> around material you are quoting?
Bill Illis:
You ask:
“Global Warming now falls to 1.35C per doubling and there is a better match to the residual over the record.
Any thoughts?”
I offer a few.
Firstly, congratulations. This is a remarkable analysis that deserves publication.
At face value, the analysis could be accused of assuming ‘correlation indicates causation’, but it is not guilty of that and any such accusation should be rejected.
The analysis removes known natural effects from the time series to reveal a residual trend. Of course, there could be other natural effects that may be contributing to the time series. And one assumption of the analysis is that all natural effects are providing a positive contribution to the trend; this assumption may not be correct: for example, volcanism lowers temperatures (at least temporarily).
However, it can be said that the residual trend of your analysis shows the warming that has happened independent of AMO, ENSO and Antarctic Downwelling.
And it can be assumed that this residual indicates a maximum of the warming that may have happened as a result of AGW over the analysed time period. Using this assumption the analysis suggests that climate sensitivity has a maximum value of 1.35 deg.C for a doubling of atmospheric carbon dioxide.
The obtained maximum climate sensitivity of 1.35 deg.C is less than half the IPCC “best estimate” of 3 deg.C for a doubling of atmospheric carbon dioxide, and well below the range of the IPCC AR4 estimates (2 to 4.5 deg.C) for a doubling of atmospheric carbon dioxide. Indeed, 1.35 deg.C is below 1.5 deg.C, and the AR4 says the climate sensitivity is “very unlikely” to be below 1.5 deg.C for a doubling of atmospheric carbon dioxide.
But, of course, 1.35 deg.C is more than three times the 0.4 deg.C that Sherwood Idso obtained from his 8 “natural experiments” to determine the climate sensitivity for a doubling of atmospheric carbon dioxide.
A very fine piece of work and I look forward to seeing it in print.
Richard
To Richard
Just a short comment about volcanoes. The large ones clearly impact temperatures but for whatever reason, the impact is picked up by the ocean indices I am using.
I had charted these before (linked below) with the previous model and have now zoomed into the specific periods in question with the newer one as well and I can’t see there is an adjustment required for the large volcanoes.
Bill,
As you may know, I’ve done a lot of analysis of the HadCRUT3 series from the standpoint of its spectral characteristics. It would be interesting to examine how well your model preserves the spectral characteristics of the raw data. If you are interested, let’s take this up in email. I’ll send you what I’ve done, and tell you what I’d need from you to do a comparable spectrum analysis.
If interested, email me at blcjr2 at gmail dot com, and I’ll respond.
Basil
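For concreteness, here is a minimal sketch of one simple way to do such a comparison, using a basic periodogram; Basil's actual method may well differ, and the series below is stand-in data, not HadCRUT3 or Bill's reconstruction.

```python
# Sketch of a spectral comparison: compute the periodogram of the observed monthly series and
# of the reconstruction with the same function, then compare where the power is concentrated.
import numpy as np
from scipy.signal import periodogram

def monthly_spectrum(series, fs=12.0):
    """Periodogram of a monthly anomaly series; fs=12 gives frequencies in cycles per year."""
    x = np.asarray(series, dtype=float)
    return periodogram(x - x.mean(), fs=fs)

# Illustrative stand-in: an 11-year cycle plus noise (not real temperature data).
months = np.arange(12 * 140)
fake_series = (0.1 * np.sin(2 * np.pi * months / (11 * 12))
               + 0.05 * np.random.default_rng(0).normal(size=months.size))

freqs, power = monthly_spectrum(fake_series)
peak = freqs[np.argmax(power[1:]) + 1]               # skip the zero-frequency bin
print(f"Dominant period: {1.0 / peak:.1f} years")    # roughly 11 years for this stand-in
```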
Bill Illis (09:51:19) :
Thanks Bill. I’ll add a comment to the Resources thread.
Bill Illis, don’t you have an implicit assumption that the “heat in the pipeline” from start to finish of your data is similar to the heat that was put into the systems from X(i) pipelines in Y(i) years starting with x(i), y(i) before your data starts? I only point it out for completeness. I do not know of any proof of “heat in the pipeline” since such a phenomenon should be measurable.
Just to maybe convince myself further (I am a natural skeptic after all), and prompted by the comment by Richard Courtney about volcanoes, where I had to squeeze down the period covered in the charts to have a look, I created little 10-year chunks of the chart so that we can see if it is actually working or not.
And I’ll be ___. There is no way this is a fluke. Now keep in mind there are small 0.1C to 0.2C and even 0.3C errors in this reconstruction, but this model really does follow the Hadcrut3 trend/cycles very closely.
There are a few periods where it is off by too much for my liking – 1955 to 1957, some parts of 2000-2003, and a section from 1920 to 1926 where the reconstruction is consistently about 0.1C below the temps, but other than that it is not bad.
Have a look (there are a lot of them, but they are in order, I believe).
Bill Illis:
This is an interesting piece of research.
I hate to sound like a nagging statistics prof, but I haven’t read anything that suggests that you have examined the issue of stationarity in your time-series data. As I mentioned in an earlier comment, if your data is not stationary and you haven’t corrected for this problem, your estimated coefficients could be severely biased or outright wrong.
Most of the comments on the paper have focused on issues related to climate theory. This is well and good. But if you are applying a statistical method such as regression analysis, you also must make sure that the method is applied properly. Regression analysis only works if your data respect specific conditions. Stationarity tests are a must in this case.
Otherwise, you might end up like the infamous Michael Mann and his “hockey stick”, which resulted from an incorrect application of statistical techniques that he did not fully understand.
All the best.
Richard: “If the output and the data do not agree then this indicates
(a) the theory is incorrect,
Or
(b) the description of the theory (i.e. the model) is incorrect
Or
(c) both the theory and the description of the theory are incorrect.”
A fourth possibility is that the data is faulty.
“Comparison of the model’s output with empirical data is a test of the theory.”
Yes, I was speaking in shorthand. Climate models are part of the procedure that is used to test the theory, so in that sense form part of the body of evidence. In principle the procedure is similar to the use of laboratory experiments to test a theory in other fields.
“Such a thought as tantamount to a claim that the climate does what a climate model says.”
If a climate theory is a claim about what the climate does, and if the model describes the theory, then by your own logic the model attempts to show what the climate does.
Bill Illis – I agree with Bob Tisdale that the treatment of ENSO ignores the physical reality of the situation. I’ve documented how the oceans have responded to ENSO here:
Tom Vonk is missing an important point. When a local thermodynamic equilibrium (LTE) is established the amount of radiation absorbed per second by greenhouse gases (GHG) at a point in the atmosphere equals that emitted. If the amount of GHGs at that point increases, the amount of IR light absorbed increases and then the amount of radiation from the GHGs must increase to reestablish the LTE. In order for that to happen the temperature of the atmosphere at that point increases.
It is only through this increase in temperature that the average kinetic energy of collisions can increase, and thus the rate of exciting the greenhouse gases (GHGs) so that they can radiate (the energetic increase could also be in the average vibrational/rotational energy of the collision partners, but again, the only way to raise that is by increasing the temperature).
Brendan H says:
I suspect you do not know what it means for a model to match the theory, nor do you understand the opportunities for errors in the model that cause it to depart from reality and the theory.
Can someone explain what this means with respect to surface temps lagging ocean temps? Do these lag times seem less than what was previously supposed? Does this model have any forecasting ability? I have always thought that the lag would be much smaller than was previously supposed since the oceans are so shallow when compared to Earth’s diameter. Does this work bear out that assumption? What does this work mean as far as heat being “in the pipeline”? Have you found the hiding place of the “missing heat”?
Thanks,
Mike Bryant
Yes, it’s crazy the Indian ocean data goes only to 2004. I’m sure it has been tracked since then.
Brendan H wrote:
Except that there’s no way to know if a model reached the “right” conclusion for the “right” reasons. There are probably thousands of ways one could theoretically match the known temperature record, but you’d never know which was the “right” way without fully understanding, and being able to perfectly model, ALL the parameters. And then you have to know ALL the causes and effects. Running 47 different models hundreds of times and picking the ones that “match” isn’t evidence of anything except chance.
Bill Illis The version I saw was scanned in. It has some interesting points you may wish to look at.
To Carl Wolk,
I noted in my write-up that there is some interaction between the AMO and the ENSO.
When you use the higher frequency figures over longer time periods without smoothing, however, (which I need to use to build a monthly model), you see that the longer-term cycles of the AMO and this new Antarctica DW index are reasonably independent of the ENSO.
There are some periods where the ENSO seems to have a lasting impact on the AMO, the 1997-98 El Nino for example. But there are many more periods where the two series are moving in opposite directions and where there is no impact or no lasting impact from a La Nina or an El Nino.
Here is a scatter plot of the higher frequency data with the ENSO regressed on the AMO. Obviously the relationship is much more complex than this scatter indicates but it does give you a general idea of why I am concluding they are independent.
The ENSO on AMO regression also leaves an error term which mostly preserves the original AMO cycle. (It is not quite the same, however, as some of the spikes are slightly reduced and the 1997-98 El Nino moves out a few months versus the original series.)
There is less interaction with the other index I created/just made-up/have no basis to actually use/but does actually explain SH temps.
I wonder if Bob and yourself could try damping down the smoothing and use a longer time series to see if there is a higher frequency more definitive relationship.
As some have noted, I need to run some autocorrelation tests on the independent variables, but I think the relationship is too complicated to address properly, and I need to do stationarity tests. There are 1652 data points; how many lags do I have to test? Do I need to do that for the Hadcrut3 observation dataset? GISS, RSS, etc.? I am running this method on all the temp series, and the only issues to come up so far are the SH temps, which I believe I have addressed, and too much variation in the US lower 48 and the Arctic to produce reasonable results. I took quite a few statistics classes, but it has been a long time since I have used any of it.
@ Jeff Alberts (04:37:16) :.
To John Pittman
Thanks for the link to the Michael Mann paper. He did better work before he got into tree rings. I think he might have got into them as a result of the work he did on this paper. He needed to check whether the frequency repetitions occur farther back in time as well, and he had to use “proxies” for that.
Well, I tried some of this out on my data and, sure enough, it is in here as well. (I saw some of this before when I was trying to test the lags and when I got to about 25 years, a cycle started appearing but I thought it was just a fluke. Nope, Mann got the same thing.)
These cycles show up in the Hadcrut3 dataset and in my residuals as well when I adjust out the impact of the ocean variables.
Since the numbers are so close to what would occur with the solar cycle, I played around with 5.5 year lags, 11 years, 22 years and 44 years. All these are in both datasets.
My skill set does not allow me go much beyond this.
Applying the rest of the techniques in Mann’s paper is in the same boat.
But in that sense Bill’s effort isn’t a model (certainly not a predictive one). What he’s trying to demonstrate is that the temperature history can be fitted by a collection of observed anomalies for the ocean basin oscillations plus a warming trend which he accounts for by GHG. That basically doesn’t allow prediction; after all, he’s just taken the anomalies without assigning cause (volcanoes, for example). If he were able to predict the ENSO, AMO etc., then that would be a model. It does suggest that the system could be modelled with a rather small number of regions, though. It raises the question of what is left out; for example, what happens if aerosols/albedo are added?
Looks like a very high sensitivity to solar changes despite the fact that we are very puzzled as to why that should be so.
Then combine solar with oceanic variation and who needs CO2 ?
Interesting that Bill sees episodes when ENSO and AMO are moving in opposite directions.
Just what I suggested in my various articles but additionally one has to consider all the global oceanic oscillations simultaneously and then work out the net effect.
Granted that most of the time the netted out effect of just ENSO and AMO would be sufficient (but not always).
Bill’s good work is a substantial first step in the right direction. Incorporate his numbers into the models and perhaps a little predictive skill might emerge.
To Phil and everyone.
The model does have a little predictive power.
First, there is the 3 month lag in the ENSO.
The ENSO had been cycling up from the La Nina depths until August, but now it has gone neutral to slightly negative. This should provide for stable temps over the next three months (there is a slight decline but it’s too small to get into).
And the AMO might be cycling down now (the forecasts that are available show this as well). This would be a 20 to 30 year down cycle, so it is worthwhile watching the data as it comes out.
The Antarctic DW area is definitely cycling down now and it has been since 1990 or so. There is quite a bit of variation in this index so it would have to be tracked for several months at a time to see a trend though.
And well, CO2 is still increasing at a slightly exponential rate (although methane and CFCs appear to have flatlined now), so there could continue to be increasing temps from global warming (0.00066C per month), but the rate will be a little slower than past numbers. That is an interestingly small number, isn’t it?
This model says 0.455C anomaly for Hadcrut3 in November (up from 0.440C in October but the model was over by 0.013C in October so it could just stay at 0.440C). That is without an AMO or Ant DW change.
Thanks for the reply, Bill. If I interpret what you are saying correctly, the rate of change of temperature does not relate to the rate of change in atmospheric CO2.
This suggests to me that CO2 is not a major climate forcer :-)
Don
Hi Bill,
It was the consensus models that I think need your numbers. I accept that your model now has some predictive value.
As regards the more or less neutral ENSO at present, you still detect a small downward movement notwithstanding that.
Assuming that other negative oceanic influences are not the reason, I would guess that the residual small decline is solar-induced one way or another, but I accept that the mechanism for such sensitivity to small solar changes is currently a puzzle.
I spoke to the Chief Exec of the RMetS recently and he agreed that a continuing fall in global temperatures notwithstanding a neutral ENSO might be significant. Although he did not say so I took that to mean that such a continuing fall would point to causes other than CO2 as the primary climate driver.
Brendan H:
There are several flaws in your reasoning, but one is so clear that I think an explanation of it is sufficient to enable you to understand that your argument is not correct.
You say to me:
“If a climate theory is a claim about what the climate does, and if the model describes the theory, then by your own logic the model attempts to show what the climate does.”
Yes, a climate model “attempts” to show what the climate does. But there can be no way of knowing if the model does “show what the climate does”. At best, all that can be said is the model seems to provide outputs that compare to “what the climate does”.
What the climate does is reality. And what the model does is what it has been designed to do.
As I said,
a theory is an idea, a model is a representation of the idea, and reality is something else.
Evidence concerning reality is provided by observing reality. And evidence concerning the performance of any scientific model is provided by comparing the output of the model to reality.
The output of a model can be taken as an indication of what evidence may be found if reality is examined. But it can only be accepted that a model provides such an indication when – and only when – the model has been shown to represent reality.
In other words, models do not provide evidence of reality (unless, of course, you believe in astrology).
I hope the matter is clear to you now.
Richard
Richard S Courtney says:
We can gain confidence in a model when it correctly predicts out-of-sample results, and the more it predicts, and the more accurately it does so, the more confidence we can have in it.
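One simple way to put that principle into practice with a regression of this kind is to fit on an early part of the record and score the fit on the held-out later years. A minimal sketch, continuing the hypothetical X and y objects from the regression sketch earlier in the thread (the 1979 split date is arbitrary):

```python
# Out-of-sample check: fit on the early record, verify on the held-out later years.
# Assumes X (predictors, with constant) and y (HadCRUT3 anomalies) from the earlier sketch.
import statsmodels.api as sm

train_X, test_X = X.loc[:"1978-12"], X.loc["1979-01":]
train_y, test_y = y.loc[:"1978-12"], y.loc["1979-01":]

fit = sm.OLS(train_y, train_X).fit()
pred = fit.predict(test_X)

rmse = float(((pred - test_y) ** 2).mean() ** 0.5)
print(f"Out-of-sample RMSE: {rmse:.3f} C")   # skill on unseen years is what builds confidence
```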
And the AMO might be cycling down now (the forecasts that are available show this as well). This would be a 20 to 30 year down cycle, so it is worthwhile watching the data as it comes out.
Really, looking at this pattern I’d expect it to be more likely to stay up for another 20 years!
That’s the point, you’re not making predictions, you’re guessing.
I may not have been clear about the ENSO 3.4 region numbers so here they are for the last several months:
Feb -1.860
Mar -1.080
Apr -0.850
May -0.580
Jun -0.320
Jul 0.110
Aug 0.140
Sep -0.200
Oct -0.260
We were coming out of La Nina quite rapidly in the spring, then it stalled at neutral in the summer but has gone back into slightly negative temps over the past few months.
With the 3 month lag, we will be affected in November by the change that happened from July to August. But these are very small numbers for the Nino region. It can be +/- 3.0C. The regressed coefficient says the impact is 0.07 * the change of 3 months ago, so in other words, a very small number. The total decline in the next 3 months will be 0.026C (by January).
[I know it is a little hard to accept this 3 month lag but this is the general consensus in the community and it really seems to occur in the data.]
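As I read it, the arithmetic behind that 0.026C figure is just the regression coefficient applied to the lagged change in the Nino 3.4 index; a quick back-of-the-envelope check on that assumption (my reading, not Bill's spreadsheet):

```python
# Back-of-the-envelope check of the 0.026C figure above.
# Assumption: each month's ENSO contribution is 0.07 * (Nino 3.4 anomaly three months earlier),
# so the change between November and January is 0.07 * (October value - July value).
nino34 = {"Jul": 0.110, "Aug": 0.140, "Sep": -0.200, "Oct": -0.260}
coef = 0.07

change_by_january = coef * (nino34["Oct"] - nino34["Jul"])
print(f"ENSO contribution change, Nov to Jan: {change_by_january:+.3f} C")   # about -0.026 C
```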
Regarding the solar impact, we are at the bottom of the solar cycle right now and last month it appeared as though we might be coming out of it and heading into solar cycle 24.
But there have been no new sunspots lately and I just had a look at the solar irradiance numbers and there is a continuing decline in the numbers again. Scary down according to Virgo. SORCE is not down so much but it is down too.
Up above, I said there was likely a solar cycle influence in my numbers. I don’t know how to tease it out properly but it is in there. It might actually be a little bigger than people think in fact.
To Phil,
Yeah, I’m guessing about the AMO but the Wiki graphic has a smoothing in it which makes it harder to see what is really going on.
This is the unsmoothed data (and I added some extra months so it is easier to see the recent trend).
Maybe it isn’t going down but it could be. The longer-term reconstructions show the cycles are not as regular as the current chart makes it look like.
Phil,
Can you repost those graphs of transmission in the presence of 380 and 760ppm of CO2 since the links you provided earlier don’t work.
Or better still, can you tell us how you generated them? Did you need an account? I checked out the site and it seems you need an account if you want to run graphs using multiple gasses …
Does anyone know if there are any studies on the earth’s albedo variation and the contribution of cloud and snow/ice cover to that variation?
I wonder if albedo variation provides for a greater insolation variance than few watts/m2 that each doubling in CO2 is supposed to cause?
Bill you may want to read some of this.
AN.
Richard Sharpe (15:25:27) :
Phil,
Can you repost those graphs of transmission in the presence of 380 and 760ppm of CO2 since the links you provided earlier don’t work.
OK I’m sorry the images disappeared!
Norm posted the following earlier; however, because Norm is a geophysicist with some experience, his opinion was felt by some to be authoritative. Here’s some data to refute his handwaving.
Here it is for earth conditions:
Here’s a comparison of some spectral lines for Martian and Earth conditions:
A bit more than “almost no appreciable effect”!
Yes you do need an account to do the full calculations.
Bill Illis:
In response to your question about stationarity tests, you need to perform these tests on every variable in your model (dependent and independent).
The number of lags to use in your test is a judgment call based on the nature of the variable and your knowledge of climate theory. You must ask yourself this question: if an auto-regressive process were present in the data, over how many months would it be reasonable for such a process to last? Another way to look at it is to ask yourself: when considering the temperature anomaly for October 2008, how many previous months of temperature anomalies could reasonably be expected to be correlated with October 2008? The answer depends on how long certain random events will typically affect temperatures upwards or downwards for extended periods.
I am not a climate expert, so I don’t want to suggest a specific number of lags. However, based on the construction of your model (which includes the ENSO lagged by 3 months) you will likely need a minimum of 3 lags in your stationarity test for each variable, probably more.
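For anyone who wants to run the test being described, the augmented Dickey-Fuller test in statsmodels is one standard choice; a minimal sketch, with stand-in data in place of the real monthly indices and the lag count left as the judgment call discussed above:

```python
# Minimal stationarity (unit-root) check with the augmented Dickey-Fuller test.
# 'series' stands in for any variable in the model (HadCRUT3, AMO, Nino 3.4, ...).
import numpy as np
from statsmodels.tsa.stattools import adfuller

series = np.random.default_rng(0).normal(size=1652)   # stand-in data; use a real monthly index here

stat, pvalue, usedlag, nobs, crit, icbest = adfuller(series, maxlag=12, autolag="AIC")
print(f"ADF statistic: {stat:.2f}, p-value: {pvalue:.3f}, lags used: {usedlag}")
# A small p-value rejects a unit root (the series looks stationary); a large p-value suggests
# differencing, or cointegration methods, before trusting the OLS coefficients.
```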
Phil. you said, “That’s the point, you’re not making predictions, you’re guessing.”
From thesaurus.com:
Main Entry: PREDICTION
Part of Speech: noun
Definition: declaration made in advance
Synonyms: of event anticipation, augury, cast, conjecture, crystal gazing, divination, dope, forecast, forecasting, foresight, foretelling, fortune-telling, GUESS, horoscope, hunch*, indicator, omen, palmistry, presage, prevision, prognosis, prognostication, prophecy, soothsaying, surmising, tip, vaticination, zodiac
You could also accurately say, “That’s the point, you’re not guessing, you’re making predictions.”
Mike
Jeff Alberts: “There are probably thousands of ways one could theoretically match the known temperature record, but you’d never know which was the “right” way without fully understanding, and being able to perfectly model, ALL the parameters.”
I think you’re overstating the case, especially in demanding perfection, which is not possible. That said, climate models incorporate well-understood physics, the number of parameters is finite and modellers have ways of testing for individual effects.
There’s a major disconnect here, though, between the sceptic claim that the climate has strong negative feedbacks that drive the climate towards equilibrium, and the implication that the climate is random to the point that anything could happen.
And nor do sceptics always find fault with modelled science. Earlier in the year there was a good deal of celebration across the sceptical blogosphere in the wake of the Keenlyside forecasts, which were based on models.
Richard Courtney: “At best, all that can be said is the model seems to provide outputs that compare to “what the climate does”.
Yes. Our point of difference is over the status of the outputs of the models. You appear to think that they do little but mirror the theory. I think they can provide information to illustrate the way the climate works.
Running a model is similar to running a laboratory experiment. A laboratory experiment is not non-human reality, but it provides evidence that can be used to understand the real world.
“In other words, models do not provide evidence of reality…”
Scientific evidence also includes experiment, and experiments are not “reality” in the sense of pristine nature. They are human devices, intended to test the theory.
OK, Phil, so the mean transmittance goes from 0.6466… to 0.6006… for one doubling. What happens for the next doubling?
You could also accurately say, “That’s the point, you’re not guessing, you’re making predictions.”
Mike
I suppose a non-scientist who didn’t have much command of the English language could.
A usual definition of ‘guess’ is ‘to suppose something without sufficient information to be sure of being correct’.
Brendan H:
This matter is only loosely related to assessment and improvement of the Illis Analysis (which is the subject of this debate). However, I make this final reply to you because your comments suggest ideas that may be typical of others who have been influenced by a certain propaganda web site run by some modellers.
You say to me:
“Yes. Our point of difference is over the status of the outputs of the models. You appear to think that they do little but mirror the theory. I think they can provide information to illustrate the way the climate works.”
What you say you think is delusional. Any model (of climate or anything else) does what its constructors have told it to do. Therefore, a model is a construct from the opinions and understandings of its constructors. Hence, outputs of a model are – and can only be – evidence of the opinions and understandings of its constructors.
The “way the climate works” may or may not be similar to the opinions and understandings of a climate model’s constructors. And, therefore, it is not possible for the outputs of climate models to “provide information to illustrate the way the climate works” except in so far as the opinions and understandings of model’s constructors have been proven to be an accurate description of “the way the climate works”.
But no human knows “the way the climate works”. For example, the Illis Analysis assesses effects of AMO and ENSO but nobody knows what causes AMO and ENSO.
A claim that climate models “can provide information to illustrate the way the climate works” is an assertion that the climate models are constructed by people with deific omniscience (so I suppose you are talking about models of James Hansen and Gavin Schmidt; joke).
Richard
Phil. wrote :.
Sorry, but I am not sure that you know how the CO2 laser works.
The population inversion is realised by injecting hot N2 OUT OF EQUILIBRIUM.
The hot N2 transfers energy to CO2 by collision, which then releases coherent IR radiation by induced radiation.
This has nothing to do with what’s happening in the atmosphere (induced radiation is negligible and there is no population inversion), unless it is to show that translational and vibrational energies interact both ways, which is what I have been saying since the beginning.
You can’t get out of LTE.
Either you say now that it doesn’t exist, and are not only wrong but also contradict yourself, because you wrote earlier that LTE is the fundamental assumption.
Or you say that LTE exists, and then both the constraint of Planck’s energy distribution (which is really synonymous with LTE) and energy conservation say that emission = absorption AND that the temperature at a given pressure is constant, i.e. independent of time.
You are confusing “warming up”, which is a transitory, time-dependent process, and the “equilibrium temperature”, which is a stationary, time-independent value for given boundary conditions.
If you don’t see that yet, write here your energy conservation for a small volume in LTE and show us that emission is “far below equilibrating with the absorbed radiation”.
I think that you are about to discover a new perpetuum mobile :)
Eli Rabbet:
Of course the result emission = absorption at LTE doesn’t mean that the temperature is constant whatever happens.
Quite trivially, if some GHG concentration varies, the equilibrium temperature will vary too.
Clearly, if one considers the first slice of the atmosphere at the surface boundary, this surface boundary will reach different equilibrium temperatures when the GHG concentration varies (and in turn its conducting, convecting and radiating behaviour will change).
John Finn (02:08:56) Says,
”.”
Your arguments are contradictory. If the skin is present, it is evidence that the surface has absorbed radiation. The evidence shows that the skin does exist. The measurement is difficult to make, as Richard Courtney said in a previous post.
If the skin is absent, it is evidence that the downwelling radiation has been absorbed in part by the oceans and the region that is warmed has been mixed with the waters below. You have to account for the disposition of the downwelling radiation flux somehow. It can’t just disappear and have no effect. The energy must go somewhere. As you point out, there are upward fluxes of energy from the ocean involved in IR emission, evaporation and convection, but if the downwelling did not exist the net upward flux of energy would be enormously higher.
It doesn’t make sense to claim that absorption of the downwelling radiation from the atmosphere by the ocean is impossible, because the skin layer is so thin. After all, the layer that emits the upwelling radiation is just as thin.
The layer from which the H2O molecules escape by evaporation is even thinner, just a few molecules thick at most.
All this talk of a skin on the ocean makes it sound like it is some actual, physically different kind of water.
The only extent to which the water surface is different is that the absence of water above the surface means that the very surface molecules experience a downward attractive force from the molecules below that is not balanced by molecules above (there being none). That of course results in the surface tension, which causes the surface to seek the smallest-area, lowest-energy state.
As far as the effect on incoming electromagnetic radiation goes, there isn’t any “mm thick” surface skin that behaves any differently from the rest of the ocean.
However, water is the most opaque known liquid for long wavelengths (IR radiation), such as the thermal radiation from the atmosphere. I suspect that it is somehow related to the fact that water has a very high dielectric constant (81) at radio wavelengths, which results in very rapid extinction of radio waves in water. Whether that is the same mechanism at optical wavelengths, I can’t say for sure, but the downward IR is absorbed by ordinary optical absorption processes in the top 10 microns of the ocean surface.
The top few molecular layers of course, are the source for the ocean’s emitted thermal radiation, and also for the evaporation into the atmosphere.
The water molecules follow something like a Maxwell-Boltzmann distribution of kinetic energies, and the high energy tail of that distribution supplies the energetic molecules that leave the surface, and don’t return (in an open environment). A direct result of the loss of those higher energy molecules is that the surface temperature is depressed.
In addition, you have about 545 calories per gram of latent heat of evaporation transported from the water into the atmosphere.
It is well known that Hurricanes leave a cold surface water track behind them, because of the astronomical amounts of energy removed through evaporation. Some people think it is the stirring of the ocean to bring up cooler waters from the deep, but if that happened to any extent, it would simply quench the hurricane, and shut it down.
So downward IR radiation in the 5-50 micron range, from the atmosphere (and/or GHG) results in prompt evaporation from the surface layer, but you would hardly call it a skin.
As for the incoming solar radiation, it is well known that most of that energy propagates deep into the ocean. In the wavelength range 450-550 nm, which is where the solar spectrum peaks, the water absorption is a minimum, with an attenuation coefficient for oceanic waters of about 0.001 (cm^-1). For mean coastal waters it is more like 0.003, and doesn’t get as high as 0.004 for even turbid coastal waters. For wavelengths above 550 nm, for the yellow/orange/red/IR, the absorption is much stronger, and the same thing happens for the shorter wavelengths.
About 3% of the incident solar spectrum is reflected due to ordinary Fresnel reflection and a refractive index of about 1.33 for water.
The rest penetrates in a manner that is almost like the inverse of the solar spectral peak.
Some of that radiation gets taken up by phytoplankton, and much of it is converted to heating of the water. The warmer water expands, of course, so convection tends to carry that energy back to the surface, in addition to some conduction to surrounding (or deeper) waters. In the usual scheme of things, convection usually trumps conduction in the transport of thermal energy. Radiation from those deeper waters is irrelevant, because it would be long-wave IR and would be reabsorbed before going very far.
Of course the preceding presumes a somewhat quiet condition; the effect of storm-induced turbulence will greatly complicate the issues.
But I don’t see any great thermal engine pumping solar energy into the abyssal depths of the ocean.
I am sure someone with academia’s access to the numerical data for ocean conductivity and expansion and other thermal properties, can do the calculations for conduction, convection, and evaporation, if they want to quantify the different processes.
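As a small sanity check on the numbers quoted above, here is a sketch assuming simple exponential (Beer-Lambert) attenuation and normal-incidence Fresnel reflection; the averaging over sun angles that pushes the reflected fraction toward the quoted ~3% is not included.

```python
# Sanity check on the ocean optics numbers above (simple exponential attenuation assumed).
# e-folding penetration depth = 1 / attenuation coefficient
for label, k_per_cm in [("clear oceanic, 450-550 nm", 0.001),
                        ("mean coastal", 0.003),
                        ("turbid coastal", 0.004)]:
    depth_m = (1.0 / k_per_cm) / 100.0
    print(f"{label}: e-folding depth ~ {depth_m:.0f} m")

# Normal-incidence Fresnel reflectance for n = 1.33 (air to water)
n = 1.33
reflectance = ((n - 1.0) / (n + 1.0)) ** 2
print(f"Normal-incidence reflectance: {reflectance:.1%}")   # about 2%; sun-angle averaging raises it
```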
I read above some very interesting comments about science (observation, experiment, models and theory), and there seem to be some creative explanations which don’t jibe with anything I ever learned.
The lay person typically thinks that scientific theories explain how the universe works.
Actually, the only thing that explains how the universe works (the reality) is that which is observed and possibly measured. As for laboratory experiments not being reality, how could that be? They are observations and measurements that are every bit as real as going outside and holding up a moist finger to see which way the wind is blowing. Lab experiments are simply reality in a controlled environment, so that extraneous influences that may hide the reality can be minimised.
There is a limit even to what we can observe or measure, and Heisenberg, in his principle of “Unbestimmtheit” (maybe with an umlaut), tells us that in the very act of observing we alter the conditions, so that that which we seek to measure changes in an unpredictable manner; so we cannot even observe the present state of a single particle, let alone the whole universe, and that makes any prediction of the future state an impossibility.
At best we can gather statistical information about what mostly happens in repeated circumstances.
So the universe is simply far too complex to ever describe how it works; and science theories don’t attempt to do that.
A science theory is a rigorous description of the properties which we assign to a completely fictional model, that we create out of whole cloth. So the model and the theory are one and the same. As a result of our assignment of certain properties to our theoretical model, we are able, using mathematics, to exactly describe the behavior of the model in any defined circumstance (experiment).
The mathematics of course is no more real than is the model; we made all that stuff up in our heads also, and we created our mathematics for the purpose of analysing the behavior of our fictional models.
Some people think that mathematics is some universal language, that must exist anywhere in the universe.
If you believe that; why don’t you write down here, a short list of the objects and elements of mathematics, that you are sure exist somewhere in the real universe.
Don’t waste your time; there are none. The real universe contains no points, no lines, no circles, no spheres, no conic sections; it is all pure fiction that we made up.
The simple cartesian co-ordinate equation: x^2 + y^2 + z^2 = r^2 describes a mathematical sphere.
No amount of prestidigitation can cause that equation to explain the presence of 8 km high mountains on the surface of the earth.
So our models/theories of science are entities unto themselves, and we can describe in great detail how they work.
Now we constructed our models in the first place, with the idea of building something that we believe behaves in much the same way that the real universe does. We can run experiments (simulations) with our fictional models that are not analogous to any real experiment that anybody actually did in the real universe. If our model does something interesting, we can then perform the analogous experiment to see if we can observe a similar behavior in the real universe.
We are very intolerant of non-conformity between the calculated behavior of our fictional models and the real observed behavior of the universe.
When that happens, and we eliminate experimental error sources, and mathematical calculation errors, our model becomes suspect, and we seek to reconstruct it, and assign it different properties, in such a way that we eliminate the discrepancy between real universe observation, and fictional model simulation.
We were quite happy with Newtonian gravitation, until we observed a discrepancy in the rate of precession of the perihelion of Mercury, amounting to 43 seconds of arc error per century. Einstein’s general theory of relativity gave us a new model of gravity that eliminated that lousy 43 seconds of arc discrepancy.
After an intelligent and erudite discourse on how the progress of science proceeds,
George Smith says,
The above diatribe is a straw man argument, and the author should know better. The Climatology models do not assert that the planet is an isothermal sphere etc. This is just a gross calculation of averages of measured values and does not constitute a real model. The models are termed “General Circulation Models” because they examine how the absorbed energy is circulated around a globe that contains real atmosphere, oceans and land masses.
So what is the model? The IPCC report does not tell you the variables or the state conditions. What flux does the model contain for heat emissions from the earth itself? The ground does have a heat capacity and it is warmer the deeper you get.
I have pored over the IPCC documents to try to get even an inkling of what they model and what input states they use, and I can’t get it.
Eric,
Actually the author does know better, and he knows that any model that does not model the real-time workings of the planet, and its radiant energy inputs and outputs, is not likely to come up with believable results.
The NOAA model of the earth energy budget assumes a radiant emission that depends on an assumed average temperature; whereas in the real world, the earth’s surface radiant emission ranges over more than an order of magnitude from the coldest regions to the warmest regions. And the total emission for an “average temperature” model always underestimates the emission, because emission depends on the fourth power of the temperature and not on the temperature itself. So just what is the point of computing a global average temperature? It has no more scientific validity than computing an average global telephone number or enumerating the average number of animals per square km on earth.
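That fourth-power point is easy to check with two made-up temperatures (not real data); a minimal sketch:

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Two illustrative surface temperatures: a cold polar region and a hot desert (K).
t_cold, t_hot = 220.0, 320.0

emission_of_average = SIGMA * ((t_cold + t_hot) / 2) ** 4           # emission computed from the mean T
average_of_emissions = (SIGMA * t_cold ** 4 + SIGMA * t_hot ** 4) / 2  # mean of the actual emissions

print(f"sigma * T_avg^4      = {emission_of_average:.0f} W/m^2")
print(f"average of sigma*T^4 = {average_of_emissions:.0f} W/m^2")
# Because T^4 is convex, the second number is always at least as large as the first,
# which is why an "average temperature" model under-counts the emission.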
Besides GISStemp doesn’t measure anything except GISStemp; it certainly doesn’t measure any average temperature for the earth, or even for the earth’s surface, or even for the air five feet above the earth’s surface; it doesn’t even cite the results in any standard temperature scale; but refers to a baseline that is itself unknown.
How do you use a baseline period from 1961 to 1990, or any other recent epoch, as a scale to plot “data” from a thousand years ago, when there was no information about what would happen from 1961 to 1990?
How the atmosphere and the oceans circulate around the planet and the continents is certainly something I am not schooled in; so I just assume that there are those who do know about that stuff; but as far as the overall question of whether the planet is net gaining energy or losing it, I’m quite confident I have a good understanding of how that works, and you can’t figure that out by taking the averages of anything, because the energy transport mechanisms are highly nonlinear.
Besides as I have stated before; the global sampling regimen, such as goes into Dr. James Hansen’s ritual, falls far short of complying with the well understood laws for sampled data systems.
The thermal processes that go on in different global terrains, bear no simple relationship to the local temperatures; so even if you could measure the true global mean temperature; which you can’t; it tells you exactly nothing at all about the energy flows.
With a summertime daily temperature range spanning almost 150 deg C from the hottest surface locations to the coldest; no average temperature or GISStemp machinations is going to tell you anything useful about whether the planet is warming up or cooling down.
So since you used the term; why don’t you give your standard definition of what a “Straw man” argument is; I notice people like to throw that term around when they are engaged in what passes for debate.
You will notice, Eric, that I made no mention whatsoever of the so-called GCMs; never said a word about circulation models.
I did specifically refer to a CLIMATE model; not a CIRCULATION model (different C-word), and I referred in that to a picture of the earth energy balance (budget) as published on the official NOAA weather site. The “climate model” I referred to in that statement is the model that yields that precise energy budget picture.
If that model upsets you, then maybe you could take that up with NOAA.
I am also willing to entertain, anybody else’s description of a model that they believe will yield the same energy fluxes that the NOAA diagram depicts.
I actually made one that places the earth at the center of the universe (well at least the local universe) and it is surrounded by a hollow spherical sun with a radius of 93 million miles. Every point of the inside surface of that hollow sun emits electromagnetic radiation that is confined to an emission solid angle of about 0.5 degrees angular diameter, and is essentially constant over that angle and zero elsewhere.
Such a model will produce a half degree angular diameter “sun” that is directly overhead at any point on earth at all times. Well I won’t pursue the construction of that model any further because clearly it is nothing like anything we have ever experienced. Of course neither is the NOAA picture which it models.
So Eric; don’t create your own straw man, if you think you see another. As I said, I made no mention of GCMs whatsoever.
George,
What you call a model is not a model in any sense. It simply is a representation of an annual average energy budget, to give an idea of the energy fluxes in the atmosphere over a period of a year. It is based on measurements.
I consider such an average informative and not the least upsetting.
There is no way that the total picture of the earth’s climate as modelled can be put on a website that can be absorbed by the average person who is interested in climate.
If you want an accurate daily model calculated for each minute or hour for years, you are asking for something that is unrealizable until computers become very much larger and faster, and even then it is unlikely that you will be satisfied. Meanwhile the scientists will make do with what they have and the answers will be imperfect and improve with time.
TomVonk (03:53:05) :
Until you drop your strange theory that the presence of a GHG in the atmosphere doesn’t cause the atmosphere to warm up, there’s really nothing to discuss; you’re obviously a disciple of the Jack Barrett school of physics.
I’m well aware how the CO2 laser works, the one I built works rather well!
The reference to the He/Ne and CO2 lasers was to point out that a gas which has a long radiative lifetime can give up its energy to a shorter lived species which preferentially radiates, in response to one of your earlier comments.
Richard: “Hence, outputs of a model are – and can only be – evidence of the opinions and understandings of its constructors.”
If the creators of the models have some understanding of the earth’s climate, then the models will also incorporate those understandings. And the models do incorporate much that is known about the climate, for example in the form of well-established physical laws.
As for the accuracy of climate models, the IPCC third report concluded that models provided a reasonable agreement with observations, although models and observations differ in a number of areas.
Interestingly, models had predicted that tropospheric warming should be greater than surface warming, although the data seemed to show otherwise. But the data were shown to be faulty and corrections have brought them closer to the models. This boosts our confidence in the models.
“A claim that climate models “can provide information to illustrate the way the climate works” is an assertion that the climate models are constructed by people with deific omniscience…”.
No it doesn’t. Omniscient beings would not need models. One can gain sufficient understanding of part of the climate system, and/or a general understanding of climate trends, without requiring omniscience.
It seems that serious debate of the Illis Analysis has ended here and has been replaced by promotion of pseudo-science.
Brendan H, your comments concerning models are plain silly.
It does not matter if the climate models “incorporate much that is known about the climate, for example in the form of well-established physical laws.” What matters is whether the climate models are an adequate emulation of climate as demonstrated by their emulation of reality and their demonstrated forecasting skill.
The indications of any model can only be trusted to the degree that those indications are observed to agree with reality. And the predictions of a model can only be trusted to the degree that the model has demonstrated forecasting skill. These facts are true of all models including climate models.
Clearly, the climate models do not adequately emulate reality because they fail to emulate important climate behaviours such as AMO, ENSO, etc.
And the ability of a computer model to appear to represent existing reality is no guide to the model’s predictive ability. For example, the computer model called ‘F1 Racing’ is commercially available. It “incorporates” “well-established physical laws” (if it did not then the racing cars would not behave realistically), and ‘F1 Racing’ is a much more accurate representation of motor racing than any GCM is of global climate. But the ability of a person to win a race as demonstrated by ‘F1 Racing’ is not an indication that the person could or would win the Monte Carlo Grand Prix if put in a real racing car. Similarly, an appearance of reality provided by a GCM cannot be taken as an indication of the GCM’s predictive ability in the absence of the GCM having any demonstrated forecasting skill.
It is extremely improbable that – within the foreseeable future – the climate models could be developed to a state whereby they could provide reliable predictions of global climate over any time scale. This is because the global climate system is extremely complex. Indeed, the global climate system is more complex than the human brain, and nobody claims to be able to construct a reliable predictive model of the human brain; so there is no reason to trust predictions of global climate over any time scale when the models have no demonstrated forecasting skill.
And the climate models cannot have demonstrated forecasting skill over the medium and long terms. None of the existing climate models has existed for 20, 50 or 100 years, so it is not possible to assess their predictive capability on the basis of their demonstrated forecasting skill over such time scales; i.e., the models have no demonstrated forecasting skill over such time scales. This is why their indications of future climate change are said to be “projections” and not “predictions”.
No model’s predictions should be trusted unless the model has demonstrated forecasting skill. But, as stated above, it is not possible to assess the predictive capability of the climate models on the basis of their demonstrated forecasting skill because none of them has existed for sufficient time to have demonstrated any forecasting skill for 20, 50 or 100 years ahead.
Put bluntly, predictions of the future provided by existing climate models have the same degree of demonstrated reliability as has the casting of chicken bones for predicting the future.
You make a laughable appeal to authority when you say;
“As for the accuracy of climate models, the IPCC third report concluded that models provided a reasonable agreement with observations, although models and observations differ in a number of areas.”
Well, use that IPCC report as religious scripture if you want, but those of us who are interested in the science of climate change look at facts and evidence. You cite the IPCC report that included the now discredited ‘Hockey Stick’ of Mann, Bradley and Hughes (i.e. the most discredited graph in the history of statistics) before it had been published elsewhere. And that IPCC report included the ‘Hockey Stick’ 8 times, but the most recent IPCC report dropped it and made no mention of it.
Then you make a direct attack on the scientific method when you say:
“Interestingly, models had predicted that tropospheric warming should be greater than surface warming, although the data seemed to show otherwise. But the data were shown to be faulty and corrections have brought them closer to the models. This boosts our confidence in the models.”
The data were not “shown to be faulty”. And if they had been “shown to be faulty” then that could not mean “This boosts our confidence in the models” because scientists always place observation of reality before any model of assumed reality.
The most recent US Climate Change Science Program (CCSP) report attempted to hide the inconvenient truth that the ‘fingerprint’ of AGW is absent and, therefore, observed warming is not a result of the AGW the climate models project. But it is obvious to anybody who compares the two relevant figures in Chapters 1 and 5 of that CCSP report that
(i) the climate models predict AGW will cause most warming to occur in the lower troposphere at altitude in the tropics, and
(ii) this predicted warming is not happening according to measurements of the lower troposphere.
But the same CCSP report that included the two cited figures asserts that there is no significant difference between them! The report justifies this strange assertion by using the fact that outlying data points of the temperature measurements overlap with the indications of the computer models. This justification is nonsense according to the practices of both science and statistics: it is a claim that the bulk of the measurements should be ignored and trust should be placed in a few outlying data points that fit a preconceived notion. But that claim is ‘double edged’: if it is accepted then it has to be agreed that there was no global warming in the twentieth century.
I again repeat that a theory is an idea, a model is a representation of the idea, and reality is something else.
You can have your superstitious belief in climate models but I will continue my scientific view of all models (including climate models) so I will make no further responses to your proclamations of your superstition.
Richard
How would one expect to accurately model a complex chaotic system with poor knowledge of any of the parameters? What is the LOSU of clouds? Aerosols? We’re still learning about this stuff, so I hardly see how we can expect to accurately model such a thing. Best guesses which seem to correspond to what we see are just chance, really. We can see this with weather models that CONSTANTLY get things wrong even just the next day. Sometimes they get it right, but the fact that they get it wrong probably just as often shows that there’s no real skill there.
There’s also a major disconnect with the AGW claim that positive feedbacks will overwhelm any “natural” signal. I don’t believe climate is totally random, but we certainly don’t understand the major drivers, much less any secondary or tertiary drivers.
As someone else said, even a broken clock is right twice a day. My beef with models is the use of the output as evidence. They’re not evidence, they’re supposed to be used to test theories/parameters. But without practically perfect knowledge of the complex chaotic system being modelled, any results which replicate reality are most likely accidental. And even a percent or two of error can cause things to spiral away from what the reality might be, but there’s simply no way to know until the reality in time arrives.
Phil.
“Until you drop your strange theory that the presence of a GHG in the atmosphere doesn’t cause the atmosphere to warm up, there’s really nothing to discuss; you’re obviously a disciple of the Jack Barrett school of physics.”
I observe that you only hand wave, make irrelevant statements and avoid under any circumstances saying anything that would have scientific value.
But you are on the hook.
Everybody on this thread has seen that you wrote that CO2 radiates (much) less than it absorbs because “it transfers its energy to N2 by collisions”.
In mathematics, less is not the same thing as equal or more.
So I have asked you already two times to write up energy conservation for a small volume in LTE and to SHOW that CO2 radiates much less than it absorbs.
Should not be too hard for somebody who pretends to know how a laser works.
I’ll make it even easier for you – suppose that there is only CO2 and N2 in the volume.
Not surprisingly, you were so far unable to do even this trivial physics.
As long as you don’t, there is indeed nothing to discuss in your meaningless posts.
“” Eric (19:42:08) :
George,
What you call a model is not a model in any sense. It simply is a representation of an annual average energy budget, to give an idea of the energy fluxes in the atmosphere over a period of a year. It is based on measurements. “”
Well I don’t want to consume Anthony’s space much more on this; but I do think the NOAA energy budget drawing does create an illusion that is not even close to reality, so it tends to mislead, rather than instruct.
Just the simple change of replacing the solar constant at 1368 W/m^2 with an “average” insolation of 1/4 of that value illustrates the point. The only way that 342 number has any validity is if you DO assume that it falls on the entire surface of the globe at that level. The fact that the sun really strikes the daylight side of the earth at four times the rate in the chart, and that very little of the incoming ever strikes anywhere near the poles, gives an immediate explanation of exactly why the earth is not an isothermal sphere. In addition, the higher real rate in the tropics results in a much quicker warming and reaches a much higher temperature. As a consequence the earth cools at a much higher rate than depicted by the chart’s numbers.
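A small numerical sketch of that averaging point; the zenith angles are illustrative, and 342 W/m^2 is simply 1368 divided by four:

import math

S0 = 1368.0  # solar constant at the top of the atmosphere, W/m^2

print("day-and-night global average:", S0 / 4, "W/m^2")  # the ~342 figure in the budget charts

# Instantaneous flux on the sunlit side for a few solar zenith angles.
for zenith_deg in (0, 30, 60, 85):
    flux = S0 * math.cos(math.radians(zenith_deg))
    print(f"zenith {zenith_deg:2d} deg: {flux:7.1f} W/m^2")
# The sub-solar point sees roughly four times the charted average,
# while the night side receives nothing at all.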
It might appear that the earth heats up in the daytime and cools down at night, unless it is cloudy, in which case it warms up at night; every TV weather man can tell you that.
Actually, the only part of that which is true is that the earth heats up during the daytime. It also COOLS during the daytime, and at a much higher rate than it cools at night; clouds or no clouds. And if it is cloudy at night it never heats up, it still cools but at a slower rate. All of that is masked by the static picture NOAA portrays.
The huge imbalance between equatorial insolation and polar insolation, is what makes the global circulation mechanisms mandatory. The roughly black body radiation rate from the coldest to the hottest surface regions (on any midsummer day) ranges over a factor of about 12 times. As a result, the polar regions are very poor at cooling the planet; they don’t radiate nearly fast enough.
The whole idea of a “mean global temperature” such as GISS, RSS, UAH, and HADcrut imply, at least to the lay public that is aware of any of them, is not of any real scientific value, because there is no link between such a number and any evaluation of the net flow of energy into and out of the planet.
To be talking about changes of tenths or even thousandths of a degree on a temperature map that actually has about a 150 deg C peak to peak range is totally absurd.
As to the analysis work that started this thread, which as Richard Courtney has said, has about petered out (or been hijacked), I am quite sure that those ENSOs, AMOs and PDOs are of great interest to those who study the spatial properties of climate, as distinct from the global total aspects. But I wonder just what, if anything, can be said about the climate at the earth’s poles insofar as any of those cyclic events having any influence.
What surprises me most about climate discussions; at least to the extent that Anthony has been able to generate them on this site, is how seldom there is any mention of the sun as having anything at all to do with the climate. Well we have had a very interesting sunspot free year; but the climate IPCC supporters have gone out of their way to dismiss the sun and its changes as being involved in the earth climate. They seem to focus only on the effect of GHG (except water), and are quite unfazed by the observable fact (from ice cores at least), that atmospheric CO2 changes, have never ever led to subsequent global surface temperature changes; although the converse is clearly true.
Feedback systems always have a propagation delay; so it is never possible for the output signal (effect) to occur before the input signal (cause).
George
It looks like this thread is wrapping up now so I just wanted to say thanks to everyone for their comments and questions and special thanks to Anthony Watts for allowing me to put forward this method/technique for adjusting temperatures for the natural variation caused by various ocean indices/patterns.
I’m still working on the reconstruction. One thing I noticed in trying to model southern hemisphere temperatures is that the SH has a great deal of unusual up and down swings in temperatures (which are not evident in the northern or tropical series). The same swings occur in the raw southern Atlantic ocean temperatures I had downloaded and when I tried to employ the same smoothing techniques that are applied to the ENSO and AMO, the explanatory power of the swings in the raw data disappeared. So, I am trying to use the unadjusted, unsmoothed raw data for the Nino 3.4 region and the AMO regions (there is not as much of an upward trend in the raw unsmoothed AMO data BTW). The raw unsmoothed ocean temp data provides a more faithful reconstruction of global temperatures but there is a little too much “squiggle” afterwards. I haven’t decided whether to just use the raw data or employ a 3 month smooth instead of the 5 month employed on the Nino 3.4 region index for example.
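For anyone who wants to experiment with those smoothing choices, here is a minimal sketch of a simple moving average; the window lengths are the 3- and 5-month options discussed above, and the index values are made up:

import numpy as np

def moving_average(series, window):
    """Centered moving average; note the first and last few values are edge-biased."""
    return np.convolve(series, np.ones(window) / window, mode="same")

# Made-up monthly values standing in for a raw Nino 3.4 series.
raw = np.array([0.2, 0.5, 1.1, 0.9, 0.3, -0.4, -0.8, -0.2, 0.1, 0.6, 1.0, 0.4])

print("raw           :", raw)
print("3-month smooth:", moving_average(raw, 3).round(2))
print("5-month smooth:", moving_average(raw, 5).round(2))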
We should draw some conclusions, however, as a result of this thread.
1. Temperatures can be adjusted for the ENSO, the AMO and the southern Atlantic natural variations.
2. Temperatures should, in fact, be adjusted for these variations (I mean when you are analyzing them, not the actual temperatures as there has been too much “adjustment” in the record already). These variations are just masking the real global warming signal underneath and some incorrect conclusions have been drawn because of that. This lack of recognition about natural variation has caused some to draw incorrect conclusions about temperatures in the 1980s and 1990s and also contributed to the global cooling scare of the 1970s. The downswing in temperatures in the 50s, 60s and up until 1975 was really just caused by the downswing in the AMO. This reconstruction says the underlying warming trend continued throughout this period.
3. Actual global warming to date has been much, much less than the theory originally predicted. The actual track we are on now would produce warming of just 1.3C to 1.6C per doubling (I haven’t confirmed that final number yet.)
4. The latest proposition from the theory says that the deep oceans are absorbing some of the increase expected and we will still get to the 3.0C or 3.25C per doubling number, it will just take longer. I agree there has been warming of the deep oceans and I have no reason to say the theory is not correct BUT …
5. The analysis says if there is not an uptick in temperatures in the next 5 years or so, we will have moved so far off the global warming trend expected that it will, in fact, likely take well over 100 years to get there (I’m going out on a limb here and saying that my trendline indicates 500 years.)
6. The global warming researchers need to be clear about the actual trendline for temperatures that they expect now. We deserve to know. (I think they have projected a timeline but they do not want to be clear at this time since it may create less urgency in the issue.)
7. We need to create a new Southern Atlantic Multidecadal Oscillation. It has as big an impact on temperatures as the AMO (in my newest analysis) and it does a very good job of explaining the unusual southern hemisphere temperature trends. The original AMO was discovered by accident when someone noted these north Atlantic sea surface temperatures were correlated with rainfall events in far-off places, even Brazil.
8. We need to reduce the smoothing in these indices since explanatory information is being lost.
Once again, thanks to everyone who participated and special thanks to Anthony.
Jeff Alberts: “We can see this with weather models that CONSTANTLY get things wrong even just the next day.”
Yes, but weather is not climate. Using the analogy of a boiling pot of water, a weather forecast is like attempting to predict where the next bubble is going to rise, whereas a climate statement would be that the average temperature of the boiling water is 100 deg C. So we can make accurate statements about a total situation without necessarily knowing what is happening at the local level.
“As someone else said, even a broken clock is right twice a day.”
The Keenlyside forecasts were for the ‘next decade’, ie 2005-2015, so most of that period is in the future, but sceptics were happy to accept these forecasts without much by way of critical analysis.
“But without practically perfect knowledge of the complex chaotic system being modelled, any results which replicate reality are most likely accidental.”
The models don’t attempt to “replicate reality” so much as to understand the interactions between the various climate factors. Over time, the models have improved, and as my boiling water analogy shows, it is not necessary to possess perfect knowledge of all the features of a climate system to understand the main features. And importantly, there are a number of modellers, using different strategies, arriving at generally similar results.
Richard Courtney: “It is extremely improbable that – within the foreseeable future – the climate models could be developed to a state whereby they could provide reliable predictions of global climate over any time scale.”
Depends what you mean by “reliable predictions”. Besides the warming of the troposphere mentioned previously, climate models have also predicted factors such as cooling of the stratosphere and amplification of warming at the poles, both of which have occurred.
“…nobody claims to be able to construct a reliable predictive model of the human brain.”
No, but we can make reasonably reliable predictions about human behaviour. We can predict that through their lifespan people in developed countries will most likely attend school, change into surly teenagers, rebel a bit, get a job, get hitched, drop a sprog or two, gain some assets etc.
We can also predict individual behaviour if we know someone well enough. The assumption you are making is that climate is random to the point where anything at all could happen. That’s a pre-scientific point of view. The climate might be chaotic, but it’s chaotic within limits.
“And that IPCC report included the ‘Hockey Stick’ 8 times, but the most recent IPCC report dropped it and made no mention of it.”
Ch 6 of the Working Group Report – The Physical Science Basis has an extensive discussion of the “hockey stick” reconstruction of recent temperatures and shows a hockey-type graph on p 467.
“The data were not “shown to be faulty”. And if they had been “shown to be faulty” then that could not mean “This boosts our confidence in the models” because scientists always place observation of reality before any model of assumed reality.”
The models predicted a certain outcome. The initial data failed to support this outcome. Faults were discovered with the data, and the corrected data more closely matched the models. Therefore, it follows that this justification of the models’ outputs can increase our confidence in the efficacy of the models.
Furthermore, we have a case of both “adequate emulation” and “reliable prediction”, your two requirements for validating trust in the models.
Bill Illis:
Thankyou for your excellent summary.
But your summary does not say if you intend to publish. I again say that your work deserves publication and is much too important to remain outside the ‘mainstream’ literature.
And I again thank you for the insights your work has provided.
Richard
George,
Actually the surface of the ocean emits upward more energy flux than the 324 W/m^2 it receives on average from the downwelling atmospheric IR.
It emits 390 W/m^2 as upward IR, 78 W/m^2 by evaporation and 24 W/m^2 by convection, for a total of 492 W/m^2. As a result the surface of the ocean is cooler on average than the bulk below it. This flux difference is supplied by the bulk, which sends upward toward the surface the amount it receives from the sun, 168 W/m^2.
If the 324 W/m^2 of downwelling radiation did not get absorbed by the ocean surface, the bulk would have to supply more than 168 W/m^2 and the system would get cooler.
Your argument that the downwelling radiation can’t affect anything is all wet.
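A quick arithmetic check of the fluxes in that comment; the numbers are simply the ones quoted above, restated to make the bookkeeping explicit:

# Upward fluxes from the ocean surface, W/m^2, as quoted above.
upward = {"longwave IR": 390, "evaporation": 78, "convection/conduction": 24}

# Energy supplied to the surface, W/m^2: downwelling IR plus solar delivered via the bulk.
supplied = {"downwelling IR": 324, "solar via the bulk": 168}

print("total upward  :", sum(upward.values()))    # 492
print("total supplied:", sum(supplied.values()))  # 492
# The two totals balance, which is the bookkeeping behind the argument that the
# surface would run a deficit without the downwelling IR.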
Eric, I would be interested to know just where I said the “downwelling radiation”, presumably the “back radiation” referred to in the NOAA static equilibrium “model”, can’t affect anything; nowhere did I say that.
In fact I specifically said that that returned IR is absorbed in the top ten microns of the surface. Then I added that that led to prompt evaporation which resulted in a cooling of the very surface of the ocean. That doesn’t alter the fact that the top layers of the ocean are warmer than the deeper ocean. Oddly the NOAA chart also claims that back radiation is absorbed by the surface.
I did say that the incoming solar radiation did not have any significant surface effect, nor is it affected by what the back radiation does on the surface (the “skin” argument).
So read what I said, not what you inferred from what I said.
And that 24 W/m^2 you mentioned as a convection item; it may be a convection item in the atmosphere, but since there is no ocean above the surface of the ocean it is hardly convection from the surface; most likely it is an amount conducted from the surface to the atmosphere, which is not a very efficient heat transfer process from liquid to gas. Perhaps solid ground to atmosphere is more effective conduction; but one wouldn’t know that from NOAA’s budget graph, because they have the oceans and the solid ground all at the same 15 deg C. But at a real ground surface temperature of up to as high as 60 deg C, conductive heating of the lower atmosphere would be more effective. This also points out the folly of treating the whole earth as a monolithic object with the same thermal properties everywhere.
I’ll stick to my point that the averaged over the whole earth phony numbers in the NOAA graph do a great job of obfuscating the real physical processes that are actually taking place.
I’ll repeat what I have said several times elsewhere:- CLIMATE is NOT the long term AVERAGE of WEATHER; it IS the long term INTEGRAL of WEATHER.
Nothing useful is learned by averaging weather elements over vastly different terrains and conditions. No part of the planet responds to those averages.
Bill Illis:
I want to thank you for the work you have done; I think it is really very good, and certainly worthy of a peer-reviewed publication.
I have one comment/suggestion. The various climate records (instrumental and proxy) suggest a positive correlation between sunspot activity and global temperatures (Maunder minimum/Little Ice Age, etc.). The sunspot record shows a substantial rise in the long-term average sunspot number (averaged over multiple solar cycles) since early 20th century.
My guess is that adding a trailing average sunspot number to a model based on the ENSO and AMO might yield a model that matches the instrument temperature record even better than the ENSO and AMO model, and might explain a significant portion of the temperature rise since the middle of the 20th century, separate from any contribution from CO2. I am not nearly smart enough to do this myself, but perhaps you or someone else reading this blog may be.
Such a model might even explain the roughly flat temperature trend of the last 10 years, and could be used to predict future temperatures based on the ENSO, AMO, and trailing average sunspot number. If solar cycle 24 is substantially lower in sunspots than cycles 20 to 23 (as about 50% of solar experts seem to think), then a model including sunspots might make predictions of falling average temperatures that turn out to be correct.
The risk is that if we add a sufficient number of arbitrary variables to any statistical model, it is possible to explain almost any historical trend. For example, there may be a correlation between how well the Boston Red Sox played and global temperatures since 1950, but it is hard to see a causal relationship. But in the cases of AMO, ENSO, and solar activity, it is certainly plausible that each is connected in a causal way to the temperature record, so including average sunspot number in a statistical model is not just fishing for a variable that happens to correlate with the temperature record.
Once again, thanks for your efforts.
Hi all,
I have optimized my temperature reconstruction model now.
I opted to use a 60 day smooth on the ocean indices rather than the effective 150 day smooth used now. All the ocean indices are detrended with no warming signal remaining (it was not that high to start with when one uses the raw data).
The global warming figure works out to +1.59C per doubling (0.9C more to go) by 2070.
The global warming models originally predicted +3.25C by 2070 but have now pushed that increase out to 2100.
Here is what the reconstruction looks like – Not too bad.
Here is what the global warming line out to 2100 looks like (this might be easier to view than some of the other log warming charts. It is interesting that we have now moved into that portion of the log warming territory where the growth rate is very close to linear – it will flatten out later but we are now in the linear rate territory – the models predict 0.2C per decade while we are only increasing at 0.09C per decade.)
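A minimal sketch of the per-doubling arithmetic behind those numbers; the CO2 concentrations are illustrative round values, not the ones in Bill’s spreadsheet:

import math

def warming(c_ppm, c0_ppm=280.0, sensitivity=1.59):
    """Warming relative to the reference concentration: Delta T = S * log2(C / C0)."""
    return sensitivity * math.log2(c_ppm / c0_ppm)

for c in (315, 385, 560):  # illustrative: late 1950s, late 2000s, doubled pre-industrial
    print(f"CO2 {c} ppm: {warming(c):.2f} C at 1.59 C/doubling, "
          f"{warming(c, sensitivity=3.25):.2f} C at 3.25 C/doubling")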
Once more again, thanks to everyone.
Bill Illis: An admirable effort, congratulations.
Norm K.: Isn’t the remaining 5% that which is (re)transmitted regardless of concentration?
In any case, 1.6 degrees C for a doubling of CO2 is an upper limit, it cannot be higher. The value may well be an order of magnitude lower as Spencer’s work indicates.
Gary Gulrud, can you give me a link to Spencer’s work?
Bill Illis (19:49:46) :
I have optimized my temperature reconstruction model now.
Interesting, no ‘solar’ input, unless hidden in the various ‘AMO’s…
Interesting model. Re: no ‘solar’ input. I still think that solar input is already a part of the equation in that ocean cycles, maybe even jet stream movement, and other atmospheric cycles have as an input, solar variables. But in terms of climate prediction, these cycles alone (which possibly include solar inputs) will do nicely. What say you Leif?
And one more thing about the recent increase. What about the rather sudden decrease in measurement stations at about the time temps went up? This is like doing grass plot experiments but changing the number of plots midway. You corrupt the data source by possibly eliminating non-homogeneous data that if it had been kept, would have resulted in a different picture.
Have you done a model against satellite data?
Pamela Gray (14:05:28) :
But in terms of climate prediction, these cycles alone (which possibly include solar inputs) will do nicely. What say you Leif?
What say Bill?
Leif and Pamela,
Above I noted there are repeating autocorrelation cycles in the residuals (after adjusting for the ocean indices influence) which are curiously close to the numbers one would expect to see with the solar cycle.
There is a slight 5.5 year, 9-11 year repeating cycle, 22 years, a really big one at 25 years, and the beginnings of a cycle at 44 years.
Above there is a link to a paper by Michael Mann (before he got into tree rings) where he found the same thing in the Hadcrut3 data. I found it as well in the Hadcrut3 data but also in my residuals.
If there was a solar cycle influence, with this analysis method you would expect to see some repeating cycles around the solar cycle numbers (actually NOT right at them but close to them), close to 5.5 years, close to 11 years, close to 22 years etc. The problem is the solar cycles are irregular so a cycle might appear for a time period and then not appear afterward.
Given the irregular nature of these signals (sometimes before and sometimes after the expected solar cycle timelines), the best place to actually search for them would be at the solar cycle timelines just where they are not supposed to appear (that is very hard to explain). This is where I found them (other than the 25 year signal) so it seems there is definitely a solar cycle influence in the numbers.
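For anyone who wants to repeat that kind of check, here is a minimal sketch of an autocorrelation scan at solar-cycle-like lags; the residuals array is a random placeholder, not Bill’s data:

import numpy as np

def autocorrelation(x, lag):
    """Pearson correlation of a series with itself shifted by `lag` samples."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

# Placeholder monthly residuals; substitute the residuals left after the ocean-index fit.
residuals = np.random.default_rng(0).normal(size=1200)  # 100 years of monthly values

for years in (5.5, 9, 11, 22, 25, 44):   # the cycle lengths mentioned above
    lag = int(round(years * 12))
    print(f"lag {years:>4} yr: r = {autocorrelation(residuals, lag):+.3f}")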
Given the irregular timing of the solar cycles, it would be really, really, really difficult to pull it out (in a practical way) and it wouldn’t help with a monthly temperature reconstruction or estimating the solar influence of today or last year or this year.
Given that it is very difficult to even see a solar cycle, how would one adjust for an increase in solar irradiance over time?
I decided to trust Leif’s judgement and just assume there is a small solar influence which might be as high as +/-0.1C but it is not a focus of this model.
There was a recent paper, however, that indicated solar irradiance drove the AMO cycle down during the ice ages.
I note the AMO and the southern counterpart went down in the early 1900s when the solar cycle was low. They both go back up as the solar cycle revved up, up until about WWII. Both indices suddenly fall after WWII just as the solar influence was really peaking at 1950. etc. etc. So, I see some correlation but it is not consistent and I am looking for solid mathematical formulae to rely on, not conjecture.
The Sun plays a huge role in an El Nino, of course; it’s just that the Trade Winds and ocean currents/upwelling are the main drivers of this – causing the ocean surface in the Nino region to stall in place and be heated day after day by the equatorial Sun. No Trades, no currents, and there would be a permanent El Nino regardless of the solar cycle. Given the wild swings in the ENSO, I don’t see a changing Sun in the numbers, just natural variation in the Trades and currents.
Such good stuff. Bill, your work is eye candy. I still have some issues with CO2 related warming vs the influence of data error that could explain some of the warming, but between Leif and Bill, I think we got this covered, no?
From Steve S (13:39:37) :
Can someone help me understand something?
How is it that the global warming issue became a liberal vs conservative issue? I am blown away by how it sometimes seems more a political issue than a scientific one.
-end quote
I suspect that it is due to the perception, fostered IMHO by the AGW proponents, that being “green” is liberal while being pro-fossil fuels is being pro-business is being “conservative”. That being anti-AGW means you are a coal stooge or an oil lackey.
This is clearly false, but is the perception.
Bill, did you see some of the articles about the equatorial oceanic response to solar heating and the chimney effect? I just read about a study similar to what you have done, reported by:
The study model was even simpler than yours but I believe had more calculations for ocean cycles and no CO2 forcings. Over at Icecap there is an article about Australia’s rainfall and cloud patterns. Could it be that the equatorial chimney cooling theory has a cyclic pattern related to things like cosmic ray/ozone fluctuations? Could this be the butterfly wings that create the nor’easter? Not to make a big case. I think you are on the right track using simple model constructions of the major players in climate forcing. My question may be related to a forcing that has no greater effect than the CO2 I am exhaling right now.
From Fernando (in Brazil) (17:32:02) :[…]
I can imagine any structure to 4ºC and pressure equal to 100 atm. (In the deep ocean)
-end quote
Yup! And don’t forget the Ca+ ions and …
And on the ocean skin issue: Having lived on a boat for a few years the notion that the surface of the ocean is at all stable is very broken. Wind, waves, ripples, currents, cyclones, mists, rain, evaporation, fish jumping, algae blooms, fog, plankton swarms, sea birds, poop, … It mixes down to the first thermocline and bits mix up into the air at least a few dozen (hundred?) feet. Ask any sailor why they have slickers…
Soooo chaotic…. Give it a skin in a model? Yeah, right… gonna need some better proof to swallow that one.
Bill, loved the model above! I think you have a winner.
From Jeff Alberts (04:37:16) :
Running 47 different models hundreds of times and picking the ones that “match” isn’t evidence of anything except chance.
-end quote
Amen! And this regularly kills stock traders who invent new trend following systems, and takes down ‘quant’ funds and… Every so often a new ‘hot hand’ with a new model will win a streak in the stock market, then when they go down in flames everyone is surprised. It always turns out to be the same thing. They ‘back tested’ and all was well, then reality kicked in. Data modeling is not proof of truth.
That’s part of why I’m a skeptic. I’ve seen this movie before too many times.
Bill Illis:
I downloaded your spreadsheet and have given some additional thought to your model.
An implicit assumption in the model is that CO2 is a reasonable proxy for all greenhouse gases; that they have risen more or less proportionally over the last century. While this may not be completely correct, it is probably not too far off, since the sources for these gases (industrial activities, agriculture, transportation, and electricity production) have all increased pretty much in parallel over the last century.
The global circulation based climate modelers usually say that there is substantial (and unavoidable) warming on the way, even if CO2 were to be held at today’s level, due to the long time required to warm the oceans. Lags of 20 to 30 years up to a thousand years are often claimed, and this long lag is used to at least partly explain why the global average temperature has not already increased much more than it would have based on the assumed level of net radiative forcing from increased greenhouse gases.
This seems like a reasonable argument, since a quick estimate of the rate of temperature change in 1000 meters of ocean (not even considering any heat entering the deep ocean!), with about 2 watts per square meter of radiative forcing due to increases in CO2, NO2, and methane since the pre-industrial era (as currently assumed by the IPCC) gives an increase of only about 0.011 degree per year in ocean temperature. And as everyone seems to agree, the oceans rule the climate.
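Steve’s 0.011 degree per year figure is easy to reproduce to within rounding; a minimal back-of-the-envelope sketch with standard seawater properties (the exact answer depends on the assumed forcing, depth and heat capacity):

forcing = 2.0            # W/m^2, assumed net radiative forcing
depth = 1000.0           # m of ocean assumed to take up the heat
density = 1025.0         # kg/m^3, seawater
specific_heat = 3990.0   # J/(kg K), seawater, approximate
seconds_per_year = 3.156e7

joules_per_year = forcing * seconds_per_year              # J per m^2 per year
column_heat_capacity = depth * density * specific_heat    # J per m^2 per K

print(f"{joules_per_year / column_heat_capacity:.3f} C per year")
# About 0.015 C/yr with these round numbers, the same order as the 0.011 quoted above.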
If the climate modelers are correct about the ocean driven lag in temperature rise, then it should be possible to improve the performance of your model by substituting a trailing average CO2 for the monthly values, to account for uptake of heat by the oceans. Incorporating a lag in the CO2 data should (of course) also increase the CO2 constant in the optimized model, perhaps more in line with the IPCC’s projected warming.
When I incorporated a trailing average CO2 in place of the monthly CO2, I found the following:
1. A 12 month trailing average CO2 value yields a very slight improvement in the scatter plot best fit (R^2 of 0.7829 versus 0.7828, slope of 0.9615 versus 0.9613, and the same intercept of -0.005). The CO2 model constant increases from 2.7298 to 2.7560.
2. A 24 month trailing average yields exactly the same scatter plot values for R^2, slope, and intercept as the non-averaged monthly CO2 data, and the CO2 model constant increases to 2.782.
3. A 5 year trailing average yields a scatter plot R^2 of 0.7821, slope of 0.9601, intercept of -0.0057, and a CO2 constant of 2.861.
4. A 10 year trailing average yields R^2 of 0.7805, slope of 0.9577, intercept of 0.006, and a CO2 constant of 2.994.
5. A 20 year trailing average yields R^2 of 0.7746, slope of 0.9532, intercept of -0.0067, and a CO2 constant of 3.264.
6. A 30 year trailing average yields R^2 of 0.7713, slope of 0.9518, intercept of -0.0069, and a CO2 constant of 3.564.
The longer the averaging period, the poorer the fit, and the higher the CO2 constant, as expected. The things I find surprising in the above are:
a) A great deal of future warming “already in the pipeline” does not seem to be supported by the historic temperature and CO2 data, since the best fit is with a short (12 month) trailing average CO2 value. Reaching a CO2 constant of 4.7 (as suggested by IPCC models) would require extremely long ocean temperature lags, perhaps 50 years or so.
b) The change in R^2 values for different lengths of trailing average for CO2 is quite modest, while the change in the CO2 constant for the model is quite large.
So I guess we can say the most likely CO2 constant is a low one, but a model based on long trailing average CO2 concentration lag yields a much higher CO2 constant, and is not a lot different in R^2. I am not sure if a change in scatter plot R^2 from 0.7829 (12 months) to 0.7713 (30 years) is statistically significant, but perhaps you can comment on this. (In other words, I am not sure if we can confidently discount the possibility of a long ocean lag time based on the modest decrease in R^2.)
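A minimal sketch (not Steve’s actual spreadsheet) of the kind of sweep described above: build trailing averages of ln(CO2) over several window lengths, regress temperature on each, and record the fit. The two data arrays are placeholders for the real temperature and CO2 columns.

import numpy as np

def trailing_average(x, window):
    """Backward-looking moving average; the first window-1 points are left as NaN."""
    out = np.convolve(x, np.ones(window) / window, mode="valid")
    return np.concatenate([np.full(window - 1, np.nan), out])

def fit(y, x):
    """Ordinary least squares of y on x plus an intercept; returns (slope, R^2)."""
    mask = ~np.isnan(x) & ~np.isnan(y)
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    resid = y[mask] - (slope * x[mask] + intercept)
    r2 = 1 - np.sum(resid ** 2) / np.sum((y[mask] - y[mask].mean()) ** 2)
    return slope, r2

# Placeholder monthly series; substitute the adjusted temperature series and the CO2 record.
n = 1500
co2 = 315 + 0.12 * np.arange(n)                                     # ppm, crude rising series
temperature = 0.0006 * np.arange(n) + np.random.default_rng(1).normal(0, 0.15, n)

for window in (12, 24, 60, 120, 240, 360):   # trailing-average lengths like those tried above, in months
    slope, r2 = fit(temperature, trailing_average(np.log(co2), window))
    print(f"{window:4d}-month trailing average of ln(CO2): coefficient = {slope:6.2f}, R^2 = {r2:.4f}")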
I think it would be interesting to freeze the model parameters (as you posted at) and see how the model does in predicting temperatures over the next 10 years. It seems likely your model will put to shame the many models the IPCC relies on.
Steve Fitzpatrick,
Good stuff there. I tried out your idea of lagging CO2 by 30 years and it works just as well as anything I have done.
I wouldn’t worry about an R^2 that drops by 0.01. The F-statistic drops by a bigger relative margin but it is still a very significant number.
I will have to think about whether it is really valid to lag CO2 by 30 years (assuming that the oceans are absorbing some of the increase from the atmosphere to cause a net lag effect of 30 years.) (I’m just going to focus on 30 years rather than 10 since its effects are the greatest in terms of the warming conclusion.)
There is data that shows annual CO2 changes are closely related to changes in temperatures (annual CO2 changes are very closely correlated with changes in temperatures, lagged by 5 months; CO2 is still increasing, but the rate is affected 5 months after temperature changes).
This suggests there is a relationship which is more immediate than 30 years, although this relationship is one-way, which is opposite to the one-way effect suggested by the 30 year CO2 lag. [I have to put this chart in since it is so weird. I saw a similar chart on Icecap today so I had to try it out with my data going back to 1958.]
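The 5-month figure Bill mentions is the sort of thing a simple lagged-correlation scan will pick out; a minimal sketch with placeholder series (a 5-month lag is deliberately built into the fake data so the scan has something to find):

import numpy as np

def lagged_correlation(x, y, lag):
    """Correlation of y against x when x leads y by `lag` samples (lag >= 0)."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# Placeholder monthly series standing in for temperature changes and annual CO2 changes.
rng = np.random.default_rng(2)
temp_changes = rng.normal(size=600)
co2_changes = 0.8 * np.roll(temp_changes, 5) + rng.normal(0, 0.3, 600)

best = max(range(13), key=lambda lag: lagged_correlation(temp_changes, co2_changes, lag))
print("best lag (months):", best)
print("correlation there:", round(lagged_correlation(temp_changes, co2_changes, best), 3))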
I guess the other thing is the models are not really built with a 30 year lag in CO2 built into the assumptions. As you can see in this chart made from the IPCC Third Report (which means they had to adjust the warming line to include actual temperatures up to 2000 – I’m not going to give them that break), the warming levels in the models had temperatures starting to rise at an exponential rate by 1970 or so. (It does flatten out to a nearly linear line at a certain point, and then it will flatten out further in the future – but there is a pattern in the slopes of the line in this logarithmic CO2 impact.)
Whereas the warming model based on a 30 year CO2 lag doesn’t start rising in an exponential sense until about 1985 or 1990 (I have slightly different numbers than you based on my newest model).
So I have to conclude this is not how it is supposed to work.
But, it is something to watch for sure. Watch the temperature response from the next big El Nino. Like I said before, there will have to be an uptick in temps in the next five years or the modelers will have to go back to the drawing board.
Good work. Made me go whoa when I saw the lag numbers were just as good.
Bill Illis:
After I wrote my last comment, I realized that it would be better to use the trailing average of the natural log of the CO2 concentration instead of the trailing average of the CO2 concentration itself. The trailing average of the log of the CO2 concentration is a more theoretically defensible parameter (Beer’s Law and all) than the trailing average of the CO2 concentration, when the objective is to combine past and present contributions to a long term oceanic temperature rise.
However, it looks like this makes very little difference; the model results (at least with the version of your model that I have) are pretty much the same, whether using trailing average of CO2 or trailing average of Ln(CO2). I would much appreciate if you could post a link to the current version of your model (Excel I assume), including the added southern ocean oscillation, since this seems to be a more accurate model than what I downloaded.
I hope you will forgive me Bill, but my comment on trailing averages of CO2 was in part a ruse to get your attention. The truth is that I believe other factors (like the solar cycle and the long term trend in solar cycles) are very important, and that if it were possible to identify and quantify other significant contributors to global temperature change, then CO2 and other greenhouse gases would become much less important in explaining the temperature trends of the last 125 years. “To the man who has only a hammer, most every problem resembles a nail.” When we attribute global warming to greenhouse gases alone, we limit ourselves to only one tool.
It seems to me preposterous to ignore the large and well documented natural temperature changes of the last several thousand years (yes, the Vikings really did grow crops in Greenland), and equally preposterous to offer no plausible explanation for these climate changes. Yet this is exactly what the IPCC climate models all do. The temperature changes and rates of temperature change of the last three thousand years are comparable in size and rate to those of the last 125 years, yet nobody in the IPCC seems to take note of this. (Or worse, they do their best to discredit the historical record in order to minimize the size and rate of past climate changes, ex post facto, so that they can be safely ignored in the greenhouse gas models.)
The IPCC models would actually be quite humorous, were it not that the predicted catastrophic increases in global temperature, sea level, storms, droughts, floods, hurricanes, tornadoes, general calamity, pestilence, and migraine headaches (OK, maybe not headaches) might motivate the public to accept draconian cuts in fossil fuel usage and rapid shifts to more expensive non-fossil energy sources. These solutions to the “global warming problem” will have serious negative economic consequences in the short to medium term, especially for the poorest of people. The existing climate models can do real harm to real people, and I predict that history will judge them and their purveyors very harshly.
But enough of my sermonizing.
What I think we need (‘we’ being all the rest of humanity, plus you and me) is a truly reasonable climate model; not just a model for the last 125 years, but for the last 5,000 years. Since human influence is clearly small before the last 150 years, any reasonable climate model must include solar forcing and perhaps other factors which can explain documented historical climate changes. The solar activity proxies (historical sunspot numbers, C14, Be10) would seem a reasonable starting point in any such model.
We need a climate model that lets us step back enough to see the forest, not just the trees… and the CO2 they consume.
Cheers.
From Steve Fitzpatrick (20:41:39) :
The IPCC models would actually be quite humorous, were it not that the predicted catastrophic increases in global temperature, sea level, storms, droughts, floods, hurricanes, tornadoes, general calamity, pestilence, and migraine headaches (OK, maybe not headaches)
-end quotes
Don’t be so hard on yourself! If true, AGW will cause migraines! Barometric pressure changes are a common trigger: more storms and stronger storms means more barometric flux therefor more migraines. No fooling.
From Pamela Gray (20:11:26) :
Could it be that the equatorial chimney cooling theory has a cyclic pattern related to things like cosmic ray/ozone fluctuations?
-end quote
Don’t know how related this is to eq. chimneys, but… From
Tropospheric ozone can act both as a direct greenhouse gas and as an indirect controller of greenhouse gas lifetimes. As a direct greenhouse gas, it is thought to have caused around one third of all the direct greenhouse gas induced warming seen since the industrial revolution.
[…]
The largest net source of tropospheric ozone is influx from the stratosphere.
-end quote
Would not lower solar output lead to less UV so less ozone formation? A direct solar driver of GHG. Could that get ‘levered up’ in some way?
Hi Steve and E.M.
I thought some more about the ocean lag and the lag/trailing average of CO2 impact and I think it is important that we think about how this could work in physics terms which would help with deciding which CO2 etc variables to use.
First, the greenhouse effect of CO2 operates at the speed of light. It is photons of light in the EM spectrum that we are talking about here. It is photons of light which are providing the energy here.
A photon comes in from the Sun, hits a rock on the beach in the morning and warms it up. Overnight, that rock cools off and gives back that photon in the IR spectrum upwards toward the atmosphere.
Half the time, that photon travels right through the atmosphere and goes right out into space. But now, there is slightly more CO2/GHGs in the atmosphere and that photon gets captured by one of the extra CO2 molecules.
An electron in the CO2 molecule moves to a higher energy state and in a picosecond, decides it is more comfortable at the lower energy state, and gives it back up in all directions.
That photon now skips around the atmosphere from nitrogen molecule to oxygen molecule back to a CO2 molecule and so on. The atmosphere is now slightly warmer – one photon’s worth, that is. In a few days or less, that photon will be lost to space or be reflected back to the ground.
Here is where the warmer ocean comes in. That photon gets reflected by any one of the warmer atmosphere molecules toward the ocean surface.
The ocean is now just a fraction warmer than it was before, has a fraction more energy in its electrons than before, so it rejects the photon and reflects/gives back the photon to the atmosphere either right away or in a short time – where it happily skips around the atmosphere for a few more days or is reflected into space etc.
So, the now warmer ocean just allows more CO2 captured photons to stay in the atmosphere for some period of time whereas when they were cooler, the oceans would have captured some of those photons.
So, in effect, it is still the CO2 of today that we should be concerned with. It is just that a now warmer ocean (or a warmer land) is a variable in how much impact that CO2 will have.
CO2 of 30 years ago may have warmed the oceans but we are still operating at the speed of light here and it is today’s CO2 which provides the impact.
It may help to talk about some of the lags in the climate as well.
Land temperatures lag the equinox.
Overnight, 30% to 50% of the heating from the day is lost. If the Sun stopped working for two or three days, what would the temperature in your backyard be?
So, there are some lags in how much energy/photons can be stored for periods of time, but these are not really long.
The deep ocean warming does take much longer, 500 to 1,000 years but that just means the oceans will continue going on absorbing energy/photons for a long time, not that the CO2 of 30 years ago is impacting today.
If we add all that up, we have to use today's CO2 in the model; it is just that the impact from each individual molecule will slowly rise as the land and sea surface and deep oceans warm. But then, each individual CO2 molecule has less and less impact, logarithmically, as its concentration rises.
That was long, but I thought it was important to run this little thought experiment for the “warming in the ocean pipeline” explanation as well.
Bill Illis:
I agree with most of what you said about heat accumulation and CO2. Over land, you are absolutely right about the effect of CO2, since the heat capacity of the land is quite small, and solar heating is mainly lost to radiative cooling in short order.
However over ocean, I think the situation is a little more complicated. It is true that the surface ocean temperature lags the solar seasons by about 80 days, at least outside the tropics. This does not mean that some of the heat from sunlight could not be lost to deeper layers. If the first several hundred meters of ocean have (on average) increased in temperature over the past 100 years, then this would represent a significant net accumulation of heat in the ocean. Perhaps measured increases in temperatures over a range of depths would help clarify how much heat has in fact been absorbed (though I do not know if these measurements exist). The temperature lapse rate for the ocean (often a 10C drop over the first 100 meters when the ocean has a relatively warm surface) suggests that slow heat loss to deeper water is likely, but I have no idea how it could be modeled accurately.
So probably the majority of radiative forcing from CO2 (and other greenhouse gases) is short term, but some undefined fraction is absorbed by the ocean, and so represents a lag in the global temperature response. The existing average ocean and land temperature data seem to support this; there has been significantly more increase in average temperature on land than in ocean surface water. This would be easy to add to your model by splitting the two portions, an immediate fraction (for example, 70% of the non-lagged CO2 concentration) and a long term trailing average portion (for example 30% of the lagged CO2 concentration).
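Just to make the bookkeeping concrete, here is a toy sketch of that split (my own illustration, not your model; it assumes the common logarithmic form for CO2 forcing with an arbitrary 5.35 W/m² coefficient and a 30-year trailing average):
import numpy as np

def co2_forcing(c, c0=280.0):
    """Approximate radiative forcing (W/m^2) for a CO2 concentration c in ppm."""
    return 5.35 * np.log(c / c0)

def blended_forcing(co2_series, immediate_frac=0.70, lag_years=30):
    """Blend an immediate response with a trailing-average (lagged) response."""
    co2 = np.asarray(co2_series, dtype=float)
    # Trailing average of the last lag_years values at each point in time.
    lagged = np.array([co2[max(0, i - lag_years + 1): i + 1].mean()
                       for i in range(len(co2))])
    return (immediate_frac * co2_forcing(co2)
            + (1.0 - immediate_frac) * co2_forcing(lagged))

# Toy CO2 history: 280 ppm rising to about 400 ppm over 150 years.
years = np.arange(150)
co2 = 280.0 + 120.0 * (years / 149.0) ** 2
print(blended_forcing(co2)[-5:])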
Unfortunately, since the fit to the data is almost equally good for either immediate or trailing average CO2 concentrations, I don’t think it would be possible to tell from the model what the correct split would be.
I will think about this some more.
cheers.
|
https://wattsupwiththat.com/2008/11/25/adjusting-temperatures-for-the-enso-and-the-amo/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Introduction
In a previous tutorial we have seen how to use Cordova and Ionic Native 3.x+ to create and show the native Action Sheet in Ionic 2. Let's now see how to display an Ionic implementation of the Action Sheet without using any Cordova plugin.
Required Steps
Start by creating a new Ionic 2 project using the Ionic CLI v3.
ionic start ActionSheetControllerExample blank
You can also use an existing project.
Next, open src/pages/home/home.html and add a button to trigger the Action Sheet component.
<button ion-button (click)="openActionSheetController()" class="button">Open Action Sheet</button>
Then open src/pages/home/home.ts
Import and Inject ActionSheetController
import { Component } from '@angular/core';
import { ActionSheetController } from 'ionic-angular';

@Component({
  selector: 'home',
  templateUrl: 'home.html',
})
export class HomePage {
  constructor(public actionSheetCtrl: ActionSheetController) { }
}
Then add openActionSheetController()
openActionSheetController(){
  let actionSheet = this.actionSheetCtrl.create({
    title: 'Action Sheet Title',
    buttons: [{
      text: 'Hide',
      handler: () => {
        let navTransition = actionSheet.dismiss();
        return false;
      }
    }]
  });
  actionSheet.present();
}
So first we create the actionSheet object with the required options, such as the title and the buttons.
Each button has its own text and a handler which gets executed when the button is clicked.
Then we use the present() method of the actionSheet object to display the Action Sheet to the user.
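As a small extension (an illustrative sketch, not part of the original tutorial), the buttons array also accepts a role option, so you can add a cancel button like this:
openActionSheetController(){
  let actionSheet = this.actionSheetCtrl.create({
    title: 'Action Sheet Title',
    buttons: [
      {
        text: 'Hide',
        handler: () => {
          actionSheet.dismiss();
          return false;
        }
      },
      {
        text: 'Cancel',
        role: 'cancel', // dismisses the sheet when tapped or when the backdrop is clicked
        handler: () => {
          console.log('Action Sheet cancelled');
        }
      }
    ]
  });
  actionSheet.present();
}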
Conclusion
We have covered how to use the ActionSheetController to create and display an Action Sheet with a set of custom buttons.
You can also see this tutorial for how to display a native action sheet using Cordova and Ionic.
|
https://www.techiediaries.com/ionic-action-sheet-controller/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Message Queuing Error and Information Codes
Updated: July 19, 2016
Applies To: Windows 10, Windows 7, Windows 8, Windows 8.1, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows Server Technical Preview, Windows Vista
Error and information codes are returned in two ways: directly by Message Queuing functions and COM object methods, or via the optional aStatus array specified in an MQQUEUEPROPS, MQMSGPROPS, MQQMPROPS, or MQPRIVATEPROPS structure.
The following error and information codes are defined in mq.h.
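As a quick illustration of the first mechanism (a code returned directly by a Message Queuing function), the sketch below checks the HRESULT returned by MQOpenQueue against a few of the codes listed on this page. It is not from the original documentation, and the queue path is a made-up example:
#include <windows.h>
#include <mq.h>      // Message Queuing API and the MQ_* codes listed below
#include <cstdio>

#pragma comment(lib, "mqrt.lib")

int main()
{
    QUEUEHANDLE hQueue = NULL;
    // Hypothetical private queue on the local computer, opened for reading.
    HRESULT hr = MQOpenQueue(L"DIRECT=OS:.\\PRIVATE$\\example_queue",
                             MQ_RECEIVE_ACCESS, MQ_DENY_NONE, &hQueue);

    if (hr == MQ_OK)
    {
        printf("Queue opened.\n");
        MQCloseQueue(hQueue);
    }
    else if (hr == MQ_ERROR_QUEUE_NOT_FOUND)
    {
        printf("MQ_ERROR_QUEUE_NOT_FOUND (0xC00E0003)\n");
    }
    else if (hr == MQ_ERROR_ACCESS_DENIED)
    {
        printf("MQ_ERROR_ACCESS_DENIED (0xC00E0025)\n");
    }
    else
    {
        printf("Message Queuing error: 0x%08lX\n", (unsigned long)hr);
    }
    return 0;
}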
MQ_ERROR (0xC00E0001)
Returned when a nonspecific Message Queuing error occurs.
MQ_ERROR_ACCESS_DENIED (0xC00E0025)
Returned when access to the specified queue or computer is denied.
When returned, verify that you are allowed the access rights for performing the operation on the applicable object (for example, creating, setting properties, or deleting a queue).
MQ_ERROR_BUFFER_OVERFLOW (0xC00E001A)
When reading messages, the buffer supplied for a property is too small. In the case of the message body buffer, the portion of the message body that fits is copied to the buffer, but the message is not removed from the queue.
MQ_ERROR_CANNOT_CREATE_CERT_STORE (0xC00E006F)
Returned when Message Queuing cannot create a certificate store for its internal certificate. This error is returned only when you do not have permission to manipulate your own profile.
MQ_ERROR_CANNOT_CREATE_HASH_EX (0xC00E0081)
Returned when Message Queuing cannot compute the hash value for validating an MSMQ 2.0 signature in an authenticated message.
MQ_ERROR_CANNOT_CREATE_PSC_OBJECTS (0xC00E0095)
Returned when an attempt is made to create an object that should be owned by a primary site controller and the operation cannot be performed.
MQ_ERROR_CANNOT_DELETE_PSC_OBJECTS (0xC00E0083)
Returned when an attempt is made to delete an object that is owned by a primary site controller and the operation cannot be performed.
MQ_ERROR_CANNOT_LOAD_MQAD (0xC00E0085)
Returned when the dynamic-link library Mqad.dll cannot be loaded.
MQ_ERROR_CANNOT_SIGN_DATA_EX (0xC00E0080)
Returned when the hash calculated from the message properties cannot be encrypted with the sender's private key to create an MSMQ 2.0 signature, for example, when an MSMQ 2.0 signature is requested for an authenticated message sent to a multiple-element format name or a distribution list.
MQ_ERROR_CANNOT_IMPERSONATE_CLIENT (0xC00E0024)
Returned when the RPC server cannot impersonate the client application. The security credentials could not be verified.
MQ_ERROR_CANNOT_OPEN_CERT_STORE (0xC00E0070)
Returned when Message Queuing cannot open the certificate store for its internal certificate.
MQ_ERROR_CANT_RESOLVE_SITES (0xC00E0089)
Returned when the sites where the computer resides cannot be resolved. The subnets in the network may not be configured correctly in Active Directory Domain Services (AD DS) or one or more sites may not be configured with the appropriate subnet.
MQ_ERROR_CERTIFICATE_NOT_PROVIDED (0xC00E006D)
Returned when the sending application attempts to send a message with a request for authentication without a certificate or with a security context that does not include a certificate.
MQ_ERROR_COMPUTER_DOES_NOT_SUPPORT_ENCRYPTION (0xC00E0033)
Returned when encryption is requested and the computer (source or destination) does not support encryption operations.
MQ_CORRUPTED_QUEUE_WAS_DELETED (0xC00E0068)
Returned when the file for the queue specified in the Lqs folder has been deleted because it was corrupted.
MQ_ERROR_CORRUPTED_SECURITY_DATA (0xC00E0030)
Returned when a cryptographic (CryptoAPI) function has failed.
MQ_ERROR_COULD_NOT_GET_ACCOUNT_INFO (0xC00E0037)
Returned when Message Queuing cannot retrieve the account information for the user.
MQ_ERROR_COULD_NOT_GET_USER_SID (0xC00E0036)
Returned when Message Queuing cannot retrieve the SID from the thread access token or from the security context specified in the message.
MQ_ERROR_DELETE_CN_IN_USE (0xC00E0048)
(MSMQ 1.0 only.) Returned when the specified connected network (CN) cannot be deleted because it is defined in at least one other computer.
MQ_ERROR_DS_ERROR (0xC00E0043)
Returned when an internal error in the directory service is issued.
MQ_ERROR_DS_IS_FULL (0xC00E0042)
(MSMQ 1.0 only.) Returned when the MSMQ Information Store (MQIS) is full.
MQ_ERROR_DS_LOCAL_USER (0xC00E0090)
(Introduced in MSMQ 3.0.) Returned when a local user, who is authenticated as an anonymous user, attempts to access AD DS.
Only authenticated domain users can access AD DS.
MQ_ERROR_DTC_CONNECT (0xC00E004C)
Returned when Message Queuing cannot connect to the Microsoft® Distributed Transaction Coordinator (MS DTC).
MQ_ERROR_ENCRYPTION_PROVIDER_NOT_SUPPORTED (0xC00E006B)
Returned when the cryptographic service provider specified is not supported by Message Queuing.
MQ_ERROR_FAIL_VERIFY_SIGNATURE_EX (0xC00E0082)
Returned when Message Queuing cannot verify the MSMQ 2.0 signature in the message.
MQ_ERROR_FORMATNAME_BUFFER_TOO_SMALL (0xC00E001F)
Returned when the specified format name buffer is too small to contain the format name of the queue.
MQ_ERROR_GC_NEEDED (0xC00E008E)
Returned when an attempt is made to create an MSMQ Configuration (msmq) object with a predetermined GUID. By default, a Windows forest does not allow adding an AD DS object with a predetermined GUID.
MQ_ERROR_ILLEGAL_CONTEXT (0xC00E005B)
Returned when the lpwcsContext parameter of MQLocateBegin is not NULL.
MQ_ERROR_ILLEGAL_CURSOR_ACTION (0xC00E001C)
Returned when the dwAction parameter of MQReceiveMessage is set to MQ_ACTION_PEEK_NEXT and the cursor is currently at the end of the queue.
MQ_ERROR_ILLEGAL_ENTERPRISE_OPERATION (0xC00E0071)
Returned when an application attempts to create an MsmqServices object.
MQ_ERROR_ILLEGAL_FORMATNAME (0xC00E001E)
Returned when the specified format name is not valid.
MQ_ERROR_ILLEGAL_MQCOLUMNS (0xC00E0038)
Returned when pColumns is set to NULL.
MQ_ERROR_ILLEGAL_MQPRIVATEPROPS (0xC00E007B)
Returned when no properties are specified in the MQPRIVATEPROPS structure, or the pPrivateProps parameter of MQGetPrivateComputerInformation is set to NULL.
MQ_ERROR_ILLEGAL_MQQMPROPS (0xC00E0041)
Returned when no properties are specified in the MQQMPROPS structure, or the pQMprops parameter of MQGetMachineProperties is set to NULL.
MQ_ERROR_ILLEGAL_MQQUEUEPROPS (0xC00E003D)
Returned when no properties are specified in the MQQUEUEPROPS structure, or the pQueueProps parameter is set to NULL.
MQ_ERROR_ILLEGAL_OPERATION (0xC00E0064)
Returned when the requested operation is not supported on the foreign messaging system.
MQ_ERROR_ILLEGAL_PROPERTY_SIZE (0xC00E003B)
Returned when the specified buffer for the message identifier or correlation identifier does not have the correct size.
MQ_ERROR_ILLEGAL_PROPERTY_VALUE (0xC00E0018)
Returned when an illegal property value is specified in the MQPROPVARIANT array.
MQ_ERROR_ILLEGAL_PROPERTY_VT (0xC00E0019)
Returned when an illegal type indicator is specified in the vt field of the MQPROPVARIANT array.
MQ_ERROR_ILLEGAL_PROPID (0xC00E0039)
Returned when an invalid property identifier is specified in the property identifier array.
MQ_ERROR_ILLEGAL_QUEUE_PATHNAME (0xC00E0014)
Returned when an invalid Message Queuing path name is specified for the queue.
MQ_ERROR_ILLEGAL_RELATION (0xC00E003A)
Returned when an invalid relationship parameter is specified.
MQ_ERROR_ILLEGAL_RESTRICTION_PROPID (0xC00E003C)
Returned when an invalid property identifier is specified in MQRESTRICTION.
MQ_ERROR_ILLEGAL_SECURITY_DESCRIPTOR (0xC00E0021)
Returned when an invalid security descriptor is specified.
MQ_ERROR_ILLEGAL_SORT (0xC00E0010)
Returned when an illegal sort operation is specified in MQSORTSET.
MQ_ERROR_ILLEGAL_SORT_PROPID (0xC00E005C)
Returned when an invalid property identifier is specified in MQSORTSET.
MQ_ERROR_ILLEGAL_USER (0xC00E0011)
Returned when an invalid user is specified.
MQ_ERROR_INSUFFICIENT_PROPERTIES (0xC00E003F)
Returned when not all properties required for the operation were specified.
MQ_ERROR_INSUFFICIENT_RESOURCES (0xC00E0027)
Returned when there are insufficient resources (for example, not enough memory) to complete the operation.
When this error is returned the operation fails.
MQ_ERROR_INTERNAL_USER_CERT_EXIST (0xC00E002E)
Returned when the internal or external certificate specified is already registered in the directory service for the user.
MQ_ERROR_INVALID_CERTIFICATE (0xC00E002C)
Returned when the user certificate specified by PROPID_M_SENDER_CERT is invalid, or the certificate is not correctly placed in the Microsoft® Internet Explorer personal certificate store.
MQ_ERROR_INVALID_HANDLE (0xC00E0007)
Returned when the specified queue handle is invalid.
MQ_ERROR_INVALID_OWNER (0xC00E0044)
Returned when an invalid object owner is specified. For example, this code is returned when trying to create a queue on a computer where Message Queuing is not installed.
MQ_ERROR_INVALID_PARAMETER (0xC00E0006)
Returned when one of the IN parameters supplied by the operation is not valid.
MQ_ERROR_IO_TIMEOUT (0xC00E001B)
Returned when the MQReceiveMessage I/O time-out has expired.
MQ_ERROR_LABEL_TOO_LONG (0xC00E005D)
Returned when the specified message label is too long. The message label should be equal to or less than MQ_MAX_MSG_LABEL_LEN (250 Unicode characters).
MQ_ERROR_LABEL_BUFFER_TOO_SMALL (0xC00E005E)
Returned when the message label buffer supplied is too small for the label of the message received.
MQ_ERROR_MACHINE_EXISTS (0xC00E0040)
Returned when an MSMQ configuration (msmq) object already exists in AD DS for the specified computer name.
MQ_ERROR_MACHINE_NOT_FOUND (0xC00E000D)
Returned when the computer specified could not be found in the directory service.
MQ_ERROR_MESSAGE_ALREADY_RECEIVED (0xC00E001D)
Returned when some other cursor, application, or system administrator (using the directory service) has already removed the message from the queue.
MQ_ERROR_MESSAGE_NOT_FOUND (0xC00E0088)
Returned when the message does not exist or was removed from the queue.
MQ_ERROR_MESSAGE_STORAGE_FAILED (0xC00E002A)
Returned when a recoverable or journal message cannot be stored on the local computer.
MQ_ERROR_MISSING_CONNECTOR_TYPE (0xC00E0055)
Returned when the connector type property (PROPID_M_CONNECTOR_TYPE) is not specified and either a property typically generated by Message Queuing is specified by the application or the message is an application-encrypted message.
MQ_ERROR_MQIS_SERVER_EMPTY (0xC00E005F)
Returned when the list of MSMQ Information Store (MQIS) servers (in the registry) is empty.
MQ_ERROR_MULTI_SORT_KEYS (0xC00E008D)
Returned when multiple sort keys are specified in MQSORTSET in an AD DS environment.
MQ_ERROR_NO_DS (0xC00E0013)
Returned when the application cannot access the directory service.
When this error is returned, verify permissions for accessing the directory service.
MQ_ERROR_NO_INTERNAL_USER_CERT (0xC00E002F)
Returned when no internal certificate is registered or the registered certificate is corrupted.
MQ_ERROR_NO_MQUSER_OU (0xC00E0084)
Returned during migration when there is no MSMQ Users organizational unit object in AD DS for the domain. The container for the upgraded user object must be created manually.
MQ_ERROR_NO_RESPONSE_FROM_OBJECT_SERVER (0xC00E0049)
Returned when there is no response from the object owner.
When this error is returned the status of the operation is unknown.
MQ_ERROR_NOT_A_CORRECT_OBJECT_CLASS (0xC00E008C)
Returned when the object whose properties are being retrieved from AD DS does not belong to the class requested.
MQ_ERROR_NOT_SUPPORTED_BY_DEPENDENT_CLIENTS (0xC00E008A)
Returned when the requested operation is not supported for dependent clients.
MQ_ERROR_OBJECT_SERVER_NOT_AVAILABLE (0xC00E004A)
Returned when the object owner is not available.
When this error is returned the operation fails.
MQ_ERROR_OPERATION_CANCELLED (0xC00E0008)
Returned when an operation is canceled before it could be completed. For example, during a message receiving operation, the queue handle is closed by another thread before a new message arrives.
MQ_ERROR_PROPERTIES_CONFLICT (0xC00E0087)
Returned when both PROPID_M_RESP_QUEUE and PROPID_M_RESP_FORMAT_NAME or the equivalent COM object properties are set in the message.
MQ_ERROR_PROPERTY (0xC00E0002)
Returned when one or more of the property identifiers specified is invalid.
MQ_ERROR_PROPERTY_NOTALLOWED (0xC00E003E)
Returned when a specified property is not valid for the operation requested (for example, specifying PROPID_Q_INSTANCE when setting queue properties).
MQ_ERROR_PROV_NAME_BUFFER_TOO_SMALL (0xC00E0063)
Returned when the provider name buffer is too small for the cryptographic service provider name returned.
MQ_ERROR_PUBLIC_KEY_DOES_NOT_EXIST (0xC00E007A)
(Introduced in MSMQ 2.0.) Returned when you attempt to retrieve any encryption key property and the key is not registered in the directory service. Message Queuing was able to successfully query the directory service, but the enhanced key was not found.
MQ_ERROR_PUBLIC_KEY_NOT_FOUND (0xC00E0079)
(Introduced in MSMQ 2.0.) Returned when you attempt to retrieve PROPID_QM_ENCRYPTION_PK_ENHANCED and Message Queuing is operating in a mode that supports only 40-bit encryption. For example, you are trying to retrieve the computer properties of a computer running MSMQ 1.0.
MQ_ERROR_Q_ADS_PROPERTY_NOT_SUPPORTED (0xC00E0091)
Returned when PROPID_Q_ADS_PATH is specified in the pColumns parameter of MQLocateBegin. You cannot retrieve the ADs path of a queue in a query.
MQ_ERROR_Q_DNS_PROPERTY_NOT_SUPPORTED (0xC00E006E)
Returned when PROPID_Q_PATHNAME_DNS is specified in the pColumns parameter of MQLocateBegin. You cannot retrieve the DNS path name of a queue in a query.
MQ_ERROR_QUEUE_DELETED (0xC00E005A)
Returned when the queue is deleted before the message could be read.
The specified queue handle is no longer valid, and the queue must be closed.
MQ_ERROR_QUEUE_EXISTS (0xC00E0005)
Returned when a queue with the identical Message Queuing path name is already registered.
Public queues are registered in the directory service. Private queues are registered in the local computer.
MQ_ERROR_QUEUE_NOT_ACTIVE (0xC00E0004)
Returned when the queue is not open or does not exist, for example, in an attempt to restart the transmission of messages from an outgoing queue.
MQ_ERROR_QUEUE_NOT_AVAILABLE (0xC00E004B)
Returned when an error occurs while trying to read a message from a queue residing on a remote computer.
MQ_ERROR_QUEUE_NOT_FOUND (0xC00E0003)
Returned when Message Queuing cannot find the queue. Such queues include public queues not registered in the directory service and Internet queues that do not exist in the MSMQ namespace. This error is also returned when the user does not have sufficient permissions to perform the operation.
MQ_ERROR_REMOTE_MACHINE_NOT_AVAILABLE (0xC00E0069)
Returned when opening a queue for reading messages on a remote computer that is not available or when creating a cursor for a queue on a remote computer that is not available.
MQ_ERROR_RESULT_BUFFER_TOO_SMALL (0xC00E0046)
Returned when the buffer supplied for the result is too small.
MQLocateNext could not return at least one complete query result.
MQ_ERROR_SECURITY_DESCRIPTOR_TOO_SMALL (0xC00E0023)
Returned when the buffer passed to MQGetQueueSecurity is too small for the security descriptor.
MQ_ERROR_SENDER_CERT_BUFFER_TOO_SMALL (0xC00E002B)
Returned when the sender certificate buffer supplied is too small for the certificate retrieved.
MQ_ERROR_SENDERID_BUFFER_TOO_SMALL (0xC00E0022)
Returned when the sender identifier buffer supplied is too small.
MQ_ERROR_SERVICE_NOT_AVAILABLE (0xC00E000B)
Returned when the application is unable to connect to the queue manager.
MQ_ERROR_SIGNATURE_BUFFER_TOO_SMALL (0xC00E0062)
Returned when the digital signature buffer is too small for the digital signature retrieved.
MQ_ERROR_SHARING_VIOLATION (0xC00E0009)
Returned when the application is trying to open an already opened queue for exclusive reading, or the application is trying to open a queue that is already opened and does not allow sharing.
MQ_ERROR_STALE_HANDLE (0xC00E0056)
Returned when the specified handle was obtained in a previous session of the Message Queuing service.
MQ_ERROR_SYMM_KEY_BUFFER_TOO_SMALL (0xC00E0061)
Returned when the symmetric key buffer is too small for the symmetric key retrieved.
MQ_ERROR_TRANSACTION_ENLIST (0xC00E0058)
Returned when Message Queuing cannot enlist in the specified transaction.
MQ_ERROR_TRANSACTION_IMPORT (0xC00E004E)
Returned when Message Queuing cannot import the specified transaction.
MQ_ERROR_TRANSACTION_SEQUENCE (0xC00E0051)
Returned when the transaction operation sequence is incorrect.
MQ_ERROR_TRANSACTION_USAGE (0xC00E0050)
While reading messages, one of the following actions was attempted within the context of a transaction:
An attempt was made to open a remote queue for read access.
An attempt was made to read a message from a nontransactional queue.
An attempt was made to read a message using a callback or overlap function.
For sending messages in MSMQ 1.0 and MSMQ 2.0, either the message is sent as part of a transaction and the destination queue is a nontransactional queue, or the message is not sent as part of a transaction and the destination queue is a transactional queue. In MSMQ 3.0, the loss of transactional messages sent to nontransactional queues or of nontransactional messages sent to transactional queues is reported in negative acknowledgment messages.
MQ_ERROR_WRITE_NOT_ALLOWED (0xC00E0065)
Returned when a write operation is attempted to the MSMQ Information Store (MQIS) and write operations are not allowed because a new MQIS server is being installed.
MQ_ERROR_UNINITIALIZED_OBJECT (0xC00E0094)
Returned when an attempt is made to use an MSMQManagement object before it is initialized.
MQ_ERROR_UNSUPPORTED_ACCESS_MODE (0xC00E0045)
Returned when the access mode specified when opening the queue is set to an invalid value, or the access mode and the share mode specified are not compatible.
MQ_ERROR_UNSUPPORTED_CLASS (0xC00E0093)
Returned when information is requested for an AD DS object that is not an instance of a supported class.
MQ_ERROR_UNSUPPORTED_FORMATNAME_OPERATION (0xC00E0020)
Returned when the requested operation is not supported for the specified format name (for example, trying to delete a queue using a direct format name).
MQ_ERROR_UNSUPPORTED_OPERATION (0xC00E006A)
Returned when a computer operating in workgroup mode attempts to perform an operation that is not supported in workgroup mode.
MQ_ERROR_USER_BUFFER_TOO_SMALL (0xC00E0028)
Returned when the buffer supplied is too small to hold the user information returned.
MQ_ERROR_WKS_CANT_SERVE_CLIENT (0xC00E0066)
Returned when an RPC request is sent to an independent client to perform an operation for a dependent client. A Message Queuing server is required.
MQ_INFORMATION_DUPLICATE_PROPERTY (0x400E0005)
Returned when a property specified has already been specified in the property identifier array.
When duplicate settings are found, the first entry is used, and subsequent settings are ignored.
MQ_INFORMATION_FORMATNAME_BUFFER_TOO_SMALL (0x400E0009)
Returned when the format name buffer supplied to MQCreateQueue is too small.
MQ_INFORMATION_OWNER_IGNORED (0x400E000B)
Returned when the queue owner is not set in the SECURITY_INFORMATION structure during the processing of a call to MQSetQueueSecurity.
MQ_INFORMATION_PROPERTY (0x400E0001)
Returned when one or more of the properties passed resulted in a warning, but the operation completed.
MQ_INFORMATION_PROPERTY_IGNORED (0x400E0003)
Returned when a specified property is not valid for the operation being performed (for example, PROPID_M_SENDERID is not valid when it is specified in the message properties structure passed to MQSendMessage; this property is set by Message Queuing when sending messages).
MQ_INFORMATION_UNSUPPORTED_PROPERTY (0x400E0004)
Returned when a specified property is not supported by the operation being performed. The property is ignored.
|
https://msdn.microsoft.com/en-us/library/ms700106.aspx
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
What is the most concise and efficient way to find out if a JavaScript array contains an obj?
This is the only way I know to do it:
function contains(a, obj) { for (var i = 0; i < a.length; i++) { if (a[i] === obj) { return true; } } return false; }
Is there a better and more concise way to accomplish this?
This is very closely related to Stack Overflow question Best way to find an item in a JavaScript Array? which addresses finding objects in an array using
indexOf.
If you are using JavaScript 1.6 or later (Firefox 1.5 or later) you can use Array.indexOf. Otherwise, I think you are going to end up with something similar to your original code.
Update: As @orip mentions in comments, the linked benchmark was done in 2008, so results may not be relevant for modern browsers. However, you probably need this to support non-modern browsers anyway and they probably haven't been updated since. Always test for yourself.
As others have said, the iteration through the array is probably the best way, but it has been proven that a decreasing while loop is the fastest way to iterate in JavaScript.
Here's a Javascript 1.6 compatible implementation of Array.indexOf:
if (!Array.indexOf) {
  Array.indexOf = [].indexOf ?
    function (arr, obj, from) { return arr.indexOf(obj, from); } :
    function (arr, obj, from) { // (for IE6)
      var l = arr.length,
          i = from ? parseInt((1 * from) + (from < 0 ? l : 0), 10) : 0;
      i = i < 0 ? 0 : i;
      for (; i < l; i++) {
        if (i in arr && arr[i] === obj) { return i; }
      }
      return -1;
    };
}
Extending the JavaScript
Array object is a really bad idea because you introduce new properties (your custom methods) into
for-in loops which can break existing scripts. A few years ago the authors of the Prototype library had to re-engineer their library implementation to remove just this kind of thing.
If you don't need to worry about compatibility with other JavaScript running on your page, go for it, otherwise, I'd recommend the more awkward, but safer free-standing function solution.
Here's how Prototype does it:
/** * Array#indexOf(item[, offset = 0]) -> Number * - item (?): A value that may or may not be in the array. * - offset (Number): The number of initial items to skip before beginning the * search. * * Returns the position of the first occurrence of `item` within the array — or * `-1` if `item` doesn't exist in the array. **/ function indexOf(item, i) { i || (i = 0); var length = this.length; if (i < 0) i = length + i; for (; i < length; i++) if (this[i] === item) return i; return -1; }
Also see here for how they hook it up.
Modern browsers have
Array#indexOf, which does exactly that; this is in the new(ish) ECMAScript 5th edition specification, but it has been in several browsers for years. Older browsers can be supported using the code listed in the "compatibility" section at the bottom of that page.
jQuery has a utility function for this:
$.inArray(value, array)
It returns the index of a value in an array. It returns -1 if the array does not contain the value.
jQuery has several useful utility functions.
An excellent JavaScript utility library is underscore.js:
_.contains(list, value), alias
_.include(list, value) (underscore's contains/include uses indexOf internally if passed a JavaScript array).
Some other frameworks:
dojo.indexOf(array, value, [fromIndex, findLast]) documentation. Dojo has a lot of utility functions, see.
array.indexOf(value) documentation
array.indexOf(value) documentation
findValue(array, value) documentation
array.indexOf(value) documentation
Ext.Array.indexOf(array, value, [from]) documentation
Notice how some frameworks implement this as a function. While other frameworks add the function to the array prototype.
In CoffeeScript, the
in operator is the equivalent of
contains:
a = [1, 2, 3, 4] alert(2 in a)
var mylist = [1, 2, 3]; assert(mylist.contains(1)); assert(mylist.indexOf(1) == 0);
Thinking out of the box for a second, if you are making this call many, many times, it is more efficient to use an associative array to do lookups using a hash function.
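For example (a minimal illustrative sketch, not from the original answer; the variable names are arbitrary), you can build the lookup object once and then test membership with a single property access:
// Build the lookup table once; the array values become property keys.
var fruits = ['apple', 'banana', 'cherry'];
var lookup = {};
for (var i = 0; i < fruits.length; i++) {
  lookup[fruits[i]] = true;
}

// Each subsequent membership test is an O(1) property lookup.
console.log(lookup['banana'] === true); // true
console.log(lookup['mango'] === true);  // false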
Just another option
// usage: if ( ['a','b','c','d'].contains('b') ) { ... }
Array.prototype.contains = function(value){
  for (var key in this)
    if (this[key] === value) return true;
  return false;
}
If you are checking repeatedly for existence of an object in an array you should maybe look into
contains(a, obj).
Literally:
(using Firefox v3.6, with
for-in caveats as previously noted (HOWEVER the use below might endorse
for-in for this very purpose! That is, enumerating array elements that ACTUALLY exist via a property index (HOWEVER, in particular, the array
length property is NOT enumerated in the
for-in property list!).).)
(Drag & drop the following complete URI's for immediate mode browser testing.)
javascript: function ObjInRA(ra){var has=false; for(i in ra){has=true; break;} return has;} function check(ra){ return ['There is ',ObjInRA(ra)?'an':'NO',' object in [',ra,'].'].join('') } alert([ check([{}]), check([]), check([,2,3]), check(['']), '\t (a null string)', check([,,,]) ].join('\n'));
which displays:
There is an object in [[object Object]]. There is NO object in []. There is an object in [,2,3]. There is an object in []. (a null string) There is NO object in [,,].
Wrinkles: if looking for a "specific" object consider:
javascript: alert({}!={}); alert({}!=={});
and thus:
javascript: obj={prop:"value"}; ra1=[obj]; ra2=[{prop:"value"}]; alert(ra1[0]==obj); alert(ra2[0]==obj);
Often
ra2 is considered to "contain"
obj as the literal entity
{prop:"value"}.
A very coarse, rudimentary, naive (as in code needs qualification enhancing) solution:
javascript: obj={prop:"value"}; ra2=[{prop:"value"}]; alert( ra2 . toSource() . indexOf( obj.toSource().match(/^.(.*).$/)[1] ) != -1 ? 'found' : 'missing' );
See ref: Searching for objects in JavaScript arrays.
Hmmm. what about
Array.prototype.contains = function(x){
  var retVal = -1;
  // x is a primitive type
  if (["string","number"].indexOf(typeof x) >= 0) { retVal = this.indexOf(x); }
  // x is a function
  else if (typeof x == "function")
    for (var ix in this) {
      if ((this[ix] + "") == (x + "")) retVal = ix;
    }
  // x is an object...
  else {
    var sx = JSON.stringify(x);
    for (var ix in this) {
      if (typeof this[ix] == "object" && JSON.stringify(this[ix]) == sx) retVal = ix;
    }
  }
  // Return false if -1, else number if numeric, otherwise string
  return (retVal === -1) ? false : (isNaN(+retVal) ? retVal : +retVal);
}
I know it's not the best way to go, but since there is no native IComparable way to interact between objects, I guess this is as close as you can get to compare two entities in an array. Also, extending Array object might not be a wise thing to do sometimes it's ok (if you are aware of it and the trade-off)
b is the value, a is the array
It returns true or false
function(a,b){return!!~a.indexOf(b)}
While
array.indexOf(x)!=-1 is the most concise way to do this (and has been supported by non-IE browsers for over decade...), it is not O(1), but rather O(N), which is terrible. If your array will not be changing, you can convert your array to a hashtable, then do
table[x]!==undefined or
===undefined:
Array.prototype.toTable = function() { var t = {}; this.forEach(function(x){t[x]=true}); return t; }
Demo:
var toRemove = [2,4].toTable(); [1,2,3,4,5].filter(function(x){return toRemove[x]===undefined})
(Unfortunately, while you can create an Array.prototype.contains to "freeze" an array and store a hashtable in this._cache in two lines, this would give wrong results if you chose to edit your array later. Javascript has insufficient hooks to let you keep this state, unlike python for example.)
function inArray(elem,array) { var len = array.length; for(var i = 0 ; i < len;i++) { if(array[i] == elem){return i;} } return -1; }
Returns array index if found, or -1 if not found
Similar thing: Finds the first element by a "search lambda":
Array.prototype.find = function(search_lambda) { return this[this.map(search_lambda).indexOf(true)]; };
Usage:
[1,3,4,5,8,3,5].find(function(item) { return item % 2 == 0 }) => 4
Same in coffeescript:
Array.prototype.find = (search_lambda) -> @[@map(search_lambda).indexOf(true)]
As others have mentioned you can use
Array.indexOf, but it isn't available in all browsers. Here's the code from to make it work the same in older browsers.
indexOf is a recent addition to the ECMA-262 standard; as such it may not be present in all browsers. You can work around this by inserting the following code at the beginning of your scripts, allowing use of indexOf in implementations which do not natively support it. This algorithm is exactly the one specified in ECMA-262, 5th edition, assuming Object, TypeError, Number, Math.floor, Math.abs, and Math.max have their original value.
if (!Array.prototype.indexOf) { Array.prototype.indexOf = function (searchElement /*, fromIndex */ ) { "use strict";; } }
I looked through submitted answers and got that they only apply if you search for the object via reference. A simple linear search with reference object comparison.
But lets say you don't have the reference to an object, how will you find the correct object in the array? You will have to go linearly and deep compare with each object. Imagine if the list is too large, and the objects in it are very big containing big pieces of text. The performance drops drastically with the number and size of the elements in the array.
You can stringify objects and put them in the native hash table, but then you will have data redundancy remembering these keys cause JavaScript keeps them for 'for i in obj', and you only want to check if the object exists or not, that is, you have the key.
I thought about this for some time constructing a JSON Schema validator, and I devised a simple wrapper for the native hash table, similar to the sole hash table implementation, with some optimization exceptions which I left to the native hash table to deal with. It only needs performance benchmarking... All the details and code can be found on my blog: I will soon post benchmark results.
The complete solution works like this:
var a = {'a':1, 'b':{'c':[1,2,[3,45],4,5], 'd':{'q':1, 'b':{'q':1, 'b':8},'c':4}, 'u':'lol'}, 'e':2};
var b = {'a':1, 'b':{'c':[2,3,[1]], 'd':{'q':3,'b':{'b':3}}}, 'e':2};
My little contribution:
function isInArray(array, search) {
  return array.indexOf(search) >= 0;
}

// usage
if (isInArray(my_array, "my_value")) {
  // ...
}
var myArray = ['yellow', 'orange', 'red'] ; alert(!!~myArray.indexOf('red')); //true
To know exactly what the
tilde
~ do at this point refer to this question What does a tilde do when it precedes an expression?
Ecmascript 6 has an elegant proposal on find.
The find method executes the callback function once for each element present in the array until it finds one where callback returns a true value. If such an element is found, find immediately returns the value of that element. Otherwise, find returns undefined. callback is invoked only for indexes of the array which have assigned values; it is not invoked for indexes which have been deleted or which have never been assigned values.
Here is the MDN documentation on that.
The find functionality works like this.
function isPrime(element, index, array) {
  var start = 2;
  while (start <= Math.sqrt(element)) {
    if (element % start++ < 1) return false;
  }
  return (element > 1);
}

console.log( [4, 6, 8, 12].find(isPrime) ); // undefined, not found
console.log( [4, 5, 8, 12].find(isPrime) ); // 5
You can use this in ES5 and below by defining the function.
if (!Array.prototype.find) {
  Object.defineProperty(Array.prototype, 'find', {
    enumerable: false,
    configurable: true,
    writable: true,
    value: function(predicate) {
      var list = Object(this);
      var length = list.length >>> 0;
      var thisArg = arguments[1];
      var value;
      for (var i = 0; i < length; i++) {
        if (i in list) {
          value = list[i];
          if (predicate.call(thisArg, value, i, list)) {
            return value;
          }
        }
      }
      return undefined;
    }
  });
}
I use the following:
Array.prototype.contains = function (v) {
  return this.indexOf(v) > -1;
}

var a = [ 'foo', 'bar' ];
a.contains('foo'); // true
a.contains('fox'); // false
You can use Array.prototype.some()
var items = [
  {a: '1'},
  {a: '2'},
  {a: '3'}
]

items.some(function(item) { return item.a === '3' }) // returns true
items.some(function(item) { return item.a === '4' }) // returns false
One thing to note is that
some() is not present in all js versions: (from the website)
some was added to the ECMA-262 standard in the 5th edition; as such it may not be present in all implementations of the standard
You can add it in case it's not there:
if (!Array.prototype.some) {
  Array.prototype.some = function (fun /*, thisArg */) {
    var t = Object(this), len = t.length >>> 0;
    var thisArg = arguments.length >= 2 ? arguments[1] : void 0;
    for (var i = 0; i < len; i++) {
      if (i in t && fun.call(thisArg, t[i], i, t)) {
        return true;
      }
    }
    return false;
  };
}
We use this snippet (works with objects, arrays, strings):
/* * @function * @name Object.prototype.inArray * @description Extend Object prototype within inArray function * * @param {mix} needle - Search-able needle * @param {bool} searchInKey - Search needle in keys? * */ Object.defineProperty(Object.prototype, 'inArray',{ value: function(needle, searchInKey){ var object = this; if( Object.prototype.toString.call(needle) === '[object Object]' || Object.prototype.toString.call(needle) === '[object Array]'){ needle = JSON.stringify(needle); } return Object.keys(object).some(function(key){ var value = object[key]; if( Object.prototype.toString.call(value) === '[object Object]' || Object.prototype.toString.call(value) === '[object Array]'){ value = JSON.stringify(value); } if(searchInKey){ if(value === needle || key === needle){ return true; } }else{ if(value === needle){ return true; } } }); }, writable: true, configurable: true, enumerable: false });
Usage:
var a = {one: "first", two: "second", foo: {three: "third"}};
a.inArray("first");          //true
a.inArray("foo");            //false
a.inArray("foo", true);      //true - search by keys
a.inArray({three: "third"}); //true

var b = ["one", "two", "three", "four", {foo: 'val'}];
b.inArray("one");          //true
b.inArray('foo');          //false
b.inArray({foo: 'val'})    //true
b.inArray("{foo: 'val'}")  //false

var c = "String";
c.inArray("S");        //true
c.inArray("s");        //false
c.inArray("2", true);  //true
c.inArray("20", true); //false
function contains(a, obj) { return a.some(function(element){return element == obj;}) }
Array.prototype.some() was added to the ECMA-262 standard in the 5th edition
EcmaScript 7 introduces
Array.prototype.includes.
It can be used like this:
[1, 2, 3].includes(2); // true
[1, 2, 3].includes(4); // false
It also accepts an optional second argument
fromIndex:
[1, 2, 3].includes(3, 3);  // false
[1, 2, 3].includes(3, -1); // true
Unlike
indexOf, which uses Strict Equality Comparison,
includes compares using the SameValueZero equality algorithm. That means you can detect whether an array includes
NaN:
[1, 2, NaN].includes(NaN); // true
It can be polyfilled to make it work on all browsers.
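As a rough idea of what such a polyfill does (a simplified sketch, not the official polyfill, and it skips some edge cases from the spec):
if (!Array.prototype.includes) {
  Array.prototype.includes = function (searchElement, fromIndex) {
    var o = Object(this);
    var len = o.length >>> 0;
    var k = fromIndex | 0;
    if (k < 0) k = Math.max(len + k, 0); // support a negative start index
    // SameValueZero: strict equality, except that NaN matches NaN.
    function sameValueZero(x, y) {
      return x === y ||
        (typeof x === 'number' && typeof y === 'number' && x !== x && y !== y);
    }
    while (k < len) {
      if (sameValueZero(o[k], searchElement)) return true;
      k++;
    }
    return false;
  };
}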
one liner:
function contains(arr, x) { return arr.filter(function(elem) { return elem == x }).length > 0; }
A hopefully faster Bidirectional
indexOf /
lastIndexOf alternative
While the new method includes is very nice, the support is basically zero for now.
For a long time I was thinking of a way to replace the slow indexOf/lastIndexOf functions.
A performant way has already been found by looking at the top answers. From those I chose the
contains function posted by @Damir Zekić, which should be the fastest one. But it also states that the benchmarks are from 2008 and so are outdated.
I also prefer while over for, but for no specific reason I ended up writing the function with a for loop. It could also be done with a while loop.
I was curious whether the iteration would be much slower if I checked both sides of the array while doing it. Apparently not, and so this function is around 2x faster than the top-voted ones. Obviously, it's also faster than the native one. This is in a real-world environment, where you never know if the value you are searching for is at the beginning or at the end of the array.
When you know you just pushed a value onto an array, using lastIndexOf is probably still the best solution, but if you have to travel through big arrays and the result could be anywhere, this could be a solid solution to make things faster.
Bidirectional indexOf/lastIndexOf
function bidirectionalIndexOf(a, b, c, d, e){
  for(c = a.length, d = c*1; c--;){
    if(a[c] == b) return c;          // or a[c] === b
    if(a[e = d-1-c] == b) return e;  // or a[e = d-1-c] === b
  }
  return -1
}

// usage
bidirectionalIndexOf(array, 'value');
Performance test
As a test I created an array with 100k entries.
Three queries: at the beginning, in the middle & at the end of the array.
I hope you also find this interesting and test the performance.
Note: as you can see I slightly modified the contains function to reflect the indexOf & lastIndexOf output (so basically true with the index and false with -1). That shouldn't harm it.
The array prototype variant
Object.defineProperty(Array.prototype, 'bidirectionalIndexOf', {
  value: function(b, c, d, e){
    for(c = this.length, d = c*1; c--;){
      if(this[c] == b) return c;          // or this[c] === b
      if(this[e = d-1-c] == b) return e;  // or this[e = d-1-c] === b
    }
    return -1
  },
  writable: false,
  enumerable: false
});

// usage
array.bidirectionalIndexOf('value');
The function can also be easily modified to return true or false, or even the object, string, or whatever it is.
If you have any questions, just ask.
EDIT
And here is the
while variant.
function bidirectionalIndexOf(a, b, c, d){
  c = a.length; d = c - 1;
  while(c--){
    if(b === a[c]) return c;
    if(b === a[d - c]) return d - c;
  }
  return c
}

// usage
bidirectionalIndexOf(array, 'value');
How is this possible?
I think that the simple calculation to get the reflected index in an array is so simple that it's 2 times faster than doing an actual loop iteration.
Here is a complex example doing 3 checks per iteration, but this is only possible with a longer calculation which causes the slow down of the code.
This might work for you: $.inArray(obj, a)
Use lodash's some function.
It's concise, accurate and has great cross platform support.
The accepted answer does not even meet the requirements.
Requirements: Recommend most concise and efficient way to find out if a JavaScript array contains an object.
Accepted Answer:
$.inArray({'b': 2}, [{'a': 1}, {'b': 2}]) > -1
My recommendation:
_.some([{'a': 1}, {'b': 2}], {'b': 2}) > true
Notes:
$.inArray works fine for determining whether a scalar value exists in an array of scalars...
$.inArray(2, [1,2]) > 1
... but the question clearly asks for an efficient way to determine if an object is contained in an array.
In order to handle both scalars and objects, you could do this:
(_.isObject(item)) ? _.some(ary, item) : (_.indexOf(ary, item) > -1)
EDIT
One can use Set(), which has the method has():
function contains(arr, obj) { var proxy = new Set(arr); if (proxy.has(obj)) return true; else return false; } var arr = ['Happy', 'New', 'Year']; console.log(contains(arr, 'Happy'));
By no means the best, but just getting creative and adding to the repertoire. Do not use this:
Object.defineProperty(Array.prototype, 'exists', {
  value: function(element, index) {
    var index = index || 0
    return index === this.length ? -1 :
           this[index] === element ? index :
           this.exists(element, ++index)
  }
})

// outputs 1
console.log(['one', 'two'].exists('two'));
// outputs -1
console.log(['one', 'two'].exists('three'));
console.log(['one', 'two', 'three', 'four'].exists('four'));
you can also use that trick :
var arrayContains = function(object) { return (serverList.filter(function(currentObject) { if (currentObject === object) { return currentObject } else { return false; } }).length > 0) ? true : false }
Solution that works in all modern browsers:
const contains = (arr, obj) => {
  // Cache our object so we don't call `JSON.stringify` on every iteration
  const stringifiedObj = JSON.stringify(obj);
  return arr.some(item => JSON.stringify(item) === stringifiedObj);
}
Usage:
contains([{a: 1}, {a: 2}], {a: 1}); // true
IE6+ solution:
function contains(arr, obj) {
  var stringifiedObj = JSON.stringify(obj);
  return arr.some(function (item) {
    return JSON.stringify(item) === stringifiedObj;
  });
}

// .some polyfill, not needed for IE9+
if (!('some' in Array.prototype)) {
  Array.prototype.some = function (tester, that /*opt*/) {
    for (var i = 0, n = this.length; i < n; i++) {
      if (i in this && tester.call(that, this[i], i, this)) return true;
    }
    return false;
  };
}
Usage:
contains([{a: 1}, {a: 2}], {a: 1}); // true
Why JSON.stringify?
Array.indexOf and
Array.includes (as well as most of the answers here) only compare by reference and not by value.
[{a: 1}, {a: 2}].includes({a: 1}); // false, because {a: 1} is a new object
Non-optimized ES6 one-liner:
[{a: 1}, {a: 2}].some(item => JSON.stringify(item) === JSON.stringify({a: 1})); // true
Note: Comparing objects by value will work better if the keys are in the same order, so to be safe you might sort the keys first with a package like this one:
Updated the
contains function with a perf optimization. Thanks itinance for pointing it out.
OK, you can just optimise your code to get the result. There are many ways to do this which are cleaner, but I just wanted to take your pattern and apply it, so simply do something like this:
function contains(a, obj) { for (var i = 0; i < a.length; i++) { if (JSON.stringify(a[i]) === JSON.stringify(obj)) { return true; } } return false; }
Or this solution:
Array.prototype.includes = function (object) { return !!+~this.indexOf(object); };
Using indexOf() is a good solution, but you should hide the embedded indexOf() implementation, which returns -1, with the ~ operator:
function include(arr,obj) { return !!(~arr.indexOf(obj)); }
With ECMA 6 you can use Array.find(FunctionName) where FunctionName is a user defined function to search for the object in the array.
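For instance (an illustrative snippet of my own, not from the original answer):
const items = [{a: 1}, {a: 2}, {a: 3}];
const found = items.find(item => item.a === 2); // {a: 2}
console.log(found !== undefined);               // true, so the array "contains" it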
Hope this helps!
I was working on a project where I needed functionality like Python's
set, which removes all duplicate values and returns a new list, so I wrote this function; maybe it will be useful to someone
function set(arr) { var res = []; for (var i = 0; i < arr.length; i++) { if (res.indexOf(arr[i]) === -1) { res.push(arr[i]); } } return res; }
|
http://ebanshi.cc/questions/74/array-containsobj-in-javascript
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
hi,
can I make the animation appear randomly but with a time limit?
example:
if the random number allows the animation to appear, the animation must stay 3 seconds then disappear, even if the random number is changed before the 3 seconds are finished.
also when the random number makes the animation disappear, the animation must stay hidden 3 to 6 seconds randomly, but it must stay hidden at least 3 seconds.
thank you
what exactly do you want?
exactly, I want to make a game where the character can score points by eating things.
- I want those things to appear and disappear randomly.
- they must stay 3 seconds then disappear.
- and when they disappear I want them to stay hidden between 2 and 4 seconds, then appear again.
thank you
use tweenlite ().
if your tweenlite class is in com/greensock/, use:
import com.greensock.TweenLite;
// for each movieclip (mc) you want to appear between minTime and maxTime seconds, call randomizeF(mc,minTime,maxTime);
function randomizeF(mc:MovieClip,minTime:Number,maxTime:Number):Void{
TweenLite.to(mc,.5,{autoAlpha:100,delay:minTime+(maxTime-minTime)*Math.random(),onComplete :fadeoutF,onCompleteParams:[mc]});
}
function fadeoutF(mc:MovieClip):Void{
TweenLite.to(mc,.5,{autoAlpha:0,delay:3,onComplete:randomizeF,onCompleteParams:[mc]});
}
is that AS 2?
|
https://forums.adobe.com/thread/905091
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Use
eclipselink.logging.session to specify whether EclipseLink should include the session identifier in each log message.
Values
Table 5-58 describes this persistence property's values.
Usage
This setting is applicable to messages that require a database connection such as SQL and the transaction information to determine on which underlying session (if any) the message was sent.
Examples
Example 5-55 shows how to use this property in the
persistence.xml file.
Example 5-55 Using logging.session in persistence.xml file
<property name="eclipselink.logging.session" value="false" />
Example 5-56 shows how to use this property in a property map.
Example 5-56 Using logging.session in a Property Map
import org.eclipse.persistence.config.PersistenceUnitProperties;

propertiesMap.put(PersistenceUnitProperties.LOGGING_SESSION, "false");
See Also
For more information, see:
"Configuring WebLogic Server to Expose EclipseLink Logging" in Solutions Guide for EclispeLink
Logging Examples
|
http://www.eclipse.org/eclipselink/documentation/2.4/jpa/extensions/p_logging_session.htm
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Introduction: Your Own Color Sensor Using LEDs
Did you know that you can make a "cheap" but effective color sensor using some basic components?
This super-easy instructable will guide you to make your own color sensor using a bunch of LEDs and an LDR.
I've managed to make a well arranged, compact , enhanced and relatively thin sensor ( PCB Version ).
This instructable covers two types of sensors that I've made: the first is the perf-board version and the second is the enhanced SMD version ( 2cm x 2cm x 0.5cm ).
The Perf-board version is ultra-easy to make but if you want the SMD version , you'd require SMD soldering skills ( to solder SOT-23 package transistors ).
Note : In the video, you'll see a board ( Arduino Shield ) that I'm using. That shield is a custom made shield for my upcoming RGB lamp. The process to build it and other details will be covered in the upcoming Instructable. The shield basically uses a CD4051 multiplexer to minimize the PWM pins required.
Step 1: Introduction and Part List
These sensors that we're about to build can be used as a substitute for the TCS3200 sensor. Though the TCS3200 is much more accurate than these sensors, it is also quite a bit more expensive ( ~ 7-10 $ ) than what we're going to make ( max 2 $ ).
SMD version : The SMD version Requires :
1. 2x Red SMD ( 1206 ) - Sparkfun
2. 2x Green SMD ( 1206 ) - Sparkfun
3. 2x Blue SMD ( 1206 ) - Sparkfun
4. 2x White SMD ( 1206 ) - Sparkfun
5. 3cm x 3cm Double sided PCB ( fiberglass epoxy )
6. 1x LDR ( photocell ) - Sparkfun
7. 3x BC847 ( SOT -23 -3 ) - Mouser
8. 1x 10k ( 1206 SMD ) resistor - Sparkfun
9. A 7 pin Ribbon cable
10. Male Headers
Perfboard Version :
1. 1x RGB LED ( SMD / Through hole / piranha ) - SMD ( sparkfun ) , Through Hole ( Sparkfun )
2. 1x LDR ( Photocell ) - Sparkfun
3. 1x 10K resistor
4. 1x 330 ohm Resistor
5. Perfboard - 3cm x 2cm
6. 6-pin Ribbon cable
7. Male Headers
Step 2: Start Building the Board ( Perf-board )
Start making it by soldering the RGB LED on the board
Then Solder the LDR. Finally when the board is soldered with LDR and LED , cut it along the edges of LDR and LED ( leave some space for Resistors and Headers ).
After cutting the board , solder the headers along the pins of LED ( optional ).
Solder a 330 ohm / 220 ohm resistor from the common pin of LED to the ground bus. Then solder the SMD 10K resistor between one pin of the LDR and Vcc ( +5V ). The other pin of the LDR goes into Ground Bus.
Solder the Ribbon cable to the board :
Red : LED's Red pin
Green : LED's Green pin
Blue : LED's blue pin
Orange : Vcc ( +5V ) bus
Brown : Ground
Yellow : Analog - on the junction of 10K resistor and the LDR
Finally solder the other end of the ribbon cable to Headers
Red - Green - Blue - Vcc - Gnd - Analog
Finish off the board by capping the LDR using a piece of 4mm Heat shrink ( black ) tube.
Step 3: The SMD Version I - Making the Board
The SMD sensor is quite difficult to make as compared to the perf-board version.
Print the given layout ( .brd ) on glossy ( magazine ) paper. Cut the layout along the edges. Place one of the layouts on the double-sided copper clad board. Mark the edges on the board and cut along them either with a hack-saw or a Dremel ( the edges are quite uneven because I've used a hack-saw ).
After you have a properly sized board, scrub it with a steel scrub and wipe it clean. Pre-heat the board using an iron and carefully place the layouts on either side of the board ( be careful about the orientation ).
#Tip : To match the layouts accurately , first drill two ( or more ) reference holes in the board and then align the boards properly with the holes.
After You have the layouts properly aligned , Heat them with an iron without disturbing their position.
#Tip : While cutting the board , leave some extra space for pasting the layouts on the board using superglue ( be careful not to apply superglue on the tracks or any place within the boundaries of the board ).
Heat the board evenly for 5 ~ 6 minutes . Avoid overheating. After 5 minutes , place the board in water and let the paper soak for about 10 minutes. Now gently peel off the excess paper from the board under a stream of water. Rub the board gently with your fingers to remove any more paper from the board.
Dry the board and make sure that the layout has been transferred correctly. Make corrections with a thin tip permanent marker. After making sure that the layout is correct , place the board in a shallow plastic container.
Warning : This process should be strictly carried out in a well ventilated area and while etching the board , wear latex gloves and a protective eye-wear. The reaction of FeCl3 and water is highly Exothermic and releases toxic fumes.
Place a heap of FeCl3 beside the board. Bring boiling water and add it slowly to the container ( add a very small quantity of water , just enough to completely submerge the board ).
Stir the container constantly until the PCB has been fully etched ( this may take up to 15 mins ). After the PCB has been etched, remove it using plastic tweezers or tongs. Carefully transfer the solution to another bottle ( do not drain the solution without neutralizing it ).
Wash the PCB thoroughly and scrub off the toner using steel scrub and acetone / rubbing alcohol. Dry the board and Drill it ( 1mm or 0.8 mm bits ). Sand the edges to achieve a better looking PCB.
Step 4: SMD Version II - Making the Board
After the board is complete , Tin it using a chisel tip and solder.
Start soldering the LEDs first ( remember , the inner ring is for ground and the "blue" dot on the LED represents ground terminal ).
Then solder the SOT-23-3 BC847 transistors. Solder the 10K resistor and then solder the LDR.
Make sure that the LDR is firm and cap it with a 4mm heat shrink tube.
Now solder the ribbon cable ( in any sequence ). The other end of the ribbon cable has to be in the proper sequence:
White - Red - Green - Blue - Vcc - Gnd - Analog
To make it look like "professional PCBs" paint it green / red using a permanent marker.
Your sensor is ready for testing !
Step 5: Testing and Graphing the Results
When you've finished making your sensors, plug them into your bread board and hook it up to your Arduino.
Arduino | Sensor
pin 9 -------R
pin 10 ------G
pin 11 ------B
Gnd -------Gnd
Vcc --------Vcc
A0 ---------Analog
Place a white object in front of the sensor and Run the given code on arduino :
int LED[3] = {9, 10, 11}, i, j;            // DECLARE R G B PINS

void setup() {
  Serial.begin(9600);
  for (i = 0; i < 3; i++)                  // set LED pins to OUTPUT
    pinMode(LED[i], OUTPUT);
}

void loop() {
  for (j = 0; j < 3; j++) {                // CYCLE PINS
    for (i = 0; i < 255; i++) {            // CYCLE VALUES
      analogWrite(LED[j], i);
      Serial.println(1024 - analogRead(0)); // PRINT VALUES
      delay(100);
    }
    analogWrite(LED[j], 0);
    delay(100);
  }
}
And run this code in processing :
import processing.serial.*;

Serial myPort;                  // The serial port
float xPos = 20, prevtime = 0;  // horizontal position of the graph

void setup () {
  // set the window size:
  size(1300, 700);
  // List all the available serial ports
  println(Serial.list());
  // I know that the first port in the serial list on my mac
  // is always my Arduino, so I open Serial.list()[0].
  // Open whatever port is the one you're using.
  myPort = new Serial(this, Serial.list()[1], 9600);
  // don't generate a serialEvent() unless you get a newline character:
  myPort.bufferUntil('\n');
  // set initial background:
  background(255);
}

void draw () {
  // everything happens in the serialEvent()
}

void serialEvent (Serial myPort) {
  // get the ASCII string:
  String inString = myPort.readStringUntil('\n');
  if (inString != null) {
    // trim off any whitespace:
    inString = trim(inString);
    // convert to an int and map to the screen height:
    float inByte = float(inString);
    inByte = map(inByte, 0, 1023, 0, height);
    // draw the line:
    stroke(255, 0, 0);
    line(xPos, height, xPos, height - inByte);
    // at the edge of the screen, go back to the beginning:
    if (xPos >= width) {
      xPos = 20;
      background(255);
    } else
      xPos += 0.7;
  }
}
Try changing the codes and delay values, the graphs will change
More delay gives more precise graphs
The graph clearly shows that the variation of brightness with the PWM value isn't the same for the R, G and B LEDs, and hence they need to be calibrated.
Step 6: Testing and Calibrating the Color Sensor
Now here comes the code , which analyses and calibrates each color as per the values that are reflected back.
/* Color Sensor code by - electro18
   find more details about this project at :
   This code is open source and is created by
   It demonstrates the use of LEDs and LDRs as a color sensor
   Steps:
   Place a white screen in front of the sensor
   Power up the arduino
   Let it calibrate for a while
   Once it is calibrated , the colors RGB will flash periodically
   The percentage composition of the particular color will be displayed on the serial monitor
   Open the serial monitor for debugging and to verify the values */

int sensor, minVal, Val[3], colArray[3] = {9, 10, 11}, total;
float Percent[3];
int i, readRGB[3], readMax[3], Domin;   // DECLARE VARIABLES
long calibtime, prevtime;               // RECORD THE TIME ELAPSED

void setup() {
  Serial.begin(9600);
  for (i = 0; i < 3; i++) {
    pinMode(colArray[i], OUTPUT);       // SET THE OUTPUT PINS
  }
  calibrate();                          // RUN THE CALIBRATE FUNCTION
}

void loop() {
  total = 0;
  for (i = 0; i < 3; i++) {             // CHECK VALUES IN A LOOP
    prevtime = millis();
    while (millis() - prevtime < 1000) { // AVOID DELAY
      analogWrite(colArray[i], Val[i]); // WRITE THE CALIBRATED VALUES
      readRGB[i] = 1024 - analogRead(0);
      delay(50);
    }
    digitalWrite(colArray[i], 0);
    prevtime = millis();                // RESET TIME
    total = total + readRGB[i];
  }
  for (i = 0; i < 3; i++) {
    Percent[i] = readRGB[i] * 100.0 / total; // PRINT IN THE FORM OF PERCENTAGE
    Serial.print(Percent[i]);
    Serial.print(" % ");
  }
  Serial.println("");
  delay(1000);
}

//////////////////////////////////////////////////////////////////////////////

void calibrate() {                      // CALIBRATE FUNCTION
  for (i = 0; i < 3; i++) {
    while (millis() - calibtime < 1000) { // FLASH EACH COLOR AT MAX FOR 1 SEC
      analogWrite(colArray[i], 255);
      readMax[i] = 1024 - analogRead(0); // RECORD MAX VALUES
    }
    analogWrite(colArray[i], 0);
    Serial.println(readMax[i]);
    delay(10);
    calibtime = millis();
  }
  if (readMax[0] < readMax[1] && readMax[0] < readMax[2]) // GET THE MINIMUM VALUE FROM ARRAY
    minVal = readMax[0];
  else {
    if (readMax[1] < readMax[0] && readMax[1] < readMax[2])
      minVal = readMax[1];
    else
      minVal = readMax[2];
  }
  for (i = 0; i < 3; i++) {
    analogWrite(colArray[i], 10);
    sensor = 1024 - analogRead(0);      // START CALIBRATION
    delay(100);
    while (sensor - minVal <= -1 || sensor - minVal >= 1) { // GET THE DIFFERENCE BETWEEN CURRENT VALUE AND THRESHOLD
      sensor = 1024 - analogRead(0);
      if (sensor > minVal)              // INCREASE OR DECREASE THE VALUE TO EQUALIZE THE BRIGHTNESS
        Val[i]--;
      else
        Val[i]++;
      Serial.print(1024 - analogRead(0));
      Serial.print(" ");
      Serial.println(minVal);
      delay(50);
      Val[i] = constrain(Val[i], 0, 255); // CONSTRAIN THE VALUE B/W 0 -- 255
      analogWrite(colArray[i], Val[i]);
    }
    analogWrite(colArray[i], 0);
    delay(50);
  }
}
Working of the code :
STEPS :
1. Place a White non-glossy object in front of the sensor
2. Power-up your arduino
3. The sensor will start auto-calibration sequence
4. It will flash all three colors first , then it will equalize all the colors
5. The calibration process completes when it starts to cycle RGB sequence
6. Place any object whose color you want to analyze
7. Open the serial monitor for getting the values in % format
Explanation :
Basically the sensor Flashes each color and records the max values when a white object is placed in front of it.
It notes the light that reflects back and compares all the values.
The minimum value is set as the threshold and then it tries to equalize all the colors ( R, G, B )
After the calibration has been done , the program starts the loop which checks the color. It does the job by reading the reflected colors from the surface and then converting these values in a systematic form.
The amount of color reflected tells the percentage of every color in the particular color.
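As a rough sketch of that percentage step ( the function and variable names here are only illustrative, not the exact ones from the sketch above ), the math boils down to:

// Express each reflected-light reading as a share of the total of all three channels.
void toPercentages(const int readRGB[3], float percent[3]) {
  int total = readRGB[0] + readRGB[1] + readRGB[2];
  if (total == 0) total = 1;                  // avoid division by zero
  for (int i = 0; i < 3; i++) {
    percent[i] = readRGB[i] * 100.0 / total;  // % of red, green, blue
  }
}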
Step 7: Applications
Recently, I've used this sensor to make a Chameleon lamp.
This lamp uses a color sensor to sense the color of its base and replicate it using RGB LEDs. It is interesting to see how simple and cheap electronics can be used to make something unusual. And the lamp worked quite well ( unexpectedly though :D )
Still there are infinite possible ways in which this sensor can be used.
Step 8: TROUBLESHOOTING AND CONCLUSION
Troubleshooting and Precautions :
1. Verify that the LEDs are flashing correctly
2. Ambient light may interfere with the sensor and give false readings
3. To calibrate the sensor , you need a perfectly white and non-glossy surface.
4. Use appropriate current limiting resistors for your LED
Conclusion :
This project is an example of how some simple components can be connected together to form something unique and fascinating.
This sensor can be used in robots, for sorting objects of different colors and so on....
Questions , suggestions and critics are welcome :)
If you find my Instructable interesting then please leave a vote :)
Nice, thanks, it really helped me
Hello
This is a really awesome color sensor, congrats!
I wanted to know if it's possible to configure this color sensor to work without an Arduino, like using it for a Lego NXT robot... Do you know if it is possible?
Hi
Is it possible to measure different shades of a colour
Well, yes you can measure different shades of colors using this sensor ( if calibrated correctly ) though the sensor can't be used to record shades that just differ by a tad bit.
Nice project! please could you add txt file for your code.
Hey ! Thanks
Sorry but I'm currently unable to access my account from the web, I have to rely on the mobile site. Though you could just copy the code from the instructable itself.
Dear Sir,
Do you think the color sensor could measure a skin color?
Thank you.
Sorry for the silly question... can I ask about the perfboard version?
Can i know what type of rgb led been used?
Common anode or cathode?
Nice idea...
What would you use one of these sensors in?
I have seen them costing a tonne so this seems economical. Very good.
As said earlier , these sensors have a wide range of applications right from robots to industrial equipment ( to check and analyse the color composition of any object ). With the help of these sensors, you can actually convert the color of any tangible object into digital data.
thanks.
Can we read colors other then Red, Green, Blue? Like shades of main colors, greys, oranges, browns, pinks, purples, white and black etc.
Yes ! that's why I've used white LEDs in the SMD board. The RGB LEDs detect the color and the white LEDs give information about the shade of that color. You can theoretically detect and replicate around 16M colours but as stated above, the precision cannot be achieved due to irregular variations in the brightness of the LEDs vs the voltage applied. The response time ( and the response for various wavelengths ) of the LDR also decides the accuracy of this sensor.
can we use red blue and green led's instead of one RBG Led?
Yes you can surely use discrete R, G and B LEDs. But you need to be careful with the Green LED because most of the green LEDs ( except superbright LED ) tend to have a slight yellow-shift in the color. This might give you false readings and that's why I recommend you to use an RGB LED ( for accuracy ).
Perfect!!! Thanks!
Voted!
I only have 2 questions. For the smd 10k resistor, can I just use a normal 10k? And is there any way you can upload a schematic of the perfboard version? Your pictures are kind of hard to follow.
There you go :
Hope this helps :)
Thanks a lot ! :)
Yes , you can definitely use a simple 10K resistor instead of the SMD one.
I'll be uploading the perf-board files shortly.
Simply amazing! I'm going to make this tomorrow for my high school robotics class. Should score me an A, right? :) Thanks so much for this awesome instructable!
Thanks ! :)
Yes ! I'm sure this is gonna fetch you an A !
Good luck for your project and feel free to ask if anything seems unclear !
If you find this instructable helpful , please leave a vote ! :)
Nice instructable, I found really interesting, thank you. You could say how much fast is the sensor, once is calibrated and working through arduino?
Thanks !
After calibration and all the start-up processes, the sensor will start reading. The speed of the sensor depends on the delay() that you've used in the program. Reducing the delay will enable you to take more readings in less time, but they won't be as accurate; this is because the CdS photocell is quite slow and its response time ( rising and falling ) is considerably high [ 25 ~ 60 ms ]. To get proper and accurate readings, it is recommended to run the sensor at low speed ( 100 ms - 150 ms ). Hope this helps :)
Nice. I'm going to make one to let my robot identify my coloured doors while it drives around the house. Thanks
Awesome Idea !
Good luck for your project ! :)
Hi'
Very usefull sensor Thanks
I'm glad that you found it interesting :)
|
http://www.instructables.com/id/Your-Own-Color-Sensor-using-LEDs/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
An application does not create an instance of the IDbTransaction interface directly, but creates an instance of a class that inherits IDbTransaction.
Classes that inherit IDbTransaction must implement the inherited members, and typically define additional members to add provider-specific functionality. For example, the IDbTransaction interface defines the Commit method. In turn, the OleDbTransaction class inherits this method, and also defines the Begin method.
Notes to Implementers:
To promote consistency among .NET Framework data providers, name the inheriting class in the form Prv Transaction where Prv is the uniform prefix given to all classes in a specific .NET Framework data provider namespace. For example, Sql is the prefix of the SqlTransaction class in the System.Data.SqlClient namespace.
The following example creates instances of the derived classes, SqlConnection and SqlTransaction. It also demonstrates how to use the BeginTransaction and Commit methods.
|
https://msdn.microsoft.com/en-us/library/system.data.idbtransaction(v=vs.90)
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Download: their use is recommended for all new projects.
SPI has 4 signals: SS, SCK, MOSI, MISO. SCK is a clock signal. Master Out Slave In (MOSI) sends data from the SPI master to one or more slaves. Master In Slave Out (MISO) is how slaves send data back to the master. To talk to only one of several slaves, the Slave Select (SS) pin is used. Thus, some chips need only 3 or even 2 of these signals; a display, for example, will use MOSI but not MISO, as it is an output only device.
Multiple SPI devices use the same SPI SCK, MISO and MOSI signals but each device will need its own SS pin.
Arduino automatically defines "SS", "SCK", "MOSI", and "MISO" as the pin
numbers for the selected board. Teensy 3.0 and 3.1 can use an alternate set of SPI pins; see below.
Call this function first, to initialize the SPI hardware. The SCK, MOSI and MISO pins are initialized. You should manually
configure the SS pin.
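For instance, a minimal setup might look like the sketch below (pin 10 is just an example chip-select pin, not a requirement of the library):

#include <SPI.h>
const int csPin = 10;          // example chip-select pin

void setup() {
  SPI.begin();                 // initializes SCK, MOSI and MISO
  pinMode(csPin, OUTPUT);      // the SS pin must be configured manually
  digitalWrite(csPin, HIGH);   // keep the device de-selected until needed
}

void loop() {
}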
If your program will perform SPI transactions within an interrupt, call this function
to register the interrupt number or name with the SPI library. This allows beginTransaction()
to prevent usage conflicts.
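A minimal sketch of how this registration might look (the pin number and interrupt routine are hypothetical):

#include <SPI.h>

volatile int pulses = 0;

void onSensorPulse() {
  // an SPI transaction (beginTransaction / transfer / endTransaction) could run here
  pulses++;
}

void setup() {
  SPI.begin();
  SPI.usingInterrupt(digitalPinToInterrupt(3));   // tell the SPI library this interrupt uses the bus
  attachInterrupt(digitalPinToInterrupt(3), onSensorPulse, FALLING);
}

void loop() {
}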
Begin using the SPI bus. Normally this is called before asserting the chip select
signal. The SPI is configured to use the clock, data order
(MSBFIRST or LSBFIRST) and data mode (SPI_MODE0, SPI_MODE1, SPI_MODE2, or SPI_MODE3).
The clock speed should be the maximum speed the SPI slave device can accept.
Most SPI devices define a transfer of multiple bytes. You need to write
the SS pin before the transfer begins (most chips use LOW during the transfer)
and write it again after the last byte, to end the transfer.
See below for more SS pin details.
Transmit a byte from master to slave, and simultaneously receive a byte from slave to master.
SPI always transmits and receives
at the same time, but often the received byte is ignored. When only reception is needed,
0 or 255 is transmitted to cause the reception.
Stop using the SPI bus. Normally this is called after de-asserting the chip select,
to allow other libraries to use the SPI bus.
#include <SPI.h>  // include the SPI library
const int slaveSelectPin = 20;
void setup() {
  // set the slaveSelectPin as an output:
  pinMode(slaveSelectPin, OUTPUT);
  // initialize the SPI bus:
  SPI.begin();
}
void loop() {
  // illustrative usage: sweep an SPI digital pot (channel 0) through its levels
  for (int level = 0; level < 255; level++) {
    digitalPotWrite(0, level);
    delay(10);
  }
}
void digitalPotWrite(int address, int value) {
  digitalWrite(slaveSelectPin, LOW);   // assert chip select
  SPI.transfer(address);               // send the register address
  SPI.transfer(value);                 // send the value
  digitalWrite(slaveSelectPin, HIGH);  // release chip select
}
However, the SS pin must either be configured as an output, or if it is an input, it must
remain low during the SPI transfer. Unconfigured pins default to input, and a pin with
no signal can easily "float" to random voltages due to electrical noise. Always configure
the SS pin as an output, or make sure it remains low.
Most SPI devices are designed to work together with others, where SCK, MISO, and MOSI
are shared. Each chip needs a separate SS signal. Only the selected chip will
communicate. The others ignore SCK and MOSI, and avoid driving MISO when they are
not selected.
The SPI protocol allows transmission speeds ranging from 1 MHz to 100 MHz. SPI slaves vary in the maximum speed at which they can reliably work. Slower speeds are usually needed when 5 volt signals are converted to 3 volts using only resistors. Together with the capacitance associated with the wire and pins,
resistors can distort the pulses on the wire, requiring slower speeds.
Very long wires may also require slower speeds. When using long wires (more than
25 cm or 1 foot), a 100 ohm resistor should be placed between the Teensy's pin and
the long wire.
Most SPI chips transfer data with the MSB (most significant bit) first, but LSB first is also used by some devices.
The serial clock can be either normally high or normally low (clock polarity), and data can be sent on the rising or falling clock edge (clock phase). The four combinations of clock phase and polarity are expressed as the clock mode, numbered 0 to 3 (more on clock modes). Most SPI chips are designed to work with either mode 0 or mode 3.
If all the SPI slaves in a project use mode 0, MSB first, and work at the default 4MHz clock speed, you don't need to set any SPI options. If any use non default setting, then define this for all SPI devices you are using.
A common problem used to be that different SPI devices needed different, incompatible settings. Your sketch had to take care of saving and restoring the SPI settings before communicating with each SPI device. If any SPI device was accessed from an interrupt, this could result in data corruption if another SPI device was communicating at the time.
With the new SPI library, configure each SPI device once as an SPISettings object. Also, if that device will be called from an interrupt, say so with SPI.usingInterrupt(interruptNumber). To communicate with a specific SPI device, use SPI.beginTransaction which automatically uses the settings you declared for that device. In addition, it will disable any interrupts that use SPI for the duration of the transaction. Once you are finished, use SPI.endTransaction() which re-enables any SPI-using interrupts.
#include <SPI.h> // include the new SPI library:
// using two incompatible SPI devices, A and B
const int slaveAPin = 20;
const int slaveBPin = 21;
// set up the speed, mode and endianness of each device
SPISettings settingsA(2000000, MSBFIRST, SPI_MODE1);
SPISettings settingsB(16000000, LSBFIRST, SPI_MODE3);
void setup() {
// set the Slave Select Pins as outputs:
pinMode (slaveAPin, OUTPUT);
pinMode (slaveBPin, OUTPUT);
// initialize SPI:
SPI.begin();
}
uint8_t stat, val1, val2, result;
void loop() {
// read three bytes from device A
SPI.beginTransaction(settingsA);
digitalWrite (slaveAPin, LOW);
// reading only, so data sent does not matter
stat = SPI.transfer(0);
val1 = SPI.transfer(0);
val2 = SPI.transfer(0);
digitalWrite (slaveAPin, HIGH);
SPI.endTransaction();
// if stat is 1 or 2, send val1 or val2 else zero
if (stat == 1) {
result = val1;
} else if (stat == 2) {
result = val2;
} else {
result = 0;
}
// send result to device B
SPI.beginTransaction(settingsB);
digitalWrite (slaveBPin, LOW);
SPI.transfer(result);
digitalWrite (slaveBPin, HIGH);
SPI.endTransaction();
}
Used by older versions of the SPI library, this method was not interrupt safe and depended on your sketch doing low-level SPI configuration management.
SPI speed was set indirectly, as a function of the Teensy clock, with SPI.setClockDivider(divider). SPI_CLOCK_DIV2 was the fastest option. This meant that code running on a 16MHz Teensy 2 and a 96MHz Teensy 3.1 would set the SPI speed differently to achieve same actual speed for the device.
SPI bit order was set with SPI.setBitOrder(LSBFIRST), and SPI.setBitOrder(MSBFIRST) to set it back to the default.
The SPI library defaults to mode 0. If a different mode
was needed, SPI.setDataMode(mode) was used.
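Putting those older calls together, configuration for a hypothetical device might have looked something like this sketch (the values are only illustrative):

#include <SPI.h>

void setup() {
  SPI.begin();
  SPI.setClockDivider(SPI_CLOCK_DIV2);  // fastest option; actual speed depends on the CPU clock
  SPI.setBitOrder(MSBFIRST);            // the default bit order
  SPI.setDataMode(SPI_MODE3);           // only needed when mode 0 is not suitable
}

void loop() {
}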
Sometimes, the SPI pins are already in use for other tasks when an SPI device is added to a project. If that task is simply a digital pin, or an analog input, it is usually better to move that to another pin so that the hardware SPI can be used. Sometimes though, the conflicting pin cannot be moved. The Audio Adapter, for example, uses some of the SPI pins to talk to the Audio DAC over I2S. For this case, Teensy 3.0 and 3.1 provide an alternate set of SPI pins.
The main SPI pins are enabled by default. SPI pins can be moved to their alternate position with SPI.setMOSI(pin), SPI.setMISO(pin), and SPI.setSCK(pin). You can move all of them, or just the ones that conflict, as you prefer. The pin must be the actual alternate pin supported by the hardware, see the table above; you can't just assign any random pin.
You should be aware that libraries sometimes have to move SPI pins. (The Audio Library is an example). If you add an SPI device to your project and it does not work, check whether the library has moved the pins and if so, use the same pins the library does.
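As a sketch, moving only the conflicting pins might look like this (the pin numbers below are placeholders; use the alternate pins your board actually supports):

#include <SPI.h>

void setup() {
  SPI.setSCK(14);   // move SCK to its alternate position before SPI.begin()
  SPI.setMOSI(7);   // move MOSI to its alternate position
  SPI.begin();
}

void loop() {
}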
If all else fails, the SPI protocol can be emulated ("bit-banged") in software. This has the advantage that any convenient pins can be used, and the disadvantage that it is much, much slower and prevents your sketch from doing useful work meanwhile.
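A minimal example of what such a software SPI write could look like, assuming mode 0 and MSB-first (the pin numbers are arbitrary):

const int bbSck = 4, bbMosi = 5, bbCs = 6;    // any free digital pins will do

void bitBangWrite(uint8_t value) {
  digitalWrite(bbCs, LOW);                    // select the device
  for (int bit = 7; bit >= 0; bit--) {
    digitalWrite(bbMosi, (value >> bit) & 1); // present the data bit
    digitalWrite(bbSck, HIGH);                // slave samples on the rising edge
    digitalWrite(bbSck, LOW);
  }
  digitalWrite(bbCs, HIGH);                   // de-select the device
}

void setup() {
  pinMode(bbSck, OUTPUT);
  pinMode(bbMosi, OUTPUT);
  pinMode(bbCs, OUTPUT);
  digitalWrite(bbCs, HIGH);
}

void loop() {
  bitBangWrite(0x55);                         // example transfer
  delay(100);
}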
SPI slave devices do the opposite. They wait for a master to select them, and they
receive the SCK and MOSI signals from the master and transmit on MISO. Virtually all
chips controlled by SPI are slave devices.
The SPI port can work in slave mode, which may be useful if Teensy should appear as
a SPI device to be controlled by another Teensy or other board. The SPI library
does not support slave mode.
Apart from this AVR example, is there an Arduino library which supports slave mode?
|
https://www.pjrc.com/teensy/td_libs_SPI.html
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
Kinect SDK Skeleton Custom Control for WPF
- Posted: Jul 13, 2011 at 6:00 AM
- 11,631 Views
Why re-invent the skeleton wheel when we can stand on the shoulders of giants (and leverage their work)?
I’ve been using the Microsoft Research SDK for Kinect for a little over a week in my spare hours, it is tonnes of fun and so much more stable then the previous frameworks I had been using. One thing that I think everyone will want to do is show a nice skeleton of up to two people being tracked by the Kinect.
I’ve created a Custom WPF Control ready for you to use or to style to your taste in Blend. It uses András Velvárt Bone Behaviour to create awesome little bones between the tracked joints.
...
Project Information URL:
Project Download URL:
public bool Jump { get { return SkeletonData.Joints[JointID.FootLeft].Position.Y > -0.7 && SkeletonData.Joints[JointID.FootRight].Position.Y > -0.7; } }

public bool LeftArmOut { get { return ((SkeletonData.Joints[JointID.HandLeft].Position.X - SkeletonData.Joints[JointID.ShoulderLeft].Position.X) < -0.5); } }

public bool RightArmOut { get { return ((SkeletonData.Joints[JointID.HandRight].Position.X - SkeletonData.Joints[JointID.ShoulderRight].Position.X) > 0.5); } }

public bool RightArmUp { get { return ((SkeletonData.Joints[JointID.HandRight].Position.Y - SkeletonData.Joints[JointID.Head].Position.Y) > 0); } }

public bool LeftArmUp { get { return ((SkeletonData.Joints[JointID.HandLeft].Position.Y - SkeletonData.Joints[JointID.Head].Position.Y) > 0); } }

public bool Crouched { get { return (Math.Abs(SkeletonData.Joints[JointID.FootLeft].Position.Y - SkeletonData.Joints[JointID.HandLeft].Position.Y) < 0.2); } }

public bool HandsTogether { get { return (Math.Abs(SkeletonData.Joints[JointID.HandRight].Position.Y - SkeletonData.Joints[JointID.HandLeft].Position.Y) + Math.Abs(SkeletonData.Joints[JointID.HandRight].Position.X - SkeletonData.Joints[JointID.HandLeft].Position.X) <.
|
http://channel9.msdn.com/coding4fun/kinect/Kinect-SDK-Skeleton-Custom-Control-for-WPF
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
16 April 2013 22:56 [Source: ICIS news]
LONDON (ICIS)--A French court on Tuesday rejected the two remaining bids for the 161,800 bbl/day Petroplus refinery at Petit-Couronne near Rouen, France.
Following the ruling, the refinery will be liquidated.
Neither bid offered sufficient guarantees to ensure the continued operation of the refinery,
Petit-Couronne was one of five European refinery sites affected by the insolvency of Switzerland-based Petroplus in
|
http://www.icis.com/Articles/2013/04/16/9659677/french-court-rejects-bids-for-petroplus-refinery.html
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
Default implementation of credentials interface. More...
#include <qgscredentials.h>
Default implementation of credentials interface.
This class outputs messages to the standard output and retrieves input from the standard input. Therefore it won't be the right choice for apps without a GUI.
Definition at line 98 of file qgscredentials.h.
Definition at line 94 of file qgscredentials.cpp.
signals that object will be destroyed and shouldn't be used anymore
request a password
Implements QgsCredentials.
Definition at line 99 of file qgscredentials.cpp.
|
http://qgis.org/api/classQgsCredentialsConsole.html
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
naming conventions: id, xtype
Common naming convention for xtypes?
Just curious of any naming conventions being used for id's versus xtype. I had an inclination to use the same name.
Code:
items: [{
    xtype: 'panel-login',
    itemId: 'loginPanel',
    id: 'loginPanel',
    region: 'north',
    height: 50
},
I haven't tracked down what the problem is yet, but if you use the same id and xtype name it causes problems. For example, using something like:
Code:
items: [{
    xtype: 'loginPanel',
    itemId: 'loginPanel',
    id: 'loginPanel', // note I match the xtype
    region: 'north',
    height: 50
},
Code:
Ext.getCmp('loginPanel')
Ext.getCmp('loginPanel') has no properties
By matching the id to the xtype you are restricted to one instance of that class...
And in your example Ext.getCmp() is looking for "'loginPanel'" instead of 'loginPanel' so it won't find it.
We're all using our own namespaces like good coders, but if we're sharing code in the community, it seems like there might be a good standard to go by so our xtypes don't get clobbered by others... or are easily intuitive, etc.
I try to steer clear of using ids as much as possible; they are akin to global variables in my opinion. Using getId() or itemId and getComponent does the job for me in virtually every scenario I come across.
In regards to xtype naming, the norm seems to be lowercase without word separation, which is a bit contradictory to the class naming convention, which seems to be capitals for new words. Although I think I recall looking in the code and seeing that the xtype is actually formatted to lower case, so it doesn't matter if you use capitals to separate words for xtype.
And really xtype doesn't have anything to do with id; it is the name of the class / method being called. In your example above it holds relevance, but when you think of a more generic type like xtype: 'panel' it does not seem so relevant.
|
http://www.sencha.com/forum/showthread.php?46023-naming-conventions-id-xtype
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
libnice stores its headers in e.g. /usr/include/nice/ but 'pkg-config --cflags nice'
gives '-D_REENTRANT -I/usr/include/nice', meaning a generically named header such as agent.h, debug.h or interfaces.h is includable as:
#include <agent.h>
#include <debug.h>
#include <interfaces.h>
Moreover, the include-guard used in e.g. agent.h:
#ifndef _AGENT_H
#define _AGENT_H
is overly generic as well, and risks collision with other packages.
I couldn't agree more, but changing this would break existing apps. So it will have to wait until we do an API break of some kind.
The include-guards can be changed already.
Possibly, we should encourage people to use #include <nice/agent.h> instead of just <agent.h>.
I improved the guards a little.
commit 52534c43be1fdc74cd15f64ba28b8e753c212b62
Author: Olivier Crête <olivier.crete@collabora.com>
Date: Mon Apr 20 15:44:16 2015 -0400
Prefix include guards
The include file names are very generic, at least make
the guards a bit less generic.
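A sketch of what a prefixed guard might look like after this change (the exact macro name used in the commit may differ; this is only illustrative):

/* nice/agent.h */
#ifndef NICE_AGENT_H
#define NICE_AGENT_H

/* ... declarations ... */

#endif /* NICE_AGENT_H */

Consumers would then also be encouraged to include the header through the namespaced path, e.g. #include <nice/agent.h>, rather than the bare <agent.h>.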
Migrated to Phabricator:
|
https://bugs.freedesktop.org/show_bug.cgi?id=90013
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
69172/how-to-create-a-not-null-column-in-case-class-in-spark
Hi@Deepak,
In your test class you passed empid as a string; that's why it shows nullable=true. So you have to import the package below.
import org.apache.spark.sql.types._
Then you can use this kind of code in your program.
df.withColumn("empid", $"empid".cast(IntegerType))
df.withColumn("username", $"username".cast(StringType))
|
https://www.edureka.co/community/69172/how-to-create-a-not-null-column-in-case-class-in-spark?show=69204
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
How to Start a Blog and Make Money
Blogs are a great way to make money online — so much so that today many successful bloggers make a full-time income from their blogs.
Read on for a step-by-step guide on how to make money blogging.
Table of Contents
- How to start a blog
- How to make money blogging
- How to start a blog and make money FAQ
- Summary of How to start a blog and make money
How to start a blog
Starting your own blog takes creativity, some technical know-how, and quite a bit of strategic thinking.
Here are five steps to take to help your website succeed:
1. Pick the right topic
It could be the most frequently cited piece of writing advice: write what you know. This is especially true when it comes to your own blog.
When you’re starting your own site, it’s important to center it around issues that you’re both passionate and knowledgeable about.
This will help you stay motivated to create new content frequently, which will be essential to your blog’s popularity. You’ll also be more likely to create engaging, truly helpful content that readers are likely to share in social media.
Additionally, writing about topics you have established expertise on increases your credibility and authority — which can help you both grow an audience and improve your ranking in search engine results.
2. Buy a domain name
Put simply, a domain name is the name of your website, or what comes after the “www” in a web address.
To purchase a domain name, look for a domain registrar — a company that sells and registers website domains — that’s accredited by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is a nonprofit organization that coordinates IP addresses and namespaces on the internet. You can also do it directly with a web hosting company, many of which typically offer a free domain name for a year with a hosting plan subscription.
You can choose your new domain name before building your website or later on, if you decide to start with a free blog domain. However, it’s a good idea to buy it as soon as you have an official brand or blog name.
You’ll also have to decide on a domain extension or a top-level domain (TLD). Even though there are hundreds of domain extension options, .com and .net are the most popular and are usually given preference by search engines like Google.
Choosing one of these two could, in the long run, help your blog rank higher in the results page than if you choose less common extensions like .blog or .club, for instance.
3. Select a hosting service
To get your blog online, you’ll need a web hosting service.
A web host is a company that can store, maintain and manage access and traffic to your website. A web hosting service is necessary — it gives your website a home, and without it you wouldn’t be able to publish your site on the internet.
Most web hosting providers, including popular options such as Bluehost, Dreamhost, SiteGround and GoDaddy, offer three types of hosting services.
Shared hosting: Refers to a service where a single web server hosts many websites. It’s one of the most popular types of web hosting, and affordable since the server is shared with multiple users.
Virtual Private Server (VPS) hosting: A service where multiple websites are hosted on the same server, but each user gets dedicated resources. VPS hosting is more expensive than shared hosting and may require some technical knowledge to configure.
Dedicated hosting: Provides a dedicated server for your website. It’s ideal for websites with heavy traffic that can benefit from more responsiveness and the flexibility to upgrade and control performance. It’s expensive, though, with monthly plans ranging from $100 to over $200.
When choosing a web hosting provider, check whether it offers the type of service you need at a price you’re able to afford for at least a couple of years. In addition, consider factors such as the server’s uptime, response time, scalability, ease of use and customer support.
Reports like the Signal’s WordPress Hosting Performance Benchmarks and HRANK’s Web Hosting Companies Rating provide reliable data on web hosting companies’ uptime metrics and may give you an overall idea of its performance.
Make sure to check out our guide to the best web hosting companies for some great choices.
4. Choose a blogging platform
A blogging platform is a web-based service that allows users to create, manage and publish blog posts. Most blogging platforms also include tools for optimizing your website with metadata, title descriptions and keywords that make it easy for search engines to identify what the page is about.
Many popular blogging platforms offer both free and paid options, including some of the most widely used sites like WordPress, Medium, Weebly and Blogger. There are also website builders like Wix and Squarespace, which require less tech-savviness.
Many blogging platforms already come with pre-made themes that you can customize. A theme typically includes templates, layouts, colors, images and other features you need to format the website and its content.
But your theme affects much more than your page’s looks — your blog’s theme can also impact your ranking in search engine results. When choosing a template, do some research first and make sure it’s responsive, loads quickly, is mobile-friendly and works with plugins.
5. Publish your first blog post
Once you’ve picked a web hosting service, a blogging platform and a theme for your website, you’re ready to start your blogging journey.
The key to generating page traffic is to create original, high-quality content and publish new blog posts on a regular basis.
Keep in mind what potential readers are looking for and why (otherwise known as the user search intent), your blog’s central theme, and what others have already published on the topic. This way you can identify what needs to be written or how to present the information in an original and creative way.
Keyword research through Google Analytics (or even just Google Search) is key to finding relevant content ideas. Learning proper search engine optimization (SEO) techniques is also essential if you want to increase traffic to your site and rank higher in the results page.
Lastly, it’s important to stay authentic to your voice and be mindful of your grammar. Mistakes and typos can be off-putting to many readers and take a toll on your site’s credibility. If grammar isn’t your strong suit, it’s a good idea to invest in one of the many writing-assistance apps on the market now, which are designed to catch and correct spelling and grammar mistakes.
How to make money blogging
Revenue largely depends on generating traffic to your website. Gaining and growing an audience may take a lot of time and effort, but with the right strategy you might see results sooner rather than later.
It’s important to create content consistently and establish a social media presence — once you do so, there are quite a few ways to start making money from your blog.
1. Display ads
A simple way to start earning some revenue is to sell ad space.
Letting brands advertise on your page has many advantages, especially since it doesn’t require a big time investment from you.
There are two ways to generate income selling ad real estate:
Cost per click (CPC): Also known as pay per click (PPC), this means you get paid each time users click on an ad shown on your website.
Cost per thousand (CPM): Also known as cost per mille, this lets you negotiate a set price for every 1,000 impressions (or views) the ad gets.
To get started, you’ll need to create an account with an advertising network, such as Google AdSense, Mediavine, BuySellAds, PropellerAds or other similar platforms.
Tip: Use ads judiciously. Filling up your site with tons of ads can affect its ranking, credibility, load time and, ultimately, the user’s experience.
2. Join affiliate programs
Many bloggers sample products or services and review them on their site using affiliate links (or tracking links) that redirect readers to the sellers’ website.
This process is known as affiliate marketing and it lets you earn a commission for every sale, click, lead or transaction your content generates to a seller or company.
There are several affiliate programs and networks you can join, including some from popular stores and e-commerce sites. These include:
Amazon Associates
Apple
WalMart
Commission Junction
ShareASale
eBay Partner Network
Joining an affiliate program will let you find a list of products to review and tools that let you keep track of links’ performance and increase conversion rate — that is, the number of users that complete a desired action or transaction in your site.
Tip: Set up news alerts to find hot new products your readers might be interested in.
3. Sell products
Selling your own products or services is another good monetization method for a blog.
Make time to create products that add value to your readers and visitors, preferably things that tie in with your blog. While these can be physical products — for example, books or photographic prints — they can also be digital products like PDFs or audio files that your readers can download.
Most web hosting providers and blogging platforms have widgets and other features that you can add to create an online store. These are typically known as plugins, which are a bit of code that give your website added functionality. Plugins give you the ability to add secure contact forms, optimize your images or create online stores.
There are also many popular WordPress plugins and eCommerce platforms like WooCommerce, BigCommerce, Ecwid and Shopify you can use to get started.
Tip: Don’t have your blog revolve around your products even if you add an online store. Instead, keep creating the high-quality content that attracted readers in the first place.
4. Post sponsored content
Many popular bloggers seek out sponsorships, that is, they get a company to pay them to write sponsored posts that promote or talk about its products.
Let’s say you occasionally upload tutorial videos to your photography blog showing how you edit photos in a particular app or software. You could then approach the app manufacturer and ask whether they’d be interested in sponsoring that particular post.
Typically, to get a sponsorship you have to reach out to a brand and make a pitch. Your pitch should include a brief explanation of who you are and what you do, along with details on your blog’s performance, such as audience demographics and traffic statistics.
Alternatively, you can try writing paid reviews. This option is like a sponsorship with one main difference: you’re sent a product for free or given early access to an app or software, so that you can test it and write a review about it.
Tip: Think of your readers when you seek out sponsorships. Make sure to review products or partner with companies that are relevant to your blog’s content and that your audience will find helpful.
5. Create a membership
Some readers may be willing to pay for a membership plan to get access to exclusive content, such as downloadable PDFs, in-depth articles, forums, podcasts, online courses or subscription boxes.
Subscriptions can be set up using membership-builder plugins. There are many popular options you can install easily, such as:
- WooCommerce Memberships
- LearnDash
- MemberPress
- Restrict Content Pro
Most membership plugins offer guides and tools to regulate content access, create membership levels and integrate payment options.
Tip: Look for a membership plugin that can handle a growing audience, and that offers flexible membership options and pricing.
6. Create a newsletter
With the right email marketing strategy and a large enough email list, you could also create a profitable newsletter.
Creating a profitable newsletter involves some of the same strategies that monetizing your blog entails. For example, you could reach out to a brand your readers would be interested in and offer advertising space in your newsletter.
You could also do affiliate marketing: mention or recommend a particular product within the newsletter and add its tracking — or affiliate — links. This way you can receive a commission for every transaction your subscribers complete.
Tip: Add a newsletter signup to your blog to get readers’ email and consider using email marketing software, such as Constant Contact and Mailchimp, to manage and automate your newsletter.
How to start a blog and make money FAQ
The key is to build a strong social media presence and create high-quality content that users find relevant and helpful.
How to start a successful blog
If you want to start a successful blog, there are a few important steps to follow. First, buy a domain name. Second, get to know your potential audience and their needs. Third, create a content strategy around topics they want. Fourth, write compelling, high-quality content. Lastly, follow search engine optimization (SEO) best practices.
How much can you make blogging?
It all depends on your website's traffic and monetization strategy. New bloggers could make between $500 and $2,000 per month in their first year with the right strategies -- but don't expect to make a lot of money right off the bat. Give yourself time to increase your traffic, which will lead you to increased revenue. Basically, the more traffic you have, the more money you can make.
Summary of How to start a blog and make money
- You can make a full-time income from a successful blog, provided you have the right tools and strategies.
- Pick a topic that you’re both passionate and knowledgeable about. This will enhance credibility with audiences and positively impact your search engine ranking.
- Buy a domain name. Look for an ICANN-accredited domain registrar.
- Choose a web hosting provider. Consider the server’s uptime stats, response time, ease of use and customer support availability.
- Pick a blogging platform. The most popular services provide both free and paid options.
- Publish your blog. Keep in mind what potential readers are looking for and why, your blog’s purpose, and what competitor sites have already written about.
- Some popular methods to make money from a site include displaying ads, joining affiliate programs, creating newsletters and membership plans, creating and selling your own products and seeking out content sponsorships.
|
https://www.nasdaq.com/articles/how-to-start-a-blog-and-make-money
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
11
Serialization With JSON
This chapter covers how to serialize JSON data into model classes. A model class represents an object that your app can manipulate, create, save and search. An example is a recipe model class, which usually has a title, an ingredient list and steps to cook it.
You’ll continue with the previous project, which is the starter project for this chapter, and you’ll add a class that models a recipe and its properties. Then you’ll integrate that class into the existing project.
By the end of the chapter, you’ll know:
- How to serialize JSON into model classes.
- How to use Dart tools to automate the generation of model classes from JSON.
What is JSON?
JSON, which stands for JavaScript Object Notation, is an open-standard format used on the web and in mobile clients. It's the most widely used format for Representational State Transfer (REST)-based APIs that servers provide. If you talk to a server that has a REST API, it will most likely return data in a JSON format. An example of a JSON response looks something like this:
{ "recipe": { "uri": "", "label": "Chicken Vesuvio" } }
That is an example recipe response that contains two fields inside a recipe object.
While it’s possible to treat the JSON as just a long string and try to parse out the data, it’s much easier to use a package that already knows how to do that. Flutter has a built-in package for decoding JSON, but in this chapter, you’ll use the json_serializable and json_annotation packages to help make the process easier.
Flutter’s built-in dart:convert package contains methods like
json.decode and
json.encode, which convert a JSON string to a
Map<String, dynamic> and back. While this is a step ahead of manually parsing JSON, you’d still have to write extra code that takes that map and puts the values into a new class.
The json_serializable package comes in handy because it can generate model classes for you according to the annotations you provide via json_annotation. Before taking a look at automated serialization, you’ll see in the next section what manual serialization entails.
Writing the code yourself
So how do you go about writing code to serialize JSON yourself? Typical model classes have
toJson() and
fromJson() methods, so you’ll start with those.
class Recipe {
  final String uri;
  final String label;

  Recipe({this.uri, this.label});
}

factory Recipe.fromJson(Map<String, dynamic> json) {
  return Recipe(uri: json['uri'] as String, label: json['label'] as String);
}

Map<String, dynamic> toJson() {
  return <String, dynamic>{'uri': uri, 'label': label};
}
Automating JSON serialization
Open the starter project in the projects folder. You’ll use two packages in this chapter: json_annotation and json_serializable from Google.
Adding the necessary dependencies
Add the following package to pubspec.yaml in the Flutter
dependencies section underneath and aligned with
flutter_statusbarcolor: ^0.2.3:
json_annotation: ^3.1.0
build_runner: ^1.10.0
json_serializable: ^3.5.0
dependencies:
  flutter:
    sdk: flutter
  cupertino_icons: ^1.0.0
  cached_network_image: ^2.3.2+1
  flutter_slidable: ^0.5.7
  flutter_svg: ^0.19.0
  shared_preferences: ">=0.5.8 <2.0.0"
  flutter_statusbarcolor: ^0.2.3
  json_annotation: ^3.1.0

dev_dependencies:
  flutter_test:
    sdk: flutter
  build_runner: ^1.10.0
  json_serializable: ^3.5.0
Generating classes from JSON
The JSON that you’re trying to serialize looks something like:
{
  "q": "pasta",
  "from": 0,
  "to": 10,
  "more": true,
  "count": 33060,
  "hits": [
    {
      "recipe": {
        "uri": "",
        "label": "Pasta Frittata Recipe",
        "image": "",
        "source": "Food Republic",
        "url": ""
      }
    }
  ]
}
Creating model classes
Start by creating a new directory named network in the lib folder. Inside this folder, create a new file named recipe_model.dart. Then add the needed imports:
import 'package:flutter/foundation.dart';
import 'package:json_annotation/json_annotation.dart';

part 'recipe_model.g.dart';
@JsonSerializable()
class APIRecipeQuery {
}
final bool nullable;

/// Creates a new [JsonSerializable] instance.
const JsonSerializable({
  this.anyMap,
  this.checked,
  this.createFactory,
  this.createToJson,
  this.disallowUnrecognizedKeys,
  this.explicitToJson,
  this.fieldRename,
  this.ignoreUnannotated,
  this.includeIfNull,
  this.nullable,
  this.genericArgumentFactories,
});
Converting to and from JSON
Now, return to recipe_model.dart and add these methods for JSON conversion within the
APIRecipeQuery class:
factory APIRecipeQuery.fromJson(Map<String, dynamic> json) =>
    _$APIRecipeQueryFromJson(json);

Map<String, dynamic> toJson() => _$APIRecipeQueryToJson(this);
@JsonKey(name: 'q')
String query;
int from;
int to;
bool more;
int count;
List<APIHits> hits;
APIRecipeQuery({
  @required this.query,
  @required this.from,
  @required this.to,
  @required this.more,
  @required this.count,
  @required this.hits,
});
// 1
@JsonSerializable()
class APIHits {
  // 2
  APIRecipe recipe;

  // 3
  APIHits({
    @required this.recipe,
  });

  // 4
  factory APIHits.fromJson(Map<String, dynamic> json) =>
      _$APIHitsFromJson(json);

  Map<String, dynamic> toJson() => _$APIHitsToJson(this);
}
@JsonSerializable()
class APIRecipe {
  // 1
  String label;
  String image;
  String url;
  // 2
  List<APIIngredients> ingredients;
  double calories;
  double totalWeight;
  double totalTime;

  APIRecipe({
    @required this.label,
    @required this.image,
    @required this.url,
    @required this.ingredients,
    @required this.calories,
    @required this.totalWeight,
    @required this.totalTime,
  });

  // 3
  factory APIRecipe.fromJson(Map<String, dynamic> json) =>
      _$APIRecipeFromJson(json);

  Map<String, dynamic> toJson() => _$APIRecipeToJson(this);
}

// 4
String getCalories(double calories) {
  if (calories == null) {
    return '0 KCAL';
  }
  return calories.floor().toString() + ' KCAL';
}

// 5
String getWeight(double weight) {
  if (weight == null) {
    return '0g';
  }
  return weight.floor().toString() + 'g';
}
@JsonSerializable()
class APIIngredients {
  // 1
  @JsonKey(name: 'text')
  String name;
  double weight;

  APIIngredients({
    @required this.name,
    @required this.weight,
  });

  // 2
  factory APIIngredients.fromJson(Map<String, dynamic> json) =>
      _$APIIngredientsFromJson(json);

  Map<String, dynamic> toJson() => _$APIIngredientsToJson(this);
}
Generating the .part file
Open the terminal in Android Studio by clicking on the panel in the lower left, or by selecting View ▸ Tool Windows ▸ Terminal, and type:
flutter pub run build_runner build
Precompiling executable...
Precompiled build_runner:build_runner.
[INFO] Generating build script...
...
[INFO] Creating build script snapshot......
...
[INFO] Running build...
...
[INFO] Succeeded after ...
flutter pub run build_runner watch
// 1
APIRecipeQuery _$APIRecipeQueryFromJson(Map<String, dynamic> json) {
  return APIRecipeQuery(
    // 2
    query: json['q'] as String,
    // 3
    from: json['from'] as int,
    to: json['to'] as int,
    more: json['more'] as bool,
    count: json['count'] as int,
    // 4
    hits: (json['hits'] as List)
        ?.map((e) =>
            e == null ? null : APIHits.fromJson(e as Map<String, dynamic>))
        ?.toList(),
  );
}
Testing the generated JSON code
Now that you have the ability to parse model objects from JSON, you’ll read one of the JSON files included in the starter project and show one card to make sure you can use the generated code.
import 'dart:convert';
import '../../network/recipe_model.dart';
import 'package:flutter/services.dart';
import '../recipe_card.dart';
APIRecipeQuery _currentRecipes1;
Future loadRecipes() async {
  // 1
  final jsonString = await rootBundle.loadString('assets/recipes1.json');
  setState(() {
    // 2
    _currentRecipes1 = APIRecipeQuery.fromJson(jsonDecode(jsonString));
  });
}
@override
void initState() {
  super.initState();
  loadRecipes();
  // ... rest of method
}
import 'recipe_details.dart';
Widget _buildRecipeCard(BuildContext context, List<APIHits> hits, int index) {
  // 1
  final recipe = hits[index].recipe;
  return GestureDetector(
    onTap: () {
      Navigator.push(context, MaterialPageRoute(
        builder: (context) {
          return const RecipeDetails();
        },
      ));
    },
    // 2
    child: recipeStringCard(recipe.image, recipe.label),
  );
}
Widget _buildRecipeLoader(BuildContext context) {
  // 1
  if (_currentRecipes1 == null || _currentRecipes1.hits == null) {
    return Container();
  }
  // Show a loading indicator while waiting for the movies
  return Center(
    // 2
    child: _buildRecipeCard(context, _currentRecipes1.hits, 0),
  );
}
Key points
- JSON is an open-standard format used on the web and in mobile clients, especially with REST APIs.
- In mobile apps, JSON code is usually parsed into the model objects that your app will work with.
- You can write JSON parsing code yourself, but it’s usually easier to let a JSON package generate the parsing code for you.
- json_annotation and json_serializable are packages that will let you generate the parsing code.
Where to go from here?
In this chapter, you’ve learned how to create models that you can parse from JSON and then use when you fetch JSON data from the network. If you want to learn more about json_serializable, go to.
|
https://www.raywenderlich.com/books/flutter-apprentice/v1.0.ea3/chapters/11-serialization-with-json
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Bulk AMA with Stone DeFi
Hello guys, today we would like to welcome Vincent Khoo — Stone DeFi Marketing Lead.
Hi Vincent! It’s great you have joined us, we are looking forward to the talk.
Hello Everyone.
Could you tell us about yourself and give us your team’s introduction? What is the idea behind Stone Defi?
Hi. My name is Vincent, Marketing Lead for Stone; everyone can call me VK. Before I go into the topic of Stone, please allow me to give a brief introduction. I was in the traditional finance space before crypto. In late 2016 and early 2017 I started my crypto journey, and I was with a crypto fund named Chain Capital from 2017 until 2019. During my time with the crypto fund, I studied and did due diligence on many kinds of projects. We invested in a number of good projects, and some poorly performing ones too. I was also involved in project incubation from the beginning through to exchange listing, as well as post-investment management.
After a few years in a crypto fund, I stepped down from my position as a Business officer and started my own Fintech Advisory startup in Singapore & Malaysia as well as fund management for the secondary market.
During summer 2020, we found that DeFi was something fresh and revolutionary. We started to put funds into DeFi protocols and enjoyed the first-mover rewards. Soon we found there were significant problems in the market, like high APY volatility, security issues, gas fee problems, and so on. That's when my team and I came up with the idea of Stone, which focuses on bringing "Rock Solid Yield" to DeFi users in the market. Stone is also looking to provide more innovative products based on a wide range of yield-bearing assets to users across multiple blockchains.
Your slogan is Rock Solid Yield. Could you explain what it means?
Our logo is a hollow S on a stripped stone pattern. We propose the following SOLID principles for Stone:
S for Stable returns: Manage risks and rewards to achieve stable returns. DeFi is complex and looking at only indicative APY creates more tears than happiness. A paradigm shift in yield philosophy is to consider both risk, return (and the sustainability of such return), a principle widely used in the traditional financial industry.
O for Open collaboration: Work with as many community members as possible to source the best ideas. Ensure the right incentive model is in place to reward contributors starting from day one. Stone is flexible so any projects can be connected with it as well. Stone protocol (including strategies) will be open-sourced for transparency. This also allows communities and partners to contribute to the protocol development easily.
L for Long term development: Establish commitment and an inclusive culture to get more contributors along the way with the right incentive system.
I for Incremental deployment: Make incremental improvements with extensive testing and constantly learning from other projects. DeFi is a nascent industry requiring a large number of trial and error.
D for DAO driven: Provide a clear roadmap towards a DAO governed protocol. We acknowledge that at the beginning a committed small committee is more practical during bootstrap and a fully decentralized organization takes time. Stone shall engage the community to discuss a plan from day 1 and ensure sufficient fundings (tokens) are reserved for the DAO to manage in the future.
You recently completed your integration with Polygon/Matic. What opportunities did it open up for Matic users to farm?
Well, this is the highlight for tonight. We are glad to integrate with Polygon (formerly Matic Network), a Layer-2 scaling solution with payment and lending solutions, atomic swaps, and improved dApp and DEX performance. The link is as follows:
As Polygon is more open and robust primarily in terms of the types of architecture it can support, our newly launched product on Polygon allows Stone users to benefit from the yield income opportunities. This is because Polygon is built on Ethereum. So, it incorporates any scaling or infrastructure solution from the Ethereum ecosystem. Polygon fully adopts the Ethereum ethos of open innovation and has designed Polygon with the same goals in mind.
From the above, we can see that our new integration with Polygon addresses common pain points in the DeFi world: high gas fees, scalability, and the user experience as a whole. Stone aims to be the best yield aggregation platform, allowing PoS assets to flow and transact between chains, as we truly believe there will be a multi-chain world in the long run, where communication between chains has low or even zero friction.
Stone also wishes to provide the best user experience to all our users. For example, the experience people get on CeFi today should one day be achievable in the DeFi world, and this is very significant in creating "Rock Solid Yield" for all users in the DeFi ecosystem. To kickstart this vision, Matic is just another new journey for us, and we will continue to expand our public chain coverage and deliver the best product for our users.
DeFi is evolving very fast and the risk of things breaking is high. What do you think about this, and can you be sure that Stone's products will be in demand in the long term? What is Stone planning to contribute to DeFi's growth?
What we have observed in the DeFi space is that many projects use unsustainable yields to attract TVL deposits into their protocols. Unfortunately, this always results in wildly fluctuating token prices for holders and eventually depresses the value of the protocol tokens. STONE focuses on creating long-term sustainable yield strategies that are reliable and allow our token holders to sleep well at night, knowing that the STONE protocol is powering higher investment alpha with properly balanced risk/reward outcomes. Hence the promise of Rock Solid Yield.
Next, as a key differentiator, STONE will be launching innovative and unique yield strategies, allowing for decentralized fund creations and asset deployments. Currently, yield aggregators available in the market rely primarily on lending and liquidity provision to generate yield. While Stone will have strategies in this space, our strategies will also address two major markets — liquid staking strategies and data yield strategies that are untouched to date. In particular, if we look at the staking market cap, it is well over US$120B and is a massive global market.
Many projects are currently slowed down in development due to the situation in the market, do you have delays in drafting on the roadmap?
Due to the market conditions, we can see that the majority of projects have slowed down a lot, trading volume on exchanges has dropped at least 50%, and people are losing interest or confidence in the crypto market.
Although market interest is low, our team is still working as usual, and we continue with PR events like AMAs and other activities. Internally, we are focusing most on the product and tech side. This is the best period for us to prepare and improve the product. Everything on the roadmap is still on track to be completed.
How do you keep your customers’ assets safe from hackers? How do you manage if there is a cyber attack on your platform that will infringe on user privacy? Is StoneDefi protected and ready to deal with this issue?
For Stone, the audit is only one part, and we are also very careful about the strategies. We are glad that we have passed the audit by PeckShield, and in addition we used our own funds to test before opening to the public. We fully agree with the idea of using smart contracts to control fund flows and to set authority clearly.
Besides, we understand there have been many exploits even after audits, so we opt to take a more careful approach to product releases. We have been releasing more stable versions, like the alpha test, and we continuously engage with third-party security services and other developers to enhance product safety. Again, to emphasize: we are not pushing all functions out at once, but making sure we get each release right.
We do not blindly trust code audits. Our approach is to launch features one by one and have checkpoints to test things out in a real environment; that's what the alpha version is for. For future features, we will do additional audits for security.
As TVL grows, and with more feedback from the community, our team is also gaining experience and improving the process. We have hired more external experts to stress test our platform. All this is to ensure security, and we hope the community and users will participate as part of us and go far with us.
As you are going to launch products on substrate, are you also planning to apply for a parachain slot somewhere in the future? Or integrate with solutions like Bifrost?
First of all, we all know that parachain slots on Polkadot are limited, so projects need to bid for them. But we also know that a slot does not simply go to whoever locks the most DOT on-chain, which is why strategy matters. For a project, the significance of the slot auction is to obtain the right to use a limited resource. Stone does not need to participate in the slot auction at this stage; if we really need one, we can also use a parachain in a leased way.
But in the future, we will help more high-quality projects in the DOT ecosystem to participate in the slot auctions, because we have a lot of KSM (5 nodes) and DOT (about 4.5 million votes) in our hands.
Therefore, our strategy is to first support other projects that are more in need. When Stone needs an auction slot at a certain moment, we hope to get the support of other project parties and ticket holders too.
What is StoneDefi revenue model? In which ways do you generate revenue/profit? So many projects just like to speak about the “long term vision and mission” but what are your short term objectives? What are you focusing on right now?
Our ultimate focus is to ensure user funds are safe and yield as rock-solid as possible. We are looking at layer 2 and also building on substrate to leverage the cross-chain capabilities and low fees in the future.
We know that few things matter more in the DeFi space than new technical advancements that can lower gas fees.
Meanwhile, Stone aggregates capital deployment in tranches and monitors gas fees. We also plan to compensate part of the gas costs from Stone's business income in the future. Therefore, as TVL on Stone grows, gas fees will effectively be lowered for every user.
And we are a platform for liquid-staked assets. We will work together with more chains in the future, creating more use cases and providing more yield for their staked assets. That is also part of the income that sustains the project.
What is the competitive advantage of StoneDefi? What do you have over other competitors? About features such as security, scalability, community development,… do you think you’ve finished or need to continue to develop?
Well, STONE won’t encourage users to go into new protocols because the APY is high, but we will assess its credibility and how sustainable that APY would be.
In STONE, we introduce an index to hedge the risks of single assets, and STONE will be able to deploy the underlying assets to generate additional yield for the index holder.
In simple terms, the Sharpe ratio is the tool for risk-adjusted allocations. It doesn’t simply focus on APY but overall risk and rewards assessment. This is the way we provide the “Rock Solid Yield”.
In our Litepaper, there are mathematical explanations of Sharpe Ratio and Portfolio Rebalancing. This enables investors to examine the overall risk-adjusted return of a portfolio or an asset. In fact, it has been widely used in the traditional financial markets.
Therefore, we strongly believe that investors will tend to prefer stable, solid returns over a highly volatile APY. Of course, there are people who like to take risks, but we all know that's not a long-term, relaxing game.
Another key differentiator is our cross-chain yield strategies. Typically, yield aggregators only reside on a single chain. STONE’s yield strategies allow for cross-chain asset deployment that can help users maximize their returns on a global, multi-chain portfolio level.
Lastly, Stone aims to build the most open and collaborative community culture in the space. The reason we are launching the community development before the product launch and token issuance is that we want community members to make an impact since day 1, and to be part of Stone’s growth journey together. We will launch a committee first to make collective decisions, providing options for our community to choose from. We will subsequently decentralize into a DAO model and pass on full control to the community. We will soon provide an explanation of Stone tokenomics, the general idea is that besides tokens for yield farming, the largest reserve is for the DAO and contributors to the product. Stone, at its core, will be a project for the community.
Currently, most projects and platforms are in English. How will you reach non-English local communities? Do you have any plan for them to better understand your project?
Our current expansion focus is on markets like Korea and China, as they have a large number of players in the crypto space. However, due to regulation in these two countries, we need to work smart and plan carefully in order to penetrate them. We are also looking for ambassadors in non-English-speaking communities to help us expand.
Thank you, Vincent, I’m out of questions now. It’s been great talking to you!
|
https://crypto-bulk-intl.medium.com/bulk-ama-with-stone-defi-747f4d2c2515?readmore=1&source=user_profile---------2----------------------------
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Using the Populate Utility
InterSystems IRIS® includes a utility for creating pseudo-random test data for persistent classes. The creation of such data is known as data population; the utility for doing this, known as the InterSystems IRIS populate utility, is useful for testing persistent classes before deploying them within a real application. It is especially helpful when testing how various parts of an application will function when working against a large set of data.
The populate utility takes its name from its principal element — the %Populate class, which is part of the InterSystems IRIS class library. Classes that inherit from %Populate contain a method called Populate(), which allows you to generate and save class instances containing valid data. You can also customize the behavior of the %Populate class to provide data for your needs.
Along with the %Populate class, the populate utility uses %PopulateUtils. %Populate provides the interface to the utility, while %PopulateUtils is a helper class.
Note that the Samples-Data sample uses the populate utility. InterSystems recommends that you create a dedicated namespace called SAMPLES (for example) and load samples into that namespace. For the general process, see Downloading Samples for Use with InterSystems IRIS.
Data Population Basics
To use the populate utility, do the following:
Modify each persistent and each serial class that you want to populate with data. Specifically, add %Populate to the end of the list of superclasses, so that the class inherits the interface methods. For example, if a class inherits directly from %Persistent, its new superclass list would be:
Class MyApp.MyClass Extends (%Persistent,%Populate) {}
Do not use %Populate as a primary superclass; that is, do not list it as the first class in the superclass list.
Or, when using the New Class Wizard within Studio, check Data Population on the last screen. This is equivalent to adding the %Populate class to the superclass list.
In those classes, optionally specify the POPSPEC and POPORDER parameters of each property, to control how the populate utility generates data for those properties, if you want to generate custom data rather than the default data, which is described in the next section.
Later sections of this appendix provide information on these parameters.
Recompile the classes.
To generate the data, call the Populate() method of each persistent class. By default, this method generates 10 records for the class (including any serial objects that it references):
Do ##class(MyApp.MyClass).Populate()
If you prefer, you can specify the number of objects to create:
Do ##class(MyApp.MyClass).Populate(num)
where num is the number of objects that you want.
Do this in the same order in which you would add records manually for the classes. That is, if Class A has a property that refers to Class B, populate Class B first.
Later, to remove the generated data, use either the %DeleteExtent() method (safe) or the %KillExtent() method (fast) of the persistent interface. For more information, see “Deleting Saved Objects” in the chapter “Working with Persistent Objects.”
In practice, it is often necessary to populate classes repeatedly, as you make changes to your code. Thus it is useful to write a method or a routine to populate classes in the correct order, as well as to remove the generated data.
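For example, a minimal sketch of such a method might look like the following; the class names MyApp.Company and MyApp.Employee and the method name BuildTestData are hypothetical, and it assumes Employee refers to Company, so Company is populated first:
ClassMethod BuildTestData(count As %Integer = 100) As %Status
{
    // remove any previously generated data (safe, but slower than %KillExtent())
    Do ##class(MyApp.Employee).%DeleteExtent()
    Do ##class(MyApp.Company).%DeleteExtent()
    // populate the independent class before the class that references it
    Do ##class(MyApp.Company).Populate(count)
    Do ##class(MyApp.Employee).Populate(count * 10)
    Quit $$$OK
}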
Populate() Details
Formally, the Populate() class method has the following signature:
classmethod Populate(count As %Integer = 10, verbose As %Integer = 0, DeferIndices As %Integer = 1, ByRef objects As %Integer = 0, tune As %Integer = 1, deterministic As %Integer = 0) as %Integer
Where:
count is the desired number of objects to create.
verbose specifies whether the method should print progress messages to the current device.
DeferIndices specifies whether to sort indices after generating the data (true) or while generating the data.
objects, which is passed by reference, is an array that contains the generated objects.
tune specifies whether to run $SYSTEM.SQL.TuneTable() after generating the data. If this is 0, the method does not run $SYSTEM.SQL.TuneTable(). If this is 1 (the default), the method runs $SYSTEM.SQL.TuneTable() for this table. If this is any value higher than 1, the method runs $SYSTEM.SQL.TuneTable() for this table and for any tables projected by persistent superclasses of this class.
deterministic specifies whether to generate the same data each time you call the method. By default, the method generates different data each time you call it.
Populate() returns the number of objects actually populated:
Set objs = ##class(MyApp.MyClass).Populate(100)
// objs is set to the number of objects created.
// objs will be less than or equal to 100
In cases with defined constraints, such as a minimum or maximum length, some of the generated data may not pass validation, so that individual objects will not be saved. In these situations, Populate() may create fewer than the specified number of objects.
If errors prevent objects from being saved, and this occurs 1000 times sequentially with no successful saves, Populate() quits.
Default Behavior
This section describes how the Populate() method generates data, by default, for literal properties, collection properties, properties that refer to serial or persistent objects, and relationship properties, as covered in the following subsections.
The Populate() method ignores stream properties.
Literal Properties
This section describes how the Populate() method, by default, generates data for properties of the forms:
Property PropertyName as Type; Property PropertyName;
Where Type is a datatype class.
For these properties, the Populate() method first looks at the name. Some property names are handled specially, as follows:
If the property does not have one of the preceding names, then the Populate() method looks at the property type and generates suitable values. For example, if the property type is %String, the Populate() method generates random strings (respecting the MAXLEN parameter of the property). For another example, if the property type is %Integer, the Populate() method generates random integers (respecting the MINVAL and MAXVAL parameters of the property).
If the property does not have a type, InterSystems IRIS assumes that it is a string. This means that the Populate() method generates random strings for its values.
Exceptions
The Populate() method does not generate data for a property if the property is private, is multidimensional, is calculated, or has an initial expression.
Collection Properties
This section describes how the Populate() method, by default, generates data for properties of the forms:
Property PropertyName as List of Classname; Property PropertyName as Array of Classname;
For such properties:
If the referenced class is a data type class, the Populate() method generates a list or array (as suitable) of values, using the logic described earlier for data type classes.
If the referenced class is a serial object, the Populate() method generates a list or array (as suitable) of serial objects, using the logic described earlier for serial objects.
If the referenced class is a persistent class, the Populate() method performs a random sample of the extent of the referenced class, randomly selects values from that sample, and uses those to generate a list or array (as suitable).
Properties That Refer to Serial Objects
This section describes how the Populate() method, by default, generates data for properties of the form:
Property PropertyName as SerialObject;
Where SerialObject is a class that inherits from %SerialObject.
For such properties:
Properties That Refer to Persistent Objects
This section describes how the Populate() method, by default, generates data for properties of the following form:
Property PropertyName as PersistentObject;
Where PersistentObject is a class that inherits from %Persistent.
For such properties:
If the referenced class inherits from %Populate, the Populate() method performs a random sample of the extent of the referenced class and then randomly selects one value from that sample. Note that this means you must generate data for the referenced class first, or create data for the class in any other way.
If the referenced class does not inherit from %Populate, the Populate() method does not generate any values for the property.
For information on relationships, see the next section.
Relationship Properties
This section describes how the Populate() method, by default, generates data for properties of the following form:
Relationship PropertyName as PersistentObject;
Where PersistentObject is a class that inherits from %Persistent.
For such properties:
If the referenced class inherits from %Populate:
If the cardinality of the relationship is one or parent, then the Populate() method performs a random sample of the extent of the referenced class and then randomly selects one value from that sample. Note that this means you must generate data for the referenced class first, or create data for the class in any other way.
If the cardinality of the relationship is many or children, then the Populate() method ignores this property because the values for this property are not stored in the extent for this class.
If the referenced class does not inherit from %Populate, the Populate() method does not generate any values for the property.
Specifying the POPSPEC Parameter
For a given property in a class that extends %Populate, you can customize how the Populate() method generates data for that property. To do so, do the following:
Find or create a method that returns a random, but suitable, value for this property. The %PopulateUtils class provides a large set of such methods; see the Class Reference for details.
Specify the POPSPEC parameter for this property to refer to this method. The first subsection gives the details.
The POPSPEC parameter provides additional options for list and array properties, discussed in later subsections.
For a literal, non-collection property, another technique is to identify an SQL table column that contains values to use for this property, then specify the POPSPEC parameter to refer to that table and column; see the last subsection.
There is also a POPSPEC parameter defined at the class level that controls data population for an entire class. This is an older mechanism (included for compatibility) that is replaced by the property-specific POPSPEC parameter. This appendix does not discuss it further.
Specifying the POPSPEC Parameter for Non-Collection Properties
For a literal property that is not a collection, use one of the following variations:
POPSPEC="MethodName()" — In this case, Populate() invokes the class method MethodName*( of the %PopulateUtils
class.
POPSPEC=".MethodName()" — In this case, Populate() invokes the instance method MethodName() of the instance that is being generated.
POPSPEC="##class(ClassName).MethodName()" — In this case, Populate() invokes the class method MethodName() of the ClassName class.
For example:
Property HomeCity As %String(POPSPEC = "City()");
If you need to pass a string value as an argument to the given method, double the starting and closing quotation marks around that string. For example:
Property PName As %String(POPSPEC = "Name(""F"")");
Also, you can append a string to the value returned by the specified method. For example:
Property JrName As %String(POPSPEC = "Name()_"" jr."" ");
Notice that it is necessary to double the starting and closing quotation marks around that string. It is not possible to prepend a string, because the POPSPEC is assumed to start with a method.
Also see “Specifying the POPSPEC Parameter via an SQL Table” for a different approach.
Specifying the POPSPEC Parameter for List Properties
For a property that is a list of literals or objects, you can use the following variation:
POPSPEC="basicspec:MaxNo"
Where
basicspec is one of the basic variations shown in the preceding section. Leave basicspec empty if the property is a list of objects.
MaxNo is the maximum number of items in the list; the default is 10.
For example:
Property MyListProp As list Of %String(POPSPEC = ".MyInstanceMethod():15");
You can omit basicspec. For example:
Property Names As list of Name(POPSPEC=":3");
In the following examples, there are lists of several types of data. Colors is a list of strings, Kids is a list of references to persistent objects, and Addresses is a list of embedded objects:
Property Colors As list of %String(POPSPEC="ValueList("",Red,Green,Blue"")");
Property Kids As list of Person(POPSPEC=":5");
Property Addresses As list of Address(POPSPEC=":3");
To generate data for the Colors property, the Populate() method calls the ValueList() method of the PopulateUtils class. Notice that this example passes a comma-separated list as an argument to this method. For the Kids property, there is no specified method, which results in automatically generated references. For the Addresses property, the serial Address class inherits from %Populate and data is automatically populated for instances of the class.
Specifying the POPSPEC Parameter for Array Properties
For a property that is an array of literals or objects, you can use the following variation:
POPSPEC="basicspec:MaxNo:KeySpecMethod"
Where:
basicspec is one of the basic variations shown earlier. Leave basicspec empty if the property is an array of objects.
MaxNo is the maximum number of items in the array. The default is 10.
KeySpecMethod is the specification of the method that generates values to use for the keys of the array. The default is String(), which means that InterSystems IRIS invokes the String() method of %PopulateUtils.
The following examples show arrays of several types of data and different kinds of keys:
Property Tix As array of %Integer(POPSPEC="Integer():20:Date()");
Property Reviews As array of Review(POPSPEC=":3:Date()");
Property Actors As array of Actor(POPSPEC=":15:Name()");
The Tix property has its data generated using the Integer() method of the PopulateUtils class; its keys are generated using the Date() method of the PopulateUtils class. The Reviews property has no specified method, which results in automatically generated references, and has its keys also generated using the Date() method. The Actors property has no specified method, which results in automatically generated references, and has its keys generated using the Name() method of the PopulateUtils class.
Specifying the POPSPEC Parameter via an SQL Table
For POPSPEC, rather than specifying a method that returns a random value, you can specify an SQL table name and an SQL column name to use. If you do so, then the Populate() method constructs a dynamic query to return the distinct column values from that column of that table. For this variation of POPSPEC, use the following syntax:
POPSPEC=":MaxNo:KeySpecMethod:SampleCount:Schema_Table:ColumnName"
Where:
MaxNo and KeySpecMethod are optional and apply only to collection properties (see earlier the subsections on lists and arrays).
SampleCount is the number of distinct values to retrieve from the given column, to use as a starting point. If this is larger than the number of existing distinct values in that column, then all values are possibly used.
Schema_Table is the name of the table.
ColumnName is the name of the column.
For example:
Property P1 As %String(POPSPEC=":::100:Wasabi_Data.Outlet:Phone");
In this example, the property P1 receives a random value from a list of 100 phone numbers retrieved from the Wasabi_Data.Outlet table.
Basing One Generated Property on Another
In some cases, the set of suitable values for one property (A) might depend upon the existing value of another property (B). In such a case:
Create an instance method to generate values for property A. In this method, use instance variables to obtain the value of property B (and any other properties that should be considered). For example:
Method MyMethod() As %String
{
    if (i%MyBooleanProperty) {
        quit "abc"
    } else {
        quit "def"
    }
}
For more information on instance variables, see “i%PropertyName” in the chapter “Working with Registered Objects.”
Use this method in the POPSPEC parameter of the applicable property. See “Specifying the POPSPEC Parameter”, earlier in this appendix.
Specify the POPORDER parameter of any properties that must be populated in a specific order. This parameter should equal an integer. InterSystems IRIS populates properties with lower values of POPORDER before properties with higher values of POPORDER. For example:
Property Name As %String(POPORDER = 2, POPSPEC = ".MyNameMethod()"); Property Gender As %String(POPORDER = 1, VALUELIST = ",1,2");
How %Populate Works
This section describes how %Populate works internally. The %Populate class contains two method generators: Populate() and PopulateSerial(). Each persistent or serial class inheriting from %Populate has one or the other of these two methods included in it (as appropriate).
We will describe only the Populate method here. The Populate() method is a loop, which is repeated for each of the requested number of objects.
Inside the loop, the code:
Creates a new object
Sets values for its properties
Saves and closes the object
A simple property with no overriding POPSPEC parameter has a value generated using code with the form:
Set obj.Description = ##class(%PopulateUtils).String(50)
While using a library method from %PopulateUtils via a "Name:Name()" specification would generate:
Set obj.Name = ##class(%PopulateUtils).Name()
An embedded Home property might create code like:
Do obj.HomeSetObject(obj.Home.PopulateSerial())
The generator loops through all the properties of the class, and creates code for some of the properties, as follows:
It checks if the property is private, is calculated, is multidimensional, or has an initial expression. If any of these are true, the generator exits.
If the property has a POPSPEC override, the generator uses that and then exits.
If the property is a reference, on the first time through the loop, the generator builds a list of random IDs, takes one from the list, and then exits. For the subsequent passes, the generator simply takes an ID from the list and then exits.
If the property name is one of the specially handled names, the generator then uses the corresponding library method and then exits.
If the generator can generate code based on the property type, it does so and then exits.
Otherwise, the generator sets the property to an empty string.
Refer to the %PopulateUtils class for a list of available methods.
Custom Populate Actions and the OnPopulate() Method
For additional control over the generated data, you can define an OnPopulate() method. If an OnPopulate() method is defined, the Populate() method calls it for each object it generates, after assigning values to the properties but before the object is saved to disk.
Its signature is:
Method OnPopulate() As %Status
{
    // body of method here...
}
This is not a private method.
The method returns a %Status code, where a failure status causes the instance being populated to be discarded.
For example, if you have a stream property, Memo, and wish to assign a value to it when populating, you can provide an OnPopulate() method:
Method OnPopulate() As %Status
{
    Do ..Memo.Write("Default value")
    QUIT $$$OK
}
You can override this method in subclasses of %Library.Populate.
Alternative Approach: Creating a Utility Method
There is another way to use the methods of the %Populate and %PopulateUtils classes. Rather than using %Populate as a superclass, write a utility method that generates data for your classes.
In this code, for each class, iterate a desired number of times. In each iteration:
Create a new object.
Set each property using a suitable random (or nearly random) value.
To generate data for a property, call a method of %Populate or %PopulateUtils, or use your own method.
Save the object.
As with the standard approach, it is necessary to generate data for independent classes before generating it for the dependent classes.
Tips for Building Structure into the Data
In some cases, you might want to include certain values for only a percentage of the cases. You can use the $RANDOM function to do this. For example, use this function to define a method that returns true or false randomly, depending on a cutoff percentage that you provide as an argument. So, for example, it can return true 10% of the time or 75% of the time.
When you generate data for a property, you can use this method to determine whether or not to assign a value:
If ..RandomTrue(15) { set ..property="something" }
In the example shown here, approximately 15 percent of the records will have the given value for this property.
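A minimal sketch of such a method, using $RANDOM; the name RandomTrue matches the usage above, but the implementation here is an assumption:
Method RandomTrue(cutoff As %Integer) As %Boolean
{
    // $RANDOM(100) returns an integer from 0 to 99, so this
    // returns true roughly cutoff percent of the time
    Quit ($RANDOM(100) < cutoff)
}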
In other cases, you might need to simulate a distribution. To do so, set up and use a lottery system. For example, suppose that 1/4 of the values should be A, 1/4 of the values should be B, and 1/2 the values should be C. The logic for the lottery can go like this:
Choose an integer from 1 to 100, inclusive.
If the number is less than 25, return value A.
If the number is between 25 and 49, inclusive, return value B.
Otherwise, return value C.
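As a sketch, the lottery above could be written like this (the method name and return values are placeholders):
Method RandomCategory() As %String
{
    // choose an integer from 1 to 100, inclusive
    Set n = $RANDOM(100) + 1
    If (n < 25) Quit "A"    // roughly one quarter of the values
    If (n <= 49) Quit "B"   // roughly another quarter
    Quit "C"                // the remaining half
}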
|
https://docs.intersystems.com/healthconnectlatest/csp/docbook/stubcanonicalbaseurl/csp/docbook/DocBook.UI.Page.cls?KEY=GOBJ_populate
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Access cellular service strength?
Hi all,
I want to write a script that will repeatedly log my location and cell service strength so I can identify dead zones along my commute. Gaining access to location information seems simple enough, but I'm wondering if anyone can help me get access to cell strength.
Here's one article that shows how to manually access what I'm looking for, rsrp0.
Thanks for the help!
@kylenessen, try this, not documented API, and just returns 100 for me, could be blocked by Apple.
from objc_util import *
import ctypes

load_framework('CoreTelephony')
CTGetSignalStrength = c.CTGetSignalStrength
CTGetSignalStrength.restype = ctypes.c_int
CTGetSignalStrength.argtypes = []
print(CTGetSignalStrength())
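If the call does return real values on your device, a minimal sketch of a repeated logger could reuse that function; the file name and interval below are arbitrary, and location capture (for example via Pythonista's location module) is left out:
import csv, time, datetime

# assumes CTGetSignalStrength has been set up as in the snippet above
def log_signal(path='signal_log.csv', interval=30):
    with open(path, 'a', newline='') as f:
        writer = csv.writer(f)
        while True:
            # one row per sample: ISO timestamp, raw signal value
            writer.writerow([datetime.datetime.now().isoformat(), CTGetSignalStrength()])
            f.flush()
            time.sleep(interval)

log_signal()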
It's returning 100 for me as well :/
Will have to reconsider my project. Thank you for the code, though!
For any future readers, the app Sensorly seems to do exactly what I want. Unfortunately, development seems to have officially stopped.
|
https://forum.omz-software.com/topic/6008/access-cellular-service-strength/?
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Subject: [owner-abiword-user@abisource.com: BOUNCE abiword-user@abisource.com: Non-member submission from ["William Kreamer"
From: Sam TH (sam@uchicago.edu)
Date: Wed Mar 14 2001 - 09:16:03 CST
sam th --- sam@uchicago.edu ---
OpenPGP Key: CABD33FC ---
DeCSS:
Return-Path: <owner-abiword-user@abisource.com>
Delivered-To: abiword-user@abisource.com
Received: from 954access.net (mail.954access.net [216.235.105.251])
by parsons.abisource.com (Postfix) with ESMTP id 1CD4B13B8AF
for <abiword-user@abisource.com>; Wed, 14 Mar 2001 08:29:43 -0600 (CST)
Received: from default [216.235.99.60] by 954access.net
(SMTPD32-5.05) id A093570B0114; Wed, 14 Mar 2001 09:30:43 -0500
Message-ID: <001201c0ac92$dba68560$3c63ebd8@default>
From: "William Kreamer" <kreamer@954access.net>
To: "Paul Filiault" <pdf1234@worldnet.att.net>,
"Dan Stromberg" <strombrg@nis.acs.uci.edu>
Cc: "Bernard_REVET" <bmrevet@igr.fr>,
"Kevin Vajk" <kvajk@ricochet.net>, <abiword-user@abisource.com>
References: <20010312221422.208c096a.finnbakk@world-online.no> <Pine.LNX.4.30.0103121418090.25962-100000@sophia.localdomain> <20010312143105.K10093@seki.acs.uci.edu> <3AAE3F8A.519770B1@igr.fr> <20010313165237.N16335@seki.acs.uci.edu> <3AAEC513.D1D3CD78@worldnet.att.net>
Subject: Re: libgal.so.4 ATTENTION WITH RED HAT 7 gcc 2.96 compiler
Date: Wed, 14 Mar 2001 09:27:07 -0500
I intend to install Linux as a second OS in the near future, and I want AbiWord to be a cross-platform word processor.
From: "Paul Filiault" <pdf1234@worldnet.att.net>
To: "Dan Stromberg" <strombrg@nis.acs.uci.edu>
Cc: "Bernard_REVET" <bmrevet@igr.fr>; "Kevin Vajk" <kvajk@ricochet.net>;
<abiword-user@abisource.com>
Sent: Tuesday, March 13, 2001 20:10
Subject: Re: libgal.so.4 ATTENTION WITH RED HAT 7 gcc 2.96 compiler
have no
> major problems but would like to contact users that use it more heavily
that I.
>
> Dan Stromberg wrote:
>
> > On Tue, Mar 13, 2001 at 10:40:58AM -0500, Bernard_REVET wrote:
> > > Dear all
> > > If it was only Gnome for which you are going to have trouble would be
fine.
> > > The worst are the compilers which come with Red Hat 7 even the
upgrade
> > > versions . Being so in a bleeding edge gcc 2.96 is just not stable
and should
> > > not be used if you do not want to be with nighmares while compiling .
> > > Just replace it with 2.95.2.1
> > > For example you can get it from
> > >
> > >
> > > or wait for 3.0....
> > >
> > > Look at
> > >
> > >
> > > for extra information
> >
> > Actually if you patch your 2.96 it's a better compiler than 2.95.1.
> >
> > Practically everything that fails to build with 2.96, fails to build
> > because the code wasn't standards conformant, and the older gcc's
> > passed nonconformant code. Typically you just have to define one or
> > two preprocessor symbols to bring back the heavily populated
> > namespaces ("overpopulated", according to the standards), and you're
> > ok, but there are other places where standards compliance was improved
> > as well.
> >
> > And of course if you just hate progress, you can always use the "kgcc"
> > that comes with redhat 7.
> >
> > > I mentioned this point to Red Hat but did not get any answer
> >
> > They're probably tired of hearing it. How do you respond to hoardes
> > of people insisting that an improvement isn't an improvement? I'd
> > probably consider ignoring it too. Perhaps I should have this time
> > (I'll know I should have if this turns into a battle).
> >
> > BTW, the mesa modes in xscreensaver run MUCH faster on redhat 7 than
> > they did on 6.2. I haven't figured out why yet. It could even be
> > because of the improvements in the compiler's output, but perhaps it's
> > more to do with using XFree86 4.x with an 3d video card.
> >
> > --
> > Dan Stromberg UCI/NACS/DCS
> >
>
------------------------------------------------------------------------
> > Part 1.2Type: application/pgp-signature
>
>
> -----------------------------------------------
>
|
https://www.abisource.com/mailinglists/abiword-user/01/March/0071.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Raspberry Pi OS
Introduction
Raspberry Pi OS is a free operating system based on Debian, optimised for the Raspberry Pi hardware. To remove cached package files and free up space, you can run sudo apt clean (sudo apt-get clean in older releases of apt).
Upgrading from Previous Operating System Versions
The latest version of Raspberry Pi OS is based on Debian Bullseye. The previous version was based on Buster. If you want to perform an in-place upgrade from Buster to Bullseye (and you're aware of the risks), see the instructions in the forums.
Playing Audio and Video
The simplest way of playing audio and video on Raspberry Pi is to use the installed OMXPlayer application.
This is hardware accelerated, and can play back many popular audio and video file formats. OMXPlayer uses the OpenMAX (omx) hardware acceleration interface (API), which is the officially supported media API on Raspberry Pi. OMXPlayer was developed by the Kodi Project's Edgar Hucek.
The OMXPlayer Application
The simplest command line is omxplayer <name of media file>. The media file can be audio or video or both. For the examples below, we used an H264 video file that is included with the standard Raspberry Pi OS installation.
omxplayer /opt/vc/src/hello_pi/hello_video/test.h264
By default the audio is sent to the analog port. If you are using a HDMI-equipped display device with speakers, you need to tell omxplayer to send the audio signal over the HDMI link.
omxplayer --adev hdmi /opt/vc/src/hello_pi/hello_video/test.h264
When displaying video, the whole display will be used as output. You can specify which part of the display you want the video to be on using the window option.
omxplayer --win 0,0,640,480 /opt/vc/src/hello_pi/hello_video/test.h264
You can also specify which part of the video you want to be displayed: this is called a crop window. This portion of the video will be scaled up to match the display, unless you also use the window option.
omxplayer --crop 100,100,300,300 /opt/vc/src/hello_pi/hello_video/test.h264
If you are using the Raspberry Pi Touch Display, and you want to use it for video output, use the display option to specify which display to use.
n is 5 for HDMI, 4 for the touchscreen. With the Raspberry Pi 4 you have two options for HDMI output.
n is 2 for HDMI0 and 7 for HDMI1.
omxplayer --display n /opt/vc/src/hello_pi/hello_video/test.h264
How to Play Audio
How to Play Video.
An Example Video
A video sample of the animated film Big Buck Bunny is available on your Raspberry Pi. To play it
Options During Playback
There are a number of options available during playback, actioned by pressing the appropriate key. Not all options will be available on all files. The list of key bindings can be displayed using omxplayer --keys:
Playing in the Background
Using a USB webcam
Rather than using the Raspberry Pi camera module, you can use a standard USB webcam to take pictures and video on your Raspberry Pi.
First, install the fswebcam package:
sudo apt install fswebcam
If you are not using the default pi user account, you need to add your username to the video group, otherwise you will see 'permission denied' errors.
sudo usermod -a -G video <username>
To check that the user has been added to the group correctly, use the groups command.
The webcam used in this example has a resolution of 1280 x 720, so to specify the resolution I want the image to be taken at, use the -r flag:
fswebcam -r 1280x720 image2.jpg
This command will show the following information:
--- Opening /dev/video0... Trying source module v4l2... /dev/video0 opened. No input was specified, using the first. --- Capturing frame... Corrupt JPEG data: 1 extraneous bytes before marker 0xd5 Captured frame in 0.00 seconds. --- Processing captured image... Writing JPEG image to 'image2.jpg'.
The picture is now taken at the full resolution of the webcam, with the banner present.
Removing the Banner
Now add the --no-banner flag:
fswebcam -r 1280x720 --no-banner image3.jpg
which shows the following information:
--- 'image3.jpg'.
Now the picture is taken at full resolution with no banner.
Automating Image Capture
You can write a Bash script which takes a picture with the webcam. The script below saves the images in the /home/pi/webcam directory, so create the webcam subdirectory first with:
mkdir webcam
To create a script, open up your editor of choice and write the following example code:
#!/bin/bash
DATE=$(date +"%Y-%m-%d_%H%M")
fswebcam -r 1280x720 --no-banner /home/pi/webcam/$DATE.jpg
This script will take a picture and name the file with a timestamp. Say we saved it as webcam.sh, we would first make the file executable:
chmod +x webcam.sh
Then run with:
./webcam.sh
Which would run the commands in the file and give the usual output:
--- '/home/pi/webcam/2013-06-07_2338.jpg'.
Time-Lapse Captures
You can use cron to schedule taking a picture at a given interval, such as every minute to capture a time-lapse.
First, open the cron table for editing with crontab -e, then add the following line (using the script created above):
* * * * * /home/pi/webcam.sh 2>&1
Save and exit and you should see the message:
crontab: installing new crontab
Ensure your script does not save each picture taken with the same filename. This will overwrite the picture each time.
Useful Utilities
There are several useful command line utilities:
tvservice
tvservice is a command line application used to get and set information about the display, targeted mainly at HDMI video and audio.
Typing tvservice by itself will display a list of available command line options.
-o, --off
Powers off the display output.
A better option is to use the vcgencmd display_power option, as this will retain any framebuffers, so when the power is turned back on the display will be the returned to the previous power on state.
-e, --explicit="Group Mode Drive"
Power on the HDMI with the specified settings
Group can be one of CEA, DMT, CEA_3D_SBS, CEA_3D_TB, CEA_3D_FP, CEA_3D_FS.
Mode is one of the modes returned from the -m, --modes option.
Drive can be one of HDMI, DVI.
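For example, a plausible invocation (the mode number 4 is only an illustration; pick one reported by tvservice -m CEA on your display):
tvservice -e "CEA 4 HDMI"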
-c, --sdtvon="Mode Aspect [P]"
Power on the SDTV (composite output) with the specified mode, PAL or NTSC, and the specified aspect, 4:3, 14:9, 16:9. The optional P parameter can be used to specify progressive mode.
-m, --modes=Group
where Group is CEA or DMT.
Shows a list of display modes available in the specified group.
-s, --status
Shows the current settings for the display mode, including mode, resolution, and frequency.
-a, --audio
Shows the current settings for the audio mode, including channels, sample rate and sample size.
-d, --dumpid=filename
Save the current EDID to the specified filename. You can then use edidparser <filename> to display the data in a human readable form.
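For example, assuming the filename edid.dat:
tvservice -d edid.dat
edidparser edid.dat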
-j, --json
When used in combination with the --modes option, displays the mode information in JSON format.
vcgencmd
The vcgencmd tool is used to output information from the VideoCore GPU on the Raspberry Pi. You can find source code for the vcgencmd utility on Github.
To get a list of all commands which vcgencmd supports, use vcgencmd commands. Some useful commands and their required parameters are listed below.
vcos
The vcos command has two useful sub-commands:
version displays the build date and version of the firmware on the VideoCore
log status displays the error log status of the various VideoCore firmware areas
get_camera
Displays the enabled and detected state of the Raspberry Pi camera: 1 means yes, 0 means no. Whilst all firmware except cutdown versions support the camera, this support needs to be enabled by using raspi-config.
get_throttled
Returns the throttled state of the system. This is a bit pattern - a bit being set indicates the following meanings:
measure_temp
Returns the temperature of the SoC as measured by its internal temperature sensor; on Raspberry Pi 4, measure_temp pmic returns the temperature of the PMIC.
measure_clock [clock]
This returns the current frequency of the specified clock. The options are:
e.g. vcgencmd measure_clock arm
otp_dump
Displays the content of the OTP (one-time programmable) memory inside the SoC. These are 32 bit values, indexed from 8 to 64. See the OTP bits page for more details.
get_config [configuration item|int|str]
Display value of the configuration setting specified: alternatively, specify either int (integer) or str (string) to see all configuration items of the given type. For example:
vcgencmd get_config total_mem
returns the total memory on the device in megabytes.
get_mem type
Reports on the amount of memory addressable by the ARM and the GPU. To show the amount of ARM-addressable memory use vcgencmd get_mem arm; to show the amount of GPU-addressable memory use vcgencmd get_mem gpu. Note that on devices with more than 1GB of memory the arm parameter will always return 1GB minus the gpu memory value, since the GPU firmware is only aware of the first 1GB of memory. To get an accurate report of the total memory on the device, see the total_mem configuration item - see the get_config section above.
codec_enabled [type]
Reports whether the specified CODEC type is enabled. Possible options for type are AGIF, FLAC, H263, H264, MJPA, MJPB, MJPG, MPG2, MPG4, MVC0, PCM, THRA, VORB, VP6, VP8, WMV9, WVC1. Of these, MPG2 and WVC1 currently require a paid-for licence (see this config.txt section for more info), except on the Pi 4 and 400, where these hardware codecs are disabled in preference to software decoding, which requires no licence. Note that because the H.265 HW block on the Raspberry Pi 4 and 400 is not part of the VideoCore GPU, its status is not accessed via this command.
mem_oom
Displays statistics on any OOM (out of memory) events occurring in the VideoCore memory space.
hdmi_timings
Displays the current HDMI settings timings. See Video Config for details of the values returned.
display_power [0 | 1 | -1] [display]
Show current display power state, or set the display power state.
vcgencmd display_power 0 will turn off power to the current display.
vcgencmd display_power 1 will turn on power to the display. If no parameter is set, this will display the current power state. The final parameter is an optional display ID, as returned by tvservice -l or from the table below, which allows a specific display to be turned on or off.
Note that for the 7" Raspberry Pi Touch Display this simply turns the backlight on and off. The touch functionality continues to operate as normal.
vcgencmd display_power 0 7 will turn off power to display ID 7, which is HDMI 1 on a Raspberry Pi 4.
To determine if a specific display ID is on or off, use -1 as the first parameter.
vcgencmd display_power -1 7 will return 0 if display ID 7 is off, 1 if display ID 7 is on, or -1 if display ID 7 is in an unknown state, for example undetected.
vcdbg
vcdbg is an application to help with debugging the VideoCore GPU from Linux running on the ARM. It needs to be run as root. This application is mostly of use to Raspberry Pi engineers, although there are some commands that general users may find useful.
sudo vcdbg help will give a list of available commands.
log
Dumps logs from the specified subsystem. Possible options are:
e.g. To print out the current contents of the message log:
vcdbg log msg
reloc
Without any further parameters, lists the current status of the relocatable allocator. Use sudo vcdbg reloc small to list small allocations as well.
Use the subcommand sudo vcdbg reloc stats to list statistics for the relocatable allocator.
Python
Python is a powerful programming language that's easy to use (easy to read and write) and, with Raspberry Pi, lets you connect your project to the real world. Python syntax is clean, with an emphasis on readability, and uses standard English keywords.
Thonny
The easiest introduction to Python is through Thonny, a Python 3 development environment. You can open Thonny from the Desktop or applications menu.
Thonny gives you a REPL (Read-Evaluate-Print-Loop), which is a prompt you can enter Python commands into. Because it's a REPL, you even get the output of commands printed to the screen without using print(). You can use variables if you need to, but you can even use it like a calculator. For example:
>>> 1 + 2
3
>>> name = "Sarah"
>>> "Hello " + name
'Hello Sarah'
Python files in Thonny
To create a Python file in Thonny, click File > New and you'll be given a new, empty file to work in.
Using the Command Line
You can write a Python file in a standard editor, and run it as a Python script from the command line. Just navigate to the directory the file is saved in (use cd and ls for guidance) and run with python3, e.g. python3 hello.py.
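For instance, a minimal hello.py (the filename is just an example) could contain:
print("Hello from Raspberry Pi OS")
and running python3 hello.py prints that message to the terminal.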
Other Ways of Using Python
The standard built-in Python shell is accessed by typing python3 in the terminal. This shell is a prompt ready for Python commands to be entered, and you can use it in the same way as Thonny. Many Python modules are packaged in the Raspberry Pi OS archives and can be installed using apt, for example:
sudo apt update
sudo apt install python-picamera
This is a preferable method of installing, as it means that the modules you install can be kept up to date easily with the usual sudo apt update and sudo apt full-upgrade commands.
pip
Not all Python packages are available in the Raspberry Pi OS archives, and those that are can sometimes be out of date. If you can’t find a suitable version in the Raspberry Pi OS archives, you can install packages from the Python Package Index (known as PyPI).
To do so, install pip:
sudo apt install python3-pip
Then install Python packages (e.g. simplejson) with pip3: sudo pip3 install simplejson. Raspberry Pi OS is pre-configured to use piwheels for pip; read more on the piwheels project website.
GPIO and the 40-pin Header.
Any of the GPIO pins can be designated (in software) as an input or output pin and used for a wide range of purposes.
Voltages
Two 5V pins and two 3V3 pins are present on the board, as well as a number of ground pins (0V), which are unconfigurable. The remaining pins are all general purpose 3V3 pins, meaning outputs are set to 3V3 and inputs are 3V3-tolerant.
More
As well as simple input and output devices, the GPIO pins can be used with a variety of alternative functions, some are available on all pins, others on specific pins.
PWM (pulse-width modulation)
Software PWM available on all pins
Hardware PWM available on GPIO12, GPIO13, GPIO18, GPIO19
SPI
SPI0: MOSI (GPIO10); MISO (GPIO9); SCLK (GPIO11); CE0 (GPIO8), CE1 (GPIO7)
I2C
Data: (GPIO2); Clock (GPIO3)
EEPROM Data: (GPIO0); EEPROM Clock (GPIO1)
Serial
TX (GPIO14); RX (GPIO15)
GPIO pinout
A handy reference can be accessed on the Raspberry Pi by opening a terminal window and running the command pinout. This tool is provided by the GPIO Zero Python library, which is installed by default on the Raspberry Pi OS desktop image, but not on Raspberry Pi OS Lite.
For more details on the advanced capabilities of the GPIO pins see gadgetoid’s interactive pinout diagram.
Permissions
In order to use the GPIO ports your user must be a member of the gpio group. The pi user is a member by default; other users need to be added manually.
sudo usermod -a -G gpio <username>
GPIO in Python
Using the GPIO Zero library makes it easy to get started with controlling GPIO devices with Python. The library is comprehensively documented at gpiozero.readthedocs.io.
LED
To control an LED connected to GPIO17, you can use this code:
from gpiozero import LED
from time import sleep

led = LED(17)

while True:
    led.on()
    sleep(1)
    led.off()
    sleep(1)
Run this in an IDE like Thonny, and the LED will blink on and off repeatedly.
LED methods include on(), off(), toggle(), and blink().
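For example, a short sketch using blink() (pin 17 as above; the timings here are arbitrary):
from gpiozero import LED
from signal import pause

led = LED(17)
led.blink(on_time=0.5, off_time=0.5)  # blinks in a background thread
pause()  # keep the script running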
Button
To read the state of a button connected to GPIO2, you can use this code:
from gpiozero import Button
from time import sleep

button = Button(2)

while True:
    if button.is_pressed:
        print("Pressed")
    else:
        print("Released")
    sleep(1)
Button functionality includes the properties is_pressed and is_held; callbacks when_pressed, when_released, and when_held; and methods wait_for_press() and wait_for_release().
Button + LED
To connect the LED and button together, you can use this code:
from gpiozero import LED, Button

led = LED(17)
button = Button(2)

while True:
    if button.is_pressed:
        led.on()
    else:
        led.off()
Alternatively:
from gpiozero import LED, Button

led = LED(17)
button = Button(2)

while True:
    button.wait_for_press()
    led.on()
    button.wait_for_release()
    led.off()
or:
from gpiozero import LED, Button
from signal import pause

led = LED(17)
button = Button(2)

button.when_pressed = led.on
button.when_released = led.off

pause()  # keep the script running so the callbacks stay active
|
https://www.raspberrypi.com/documentation/computers/os.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Details
- Type:
Bug
- Status: Closed
- Priority:
P1: Critical
- Resolution: Done
- Affects Version/s: 5.11.1, 5.12.0 Beta 3
- Fix Version/s: 5.12.4, 5.14.0 Alpha
- Component/s: Widgets: Style Sheets
- Labels:
- Environment:Ubuntu 16.04 with GCC 64-bit
Windows 10 (1803) with MSVC 2017 32-bit
macOS Sierra 10.13.6 with clang
- Platform/s:
- Commits:21dcb96ddca357a6e8ace4b1c7252ec465e77727 (qt/qtbase/5.12)
Description
updateObjects() in qtbase/src/widgets/styles/qstylesheetstyle.cpp can cause a segmentation fault.
This happens because updateObjects processes a list of all children and grandchildren of an object. It iterates over each object and announces a StyleChange event for each one of them. If an object reacts to this StyleChange event by (among other things) deleting one of its children, the list that updateObjects received will end up with an invalid element, and because the loop will eventually reach that element, the program will crash.
This 25-line program will trigger the bug:
#include <QApplication>
#include <QLabel>
#include <QSplitter>
#include <QMainWindow>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    QMainWindow w;
    QSplitter* splitter1 = new QSplitter(w.centralWidget());
    QSplitter* splitter2 = new QSplitter;
    QSplitter* splitter3 = new QSplitter;
    splitter2->addWidget(splitter3);
    splitter2->setStyleSheet("a { b:c; }");
    QLabel *label = new QLabel;
    label->setTextFormat(Qt::RichText);
    splitter3->addWidget(label);
    label->setText("hey");
    splitter1->addWidget(splitter2);
    w.show();
    return a.exec();
}
In this code example splitter3's QSplitter::changeEvent() will execute. When that happens, a grandchild to the QLabel, a QTextFrame, will be deleted and replaced. That's element 0 in the list. At index 6 we have a pointer to the old QTextFrame, and that's what will crash the application.
Attachments
Issue Links
- resulted in
QTBUG-75361 Widget Styles Missing
- Closed
QTBUG-77006 [REG: 5.12.3->5.12.4]: Changing a stylesheet at runtime does not effect children that are not direct children of the widget being changed
- Closed
QTBUG-75810 Stylesheet does not propagate properly in some cases
- Closed
|
https://bugreports.qt.io/browse/QTBUG-69204?gerritIssueStatus=All
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
#include <itkFEMElement3DC0LinearTetrahedron.h>
4-noded, linear, C0 continuous finite element in 3D space.
The ordering of the nodes should be defined in the following order:
[ASCII diagram of the node ordering omitted; the recoverable coordinates place node 0 at (0,0,0), node 2 at (2,0,0), and node 3 at (1,0,1).]
This is an abstract class. Specific concrete implementations of this element must be combined with the physics component of the problem. This has already been done in the following classes:
Definition at line 62 of file itkFEMElement3DC0LinearTetrahedron.h.
Definition at line 71 of file itkFEMElement3DC0LinearTetrahedron.h.
Definition at line 70 of file itkFEMElement3DC0LinearTetrahedron.h.
Standard class typedefs.
Definition at line 67 of file itkFEMElement3DC0LinearTetrahedron.h.
Definition at line 69 of file itkFEMElement3DC0LinearTetrahedron.h.
Definition at line 68 of file itkFEMElement3DC0LinearTetrahedron.h.
Methods related to numeric integration
Definition at line 81 of file itkFEMElement3DC0LinearTetrahedron.h.
Get the Integration point and weight
Implements itk::fem::Element.
Convert from global to local coordinates
Implements itk::fem::Element.
Run-time type information (and related methods).
Reimplemented from itk::fem::ElementStd< 4, 3 >.
Reimplemented in itk::fem::Element3DMembrane< Element3DC0LinearTetrahedron >, itk::fem::Element3DC0LinearTetrahedronMembrane, itk::fem::Element3DStrain< Element3DC0LinearTetrahedron >, and itk::fem::Element3DC0LinearTetrahedronStrain.
Get the number of integration points
Implements itk::fem::Element.
Set the edge order and the points defining each edge
Implements itk::fem::Element.
Methods invoked by Print() to print information about the object including superclasses. Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes.
Reimplemented from itk::fem::ElementStd< 4, 3 >.
Reimplemented in itk::fem::Element3DMembrane< Element3DC0LinearTetrahedron >, itk::fem::Element3DStrain< Element3DC0LinearTetrahedron >, itk::fem::Element3DC0LinearTetrahedronMembrane, and itk::fem::Element3DC0LinearTetrahedronStrain.
Return the shape functions derivatives in the shapeD matrix
Implements itk::fem::Element.
Methods related to the geometry of an elementReturn the shape functions used to interpolate across the element
Implements itk::fem::Element.
|
https://itk.org/Doxygen48/html/classitk_1_1fem_1_1Element3DC0LinearTetrahedron.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
#include <itkImageAlgorithm.h>
A container of static functions which can operate on Images with Iterators.
These methods are modeled after the STL algorithms. They may use special optimization techniques to implement enhanced versions of the methods.
Definition at line 53 of file itkImageAlgorithm.h.
Definition at line 61 of file itkImageAlgorithm.h.
Definition at line 60 of file itkImageAlgorithm.h.
This generic function copies a region from one image to another. It may perform optimizations on the copy for efficiency.
This method performs the equivalent of the following:
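Roughly speaking (paraphrased here rather than quoted from the header), the equivalent is a plain pixel-by-pixel iterator loop over the two regions:

#include "itkImageRegionConstIterator.h"
#include "itkImageRegionIterator.h"

// A naive version of the copy that ImageAlgorithm::Copy may optimize.
template <typename TInputImage, typename TOutputImage>
void NaiveRegionCopy(const TInputImage * inImage, TOutputImage * outImage,
                     const typename TInputImage::RegionType & inRegion,
                     const typename TOutputImage::RegionType & outRegion)
{
  itk::ImageRegionConstIterator<TInputImage> it(inImage, inRegion);
  itk::ImageRegionIterator<TOutputImage>     ot(outImage, outRegion);

  while (!it.IsAtEnd())
  {
    // Pixel-wise copy; the real implementation may dispatch to a bulk copy
    // when pixel types and memory layout allow it.
    ot.Set(static_cast<typename TOutputImage::PixelType>(it.Get()));
    ++ot;
    ++it;
  }
}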
Definition at line 86 of file itkImageAlgorithm.h.
References DispatchedCopy().
Function to dispatch to std::copy or std::transform.
Definition at line 204 of file itkImageAlgorithm.h.
this is the reference image iterator implementation
Sets the output region to the smallest region of the output image that fully contains the physical space covered by the input region of the input image.
|
https://itk.org/Doxygen48/html/structitk_1_1ImageAlgorithm.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
5
Functions
Written by Jonathan Sande
Each week, there are tasks that you repeat over and over: eat breakfast, brush your teeth, write your name, read books about Dart, and so on. Each of those tasks can be divided up into smaller tasks. Brushing your teeth, for example, includes putting toothpaste on the brush, brushing each tooth and rinsing your mouth out with water.
The same idea exists in computer programming. A function is one small task, or sometimes a collection of several smaller, related tasks that you can use in conjunction with other functions to accomplish a larger task.
In this chapter, you’ll learn how to write functions in Dart. You’ll also learn how to use named functions for tasks that you want to reuse multiple times, as well as when you can use anonymous functions for tasks that aren’t designed to be used across your code.
Function basics
You can think of functions like machines in a factory; they take something you provide to them (the input), and produce something different (the output).
There are many examples of this even in daily life. With an apple juicer, you put in apples and you get out apple juice. The input is apples; the output is juice. A dishwasher is another example. The input is dirty dishes, and the output is clean dishes. Blenders, coffee makers, microwaves and ovens are all like real-world functions that accept an input and produce an output.
Don’t repeat yourself
Assume you have a small, useful piece of code that you’ve repeated in multiple places throughout your program:
// one place
if (fruit == 'banana') {
  peelBanana();
  eatBanana();
}
// another place
if (fruit == 'banana') {
  peelBanana();
  eatBanana();
}
// some other place
if (fruit == 'banana') {
  peelBanana();
  eatBanana();
}
Anatomy of a Dart function
In Dart, a function consists of a return type, a name, a parameter list in parentheses and a body enclosed in braces.
void main() {
  const input = 12;
  final output = compliment(input);
  print(output);
}

String compliment(int number) {
  return '$number is a very nice number.';
}
12 is a very nice number.
More about parameters
Parameters are incredibly flexible in Dart, so they deserve their own section here.
Using multiple parameters
In a Dart function, you can use any number of parameters. If you have more than one parameter for your function, simply separate them with commas. Here’s a function with two parameters:
void helloPersonAndPet(String member, String pet) {
  print('Hello, $member, and your furry friend, $pet!');
}
helloPersonAndPet('Chris', 'Fluffy'); // Hello, Chris, and your furry friend, Fluffy!
Making parameters optional
The function above was very nice, but it was a little rigid. For example, try the following:
helloPersonAndPet();
2 positional argument(s) expected, but 0 found.
String fullName(String first, String last, String title) { return '$title $first $last'; }
String fullName(String first, String last, [String title]) {
  if (title != null) {
    return '$title $first $last';
  } else {
    return '$first $last';
  }
}

print(fullName('Ray', 'Wenderlich'));
print(fullName('Albert', 'Einstein', 'Professor'));

Ray Wenderlich
Professor Albert Einstein
Providing default values
In the example above, you saw that the default value for an optional parameter was null. This isn't always the best default value, though. That's why Dart also gives you the power to change the default value of any parameter in your function, by using the assignment operator.
bool withinTolerance(int value, [int min = 0, int max = 10]) { return min <= value && value <= max; }
withinTolerance(5) // true
withinTolerance(15) // false
withinTolerance(9, 7, 11) // true
withinTolerance(9, 7) // true
Naming parameters
Dart allows you to use named parameters to make the meaning of the parameters more clear in function calls.
bool withinTolerance({int value, int min = 0, int max = 10}) { return min <= value && value <= max; }
withinTolerance(value: 9, min: 7, max: 11) // true
withinTolerance(value: 9, min: 7, max: 11) // true
withinTolerance(min: 7, value: 9, max: 11) // true
withinTolerance(max: 11, min: 7, value: 9) // true

withinTolerance(value: 5) // true
withinTolerance(value: 15) // false
withinTolerance(value: 5, min: 7) // false
withinTolerance(value: 15, max: 20) // true
Making named parameters required
The fact that named parameters are optional by default causes a problem, though. A person might look at your function declaration, assume that all of the parameters to the function are optional, and call your function like so:
print(withinTolerance());
NoSuchMethodError: The method '>' was called on null.
import 'package:meta/meta.dart';
pub get
bool withinTolerance({
  @required int value,
  int min = 0,
  int max = 10,
}) {
  return min <= value && value <= max;
}
Writing good functions
People have been writing code for decades. Along the way, they’ve designed some good practices to improve code quality and prevent errors. One of those practices is writing DRY code as you saw earlier. Here are a few more things to pay attention to as you learn about writing good functions.
Avoiding side effects
When you take medicine to cure a medical problem, but that medicine makes you fat, that’s known as a side effect. If you put some bread in a toaster to make toast, but the toaster burns your house down, that’s also a side effect. Not all side effects are bad, though. If you take a business trip to Paris, you also get to see the Eiffel Tower. Magnifique!
void hello() { print('Hello!'); }
String hello() { return 'Hello!'; }
var myPreciousData = 5782;

String anInnocentLookingFunction(String name) {
  myPreciousData = -1;
  return 'Hello, $name. Heh, heh, heh.';
}
Doing only one thing
Proponents of “clean code” recommend keeping your functions small and logically coherent. Small here means only a handful of lines of code. If a function is too big, or contains unrelated parts, consider breaking it into smaller functions.
Choosing good names
You should always give your functions names that describe exactly what they do. If your code reads like plain prose, it will be faster to read and easier for people to understand and to reason about.
Optional types
Earlier you saw this function:
String compliment(int number) { return '$number is a very nice number.'; }
compliment(number) { return '$number is a very nice number.'; }
dynamic compliment(dynamic number) { return '$number is a very nice number.'; }
Mini-exercises
- Write a function named youAreWonderful, with a String parameter called name. It should return a string using name, and say something like "You're wonderful, Bob."
- Add another int parameter to that function called numberPeople so that the function returns something like "You're wonderful, Bob. 10 people think so." Make both inputs named parameters.
- Make name required and set numberPeople to have a default of 30.
Anonymous functions
All the functions you’ve seen previously in this chapter, such as
main,
hello, and
withinTolerance are named functions, which means, well, they have a name.
First-class citizens
In Dart, functions are first-class citizens. That means you can treat them like any other type, assigning functions as values to variables and even passing functions around as parameters or returning them from other functions.
Assigning functions to variables
When assigning a value to a variable, functions behave just like other types:
int number = 4;
String greeting = 'hello';
bool isHungry = true;

Function multiply = (int a, int b) {
  return a * b;
};
Function myFunction = int multiply(int a, int b) { return a * b; };
Function expressions can't be named.
Passing functions to functions
Just as you can write a function to take int or String as a parameter, you can also have Function as a parameter:
void namedFunction(Function anonymousFunction) {
  // function body
}
Returning functions from functions
Just as you can pass in functions as input parameters, you can also return them as output:
Function namedFunction() {
  return () {
    print('hello');
  };
}
Using anonymous functions
Now that you know where you can use anonymous functions, try your hand at writing one yourself. Take the multiply function from above again:
final multiply = (int a, int b) {
  return a * b;
};
print(multiply(2, 3));
Returning a function
Have a look at a different example:
Function applyMultiplier(num multiplier) {
  return (num value) {
    return value * multiplier;
  };
}
final triple = applyMultiplier(3);
print(triple(6));
print(triple(14.0));

18
42.0
Anonymous functions in forEach loops
Chapter 4 introduced you to forEach loops, which iterate over a collection. Although you may not have realized it, that was an example of using an anonymous function.
const numbers = [1, 2, 3];
numbers.forEach((number) {
  final tripled = number * 3;
  print(tripled);
});

3
6
9
Closures and scope
Anonymous functions in Dart act as what are known as closures. The term closure means that the code “closes around” the surrounding scope, and therefore has access to variables and functions defined within that scope.
Function applyMultiplier(num multiplier) {
  return (num value) {
    return value * multiplier;
  };
}
var counter = 0;
final incrementCounter = () {
  counter += 1;
};

incrementCounter();
incrementCounter();
incrementCounter();
incrementCounter();
incrementCounter();
print(counter); // 5
Function countingFunction() {
  var counter = 0;
  final incrementCounter = () {
    counter += 1;
    return counter;
  };
  return incrementCounter;
}
final counter1 = countingFunction();
final counter2 = countingFunction();

print(counter1()); // 1
print(counter2()); // 1
print(counter1()); // 2
print(counter1()); // 3
print(counter2()); // 2
Mini-exercises
- Change the youAreWonderful function in the first mini-exercise of this chapter into an anonymous function. Assign it to a variable called wonderful.
- Using forEach, print a message telling the people in the following list that they're wonderful.
const people = ['Chris', 'Tiffani', 'Pablo'];
Arrow functions
Dart has a special syntax for one-line functions, either named or anonymous. Consider the following function add that adds two numbers together:
// named function
int add(int a, int b) => a + b;

// anonymous function
(parameters) => expression;
Refactoring example 1
The body of the anonymous function you assigned to multiply has one line:
final multiply = (int a, int b) {
  return a * b;
};
final multiply = (int a, int b) => a * b;
print(multiply(2, 3)); // 6
Refactoring example 2
You can also use arrow syntax for the anonymous function returned by applyMultiplier:
Function applyMultiplier(num multiplier) {
  return (num value) {
    return value * multiplier;
  };
}

Function applyMultiplier(num multiplier) {
  return (num value) => value * multiplier;
}
Refactoring example 3
You can’t use arrow syntax on the
forEach example, though:
numbers.forEach((number) {
  final tripled = number * 3;
  print(tripled);
});

That's because the function body contains more than one statement. If you rewrite the body as a single expression, though, arrow syntax works:

numbers.forEach((number) => print(number * 3));
Mini-exercise
Change the forEach loop in the previous "You're wonderful" mini-exercise to use arrow syntax.
Challenges
Before moving on, here are some challenges to test your knowledge of functions. It is best if you try to solve them yourself, but solutions are available if you get stuck in the challenge folder of this chapter.
Challenge 1: Prime time
Write a function that checks if a number is prime.
Challenge 2: Can you repeat that?
Write a function named repeatTask with the following definition:
int repeatTask(int times, int input, Function task)
Challenge 3: Darts and arrows
Update Challenge 2 to use arrow syntax.
Key points
- Functions package related blocks of code into reusable units.
- A function signature includes the return type, name and parameters. The function body is the code between the braces.
- Parameters can be positional or named, and required or optional.
- Side effects are anything, besides the return value, that change the world outside of the function body.
- To write clean code, use functions that are short and only do one task.
- Anonymous functions don’t have a function name, and the return type is inferred.
- Dart functions are first-class citizens and thus can be assigned to variables and passed around as values.
- Anonymous functions act as closures, capturing any variables or functions within its scope.
- Arrow syntax is a shorthand way to write one-line functions.
Where to go from here?
This chapter spoke briefly about the Single Responsibility Principle and other clean coding principles. Do a search for SOLID principles to learn even more. It’ll be time well spent.
|
https://www.raywenderlich.com/books/dart-apprentice/v1.0.ea1/chapters/5-functions
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
extract_c_string returns a pointer to an array of elements of a const character type. It is invoked through a static method call.
This customization point is responsible for handling its own garbage collection; the lifetime of the returned C-string must be no shorter than the lifetime of the string instance passed to the call method.
#include <boost/spirit/home/support/string_traits.hpp>
Also, see Include Structure.
template <typename String>
struct extract_c_string
{
    typedef <unspecified> char_type;

    static char_type const* call(String const&);
};
Notation
T
An arbitrary type.
Char
A character type.
Traits
A character traits type.
Allocator
A standard allocator type.
str
A string instance.
This customization point needs to be implemented whenever traits::is_string is implemented.
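For orientation, a specialization for a hypothetical user-defined string class could look roughly like the sketch below; the class my_string and its c_str() member are illustrative assumptions and are not part of the Boost documentation:

#include <boost/spirit/home/support/string_traits.hpp>

// Hypothetical user-defined string type, used only for illustration.
class my_string
{
public:
    explicit my_string(char const* s) : data_(s) {}
    char const* c_str() const { return data_; }
private:
    char const* data_;
};

namespace boost { namespace spirit { namespace traits
{
    // Tell Spirit how to obtain a C-string from my_string. The returned
    // pointer must remain valid at least as long as the my_string instance.
    template <>
    struct extract_c_string<my_string>
    {
        typedef char char_type;

        static char const* call(my_string const& str)
        {
            return str.c_str();
        }
    };
}}}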
If this customization point is implemented, the following other customization points need to be implemented as well.
|
https://www.boost.org/doc/libs/1_64_0/libs/spirit/doc/html/spirit/advanced/customize/string_traits/extract_c_string.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Hi,
I changed the attitude control loop a little bit to achieve more stability. Rate control with a PID(T1) instead of a PI controller. Here I present the necessary changes in the code to do this.
Pro:
- Much more attitude stability
- Much faster response on disturbances or new setpoints
Con:
- You need to tune the control loop again after this change
- This change is experimental, it works for me but you do it on your own risk ;)
I'm starting from Software AC 2.1. First I modified the library "PID" to save calculating time. I saved the following floating point operations per call to "get_pid": 2 x division, 4 x multiplication. You can download the modified library here. Unzip it and put the folder "PID_fast" in the "libraries" folder.
In file "ArduCopter.pde":
Add this line to the includes:
#include <PID_fast.h> // PID library
For the following 4 blocks, I post it like the diff you get in GIT. The "-" means that you should look for this line and remove it. Replace it with the following lines starting with the "+". Of course, the lines in the code have no "-" or "+". It's just to show what to remove and what to add instead.
- g.pi_rate_roll.reset_I();
- g.pi_rate_pitch.reset_I();
+ g.pid_rate_roll.reset_I();
+ g.pid_rate_pitch.reset_I();
- g.pi_rate_roll.kP(tuning_value);
- g.pi_rate_pitch.kP(tuning_value);
+ g.pid_rate_roll.kP(tuning_value);
+ g.pid_rate_pitch.kP(tuning_value);
- g.pi_rate_roll.kI(tuning_value);
- g.pi_rate_pitch.kI(tuning_value);
+ g.pid_rate_roll.kI(tuning_value);
+ g.pid_rate_pitch.kI(tuning_value);
- g.pi_rate_yaw.kP(tuning_value);
+ g.pid_rate_yaw.kP(tuning_value);
In file "Attitude.pde":
In function "get_stabilize_roll(int32_t target_angle)":
- rate = g.pi_rate_roll.get_pi(error, G_Dt);
+ rate = g.pid_rate_roll.get_pid(error, G_Dt);
In function "get_stabilize_pitch(int32_t target_angle)":
- rate = g.pi_rate_pitch.get_pi(error, G_Dt);
+ rate = g.pid_rate_pitch.get_pid(error, G_Dt);
In function "get_stabilize_yaw(int32_t target_angle)":
- rate = g.pi_rate_yaw.get_pi(error, G_Dt);
+ rate = g.pid_rate_yaw.get_pid(error, G_Dt);
- rate = g.pi_rate_yaw.get_pi(error, G_Dt);
+ rate = g.pid_rate_yaw.get_pid(error, G_Dt);
In function "get_rate_pitch(int32_t target_rate)":
- target_rate = g.pi_rate_yaw.get_pi(error, G_Dt);
+ target_rate = g.pid_rate_yaw.get_pid(error, G_Dt);
In file "Parameters.h":
Increase the number after "static const uint16_t k_format_version" by one. Warning: This will make the APM erase your EEPROM and Log after flashing the new compiled firmware. So save your settings first! After this, you have to reload your settings and level the copter again.
Later on in the file:
- k_param_pi_rate_roll = 235,
- k_param_pi_rate_pitch,
- k_param_pi_rate_yaw,
+ k_param_pid_rate_roll = 235,
+ k_param_pid_rate_pitch,
+ k_param_pid_rate_yaw,
- APM_PI pi_rate_roll;
- APM_PI pi_rate_pitch;
- APM_PI pi_rate_yaw;
+ PID_fast pid_rate_roll;
+ PID_fast pid_rate_pitch;
+ PID_fast pid_rate_yaw;
- pi_rate_roll (k_param_pi_rate_roll, PSTR("RATE_RLL_"), RATE_ROLL_P, RATE_ROLL_I, RATE_ROLL_IMAX * 100),
- pi_rate_pitch (k_param_pi_rate_pitch, PSTR("RATE_PIT_"), RATE_PITCH_P, RATE_PITCH_I, RATE_PITCH_IMAX * 100),
- pi_rate_yaw (k_param_pi_rate_yaw, PSTR("RATE_YAW_"), RATE_YAW_P, RATE_YAW_I, RATE_YAW_IMAX * 100),
-
+ pid_rate_roll (k_param_pid_rate_roll, PSTR("RATE_RLL_"), RATE_ROLL_P, RATE_ROLL_I, RATE_ROLL_D, RATE_ROLL_IMAX * 100),
+ pid_rate_pitch (k_param_pid_rate_pitch, PSTR("RATE_PIT_"), RATE_PITCH_P, RATE_PITCH_I, RATE_PITCH_D, RATE_PITCH_IMAX * 100),
+ pid_rate_yaw (k_param_pid_rate_yaw, PSTR("RATE_YAW_"), RATE_YAW_P, RATE_YAW_I, RATE_YAW_D, RATE_YAW_IMAX * 100),
In file "config.h":
You have to add a few lines. Lines to add are indicated by "+". Other lines are for reference to find the right place.
#ifndef RATE_ROLL_I
# define RATE_ROLL_I 0.0
#endif
+#ifndef RATE_ROLL_D
+# define RATE_ROLL_D 0.0
+#endif
#ifndef RATE_PITCH_I
# define RATE_PITCH_I 0 //0.18
#endif
+#ifndef RATE_PITCH_D
+# define RATE_PITCH_D 0.0
+#endif
#ifndef RATE_YAW_I
# define RATE_YAW_I 0.0
#endif
+#ifndef RATE_YAW_D
+# define RATE_YAW_D 0.0
+#endif
In file "system.pde":
- Log_Write_Data(12, g.pi_rate_roll.kP());
- Log_Write_Data(13, g.pi_rate_pitch.kP());
+ Log_Write_Data(12, g.pid_rate_roll.kP());
+ Log_Write_Data(13, g.pid_rate_pitch.kP());
Now all changes are done. Save your current settings of the APM, compile and load the code... You will find 3 new parameters (RATE_RLL_D, RATE_PIT_D and RATE_YAW_D) you can edit like any other mavlink parameter. These are the D-Terms for all rate control loops.
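If you want a feel for what a rate PID with a low-pass-filtered D term (a PIDT1) does before tuning it, here is a minimal self-contained sketch. It is purely illustrative -- it is not the PID_fast library, and the gains, names and filter constant are made up:

#include <algorithm>
#include <cstdio>

// Minimal PIDT1: a PID whose derivative is passed through a first-order
// low-pass filter so gyro noise is not amplified by the D term.
class PidT1 {
public:
    PidT1(float kp, float ki, float kd, float imax, float d_time_const)
        : _kp(kp), _ki(ki), _kd(kd), _imax(imax), _tau(d_time_const) {}

    float get_pid(float error, float dt) {
        // P term
        float output = _kp * error;

        // I term with a simple wind-up clamp
        _integrator += _ki * error * dt;
        _integrator = std::max(-_imax, std::min(_imax, _integrator));
        output += _integrator;

        // D term, first-order filtered (the "T1" part)
        float raw_d = (error - _last_error) / dt;
        _filtered_d += (dt / (_tau + dt)) * (raw_d - _filtered_d);
        _last_error = error;
        output += _kd * _filtered_d;

        return output;
    }

    void reset_I() { _integrator = 0.0f; }

private:
    float _kp, _ki, _kd, _imax, _tau;
    float _integrator = 0.0f;
    float _last_error = 0.0f;
    float _filtered_d = 0.0f;
};

int main() {
    PidT1 rate_roll(0.09f, 0.02f, 0.012f, 10.0f, 0.02f); // made-up gains
    // One 10 ms control step with a rate error of 5 deg/s.
    std::printf("output = %f\n", rate_roll.get_pid(5.0f, 0.01f));
}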
Testing and tuning:
Be really careful, propellers are dangerous! I do the tuning of the control loops while I hold the copter in my hand and check whether it compensates for disturbances without overshoot or oscillation. You should decide for yourself if you can hold your copter in one hand while you throttle up and test the stability. Even small propellers can harm your fingers! If you want to proceed, I do it like this:
- Reload your last setup and level the copter.
- Set all parameters for stabilize STB_***_P and STB_***_I to zero.
- Set all parameters for rate control RATE_***_P, RATE_***_I and RATE_***_D to zero.
- Hold the copter with one hand, throttle up until it becomes weightless, then turn it. Nothing should happen, because all control loop parameters are zero. You have just manual throttle control.
- Increase the D-Terms and test again while holding the copter in your hand. Remember, it can't fly with most parameters set to zero. Increase the D-Term and test again, until you get some oscillation. Let's call the value you found 100 percent. Now reduce it to 50..70 percent. Do this for ROLL and PITCH.
- Proceed with the P-Terms. Increase and test until it gets unstable. Then reduce it to 50..70 percent of the original value.
- Set the I-Terms to 30..50 percent of the P-Terms. Test again. If it is unstable, reduce P and I and maybe D a little bit.
- Now go to the STABILIZE control parameters. Increase the P-Term. Now the copter should return to level position automatically. Increasing the P-Term will lead to a faster return to level position after a disturbance and also a faster response to the desired angle from pilot input. Find a value where the copter returns quickly from a disturbance without overshoot or oscillation.
- Always leave the STABILIZE I-Term STB_***_I zero in this controller configuration!
- For YAW you can leave the D-Term zero and use your previous settings for STABILIZE and RATE control. Or you tune it, but I didn't try it at YAW yet.
- If everything works well and the copter returns to level position after a disturbance or pilot input without overshoot or oscillation, then you can try to hover and fly carefully. Not before!
Here are some controller parameters from my Tricopter (80 cm from motor to motor, 1440 g in flight weight), the D-Terms are quite small:
STB_PIT_I,0
STB_PIT_IMAX,1500
STB_PIT_P,9
STB_RLL_I,0
STB_RLL_IMAX,1500
STB_RLL_P,9
STB_YAW_I,0
STB_YAW_IMAX,1500
STB_YAW_P,4
RATE_PIT_D,0.015
RATE_PIT_I,0.02
RATE_PIT_IMAX,1000
RATE_PIT_P,0.09
RATE_RLL_D,0.012
RATE_RLL_I,0.02
RATE_RLL_IMAX,1000
RATE_RLL_P,0.09
RATE_YAW_D,0.004
RATE_YAW_I,0.002
RATE_YAW_IMAX,500
RATE_YAW_P,0.5
It would be really nice if other people tried this modification. For me, it was a significant increase in stability.
Regards, Igor
Go ahead and try it. There is nothing like learning by doing. That is how I started.
Having said that, the double loop system is common. There are a lot of different algorithms for determining attitude, but the Attitude ==> Rotation_Rate ==> ESC type of control loop scheme is often used even when it's not obvious. Off the top of my head, Arducopter, Openpilot, Multiwii, Aeroquad all use the dual loop system.
Yes, I totally got what you have done, thanks for the explanation. So, do you think a one-loop controller will also work? The reason I asked is that I already designed a controller and I want to implement it on a PX4 flight controller. Additionally, I have seen many controllers in different articles designed by different control methods, but I'd never seen this kind of controller that is used in Arducopter.
Regards,
Maziar
The short answer is that it could be done, but is a vastly inferior result to using a double loop system. Although it is counter intuitive, dividing the control loop into two pieces is both easier and provides better results than a single loop system.
Stability requires fast response to disturbances. In a flying machine, the gyro output is in terms of rotation per unit time (usually radian/s or deg/s). This output from the gyro is more resistant to noise than any other sensor on the machine. This makes it very easy, and fast, to compare the expected rotational rate to the measured rotational rate as indicated by the gyro and adjust the ESC output to compensate. The stability generated by this control loop is all that is required to fly in Acro mode. This is often referred to as the inner loop.
The outer control loop looks at the expected orientation of the machine and compares this to the observed orientation estimate generated by some fancy algorithm and the input of multiple sensors (usually gyro, accelerometer and magnetometer). This control loop outputs rotational rates to the inner loop to try to correct for the errors it observes.
Outside of this loop may be a control system relating to navigating in the world to get to and between waypoints. This is not considered part of stability control.
I hope this helps a little.
Phillip
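To make the cascade Phillip describes concrete, here is a generic, self-contained sketch of an outer angle P loop feeding an inner rate PI loop. It is only an illustration and does not reflect any particular flight stack's API:

#include <cstdio>

// Inner loop: rate error -> actuator correction (PI kept short for clarity).
struct RatePi {
    float kp = 0.0f;
    float ki = 0.0f;
    float integrator = 0.0f;
    float update(float rate_error, float dt) {
        integrator += ki * rate_error * dt;
        return kp * rate_error + integrator;
    }
};

// Outer loop: angle error -> desired rate (P only), fed into the inner loop.
struct AttitudeCascade {
    float stab_p = 0.0f;
    RatePi inner;
    float update(float target_angle, float measured_angle,
                 float measured_rate, float dt) {
        float desired_rate = stab_p * (target_angle - measured_angle); // outer loop
        return inner.update(desired_rate - measured_rate, dt);         // inner loop
    }
};

int main() {
    AttitudeCascade roll;
    roll.stab_p   = 4.5f;  // made-up outer gain
    roll.inner.kp = 0.09f; // made-up inner gains
    roll.inner.ki = 0.02f;
    // One 10 ms step: want level (0 deg), currently at 10 deg, rolling at 20 deg/s.
    std::printf("correction = %f\n", roll.update(0.0f, 10.0f, 20.0f, 0.01f));
}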
Hi everyone,
I have a fundamental question. Why did you divide the control process into two loops? Can't we just have one controller? The inputs would be desired Euler angles and the outputs would go to the ESCs. Can anyone clarify this for me?
@Roy,
palo sestak has told me where these objects are defined. Many thanks to him. Thank you all as well.
Actually, in your original assumption, what you described is a trajectory tracking problem, i.e., tracking a desired attitude with a speed of 10 deg/sec. But for multicopters' control, the robot usually works close to its operating point, i.e., it's a stabilization problem. PID isn't a model-based control technique, but IMO it's the most popular one. But of course for different MAV configurations, the control structure might be different.
Thanks,
Yangbo
@Yangbo,
Sorry, I am not familiar enough with the Ardu code to tell you where those functions live, but I should think a simple search would be able to find them.
@Igor (sorry so late with my reply)
My experience is with full-size vehicles, not with paprazzi nor arducopter specifically.
All,
In general, I do not favor the "PID" approach - it assumes you know very little about what you are trying to control (which is certainly understandable from the perspective of the typical developer on projects such as this one). In reality, you know that you're trying to control a multirotor, or a fixed wing plane, or a helicopter, etc. Therefore, the dynamics are more constrained and you don't have to resort to these "black box" techniques.
- Roy
Hi Roy,
I read the discussion and comments here. I also read the PID libraries. But I have difficulties finding these functions or objects:
wrap_180, constrain, ahrs.roll_sensor, roll_rate_d_filter.apply
Because I failed to set up Eclipse on my laptop, it's really difficult for me to find many variables, functions and objects. Can you show me where they are defined?
In my opinion, the P-PID control structure is more like a cascaded design. Two controllers are responsible for two loops, the inner angular rate loop and the outer attitude loop. As you assumed, the "Desired Rate" is not a real desired rate; I think that's because essentially the outer loop P controller is not able to control the attitude loop without steady-state error. A similar structure to this is motor control, which includes a current loop, a speed loop and a position loop, each controlled individually.
The control code in MK is like this:
varibles_roll.e = (Dst_eula.roll - Real_eula.roll - Speed.b)*parameter_roll.Kp;
varibles_roll.integral += varibles_roll.e*parameter_roll.Ki;
varibles_roll.differential=(varibles_roll.e-varibles_roll.last_e)*parameter_roll.Kd;
varibles_roll.last_e = varibles_roll.e;
varibles_roll.output = varibles_roll.e //* parameter_roll.Kp
+ varibles_roll.integral //* parameter_roll.Ki
+ varibles_roll.differential; //* parameter_roll.Kd;
somehow similar to the P-PID control structure, with the standalone kp=1.
Thank you in advance if anybody can let me know where the objects, wrap_180, constrain, ahrs.roll_sensor and roll_rate_d_filter.apply, are defined in the PID libraries. Thank you for your help.
Yangbo
@Robert: If I got it right, you say that the I-term of the rate PIDT1 will generate overshoots on disturbance all the time. From practical experiments, the P-PIDT1 works much better than the P->PI ArduCopter used before. And you can tune the P-PIDT1 in a way that there is no overshoot. Of course you can also tune nearly every control loop in a way that it generates overshoot.
@Roy: Maybe the paparazzi structure for stabilize mode is better. We should try it! Do you fly paparazzi and arducopter? Can you compare it directly? Or have you tried this structure in ArduCopter?
@Thomas: Thanks, but I have not implemented it into 2.3. It is the same if you leave the damping controller STAB_D zero. If you don't want to use the DT1, set all D parameters to zero and you get the same as AC 2.1.
The vibration issue is not solved with controller settings, because the vibrations go directly into the attitude estimation.
Fast responding ESCs are an improvement, because the whole system gets faster. No matter which control structure you use.
And about tuning of the parameters: I think a lot of users don't quite understand that this is really important. You can tune it to work insufficiently, to work perfectly, or to overshoot, oscillate and crash. And your parameters depend on many things like weight, diameter, battery, ESCs, motors, propellers... so parameters that work perfectly for one airframe can crash another.
Oy, Igor!
The 2.3 code, with some sort of implementation of the PIDT1, seems to have brought us a noticeable improvement in stabilize mode. Though many users report difficulties in really using the roll D-term, due to oscillation and vibration sensitivity.
Just wondering, have you been involved in the implementation of PIDT1 into 2.3, and does the implementation truly reflect the PIDT1 you had in mind?
About the sensitivity to vibrations/sampling/aliasing noise and oscillations - do you believe that a well-tuned low-pass filter on the D-term input would make it less sensitive?
And would more responsive ESCs increase the potential for the D-term output to do its job better?
Tomas
As Robert Lefebvre and I said earlier, it is not a good idea to integrate the rate feedback term as you have it shown. If you want to see what a good control loop architecture looks like, try this:
As for angular acceleration or double-D feedback, yes the Stanford guys are using a better sensor package. But, Bill Premerlani is using similar sensors on his UAVDevBoard, and he's considering using this term as well:...
Bill does not report results yet that I have seen, but I hope this community would have learned by now to take his ideas seriously!
- Roy
|
https://diydrones.com/profiles/blogs/arducopter-changing-from-pi-to-pid-to-improve-stability
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Creating and converting cubed-sphere grids to unstructured meshes
Project description
Introduction
The grid generator is from here, and I/O is handled by meshio.
This package was created for education/research purposes. Personally, I use this to study the grid convergence for data transfer between CSGrid and spherical centroidal Voronoi tessellations (SCVT).
Installation
You can easily install this package through pip, i.e.
$ pip install csgrid2unstr --user
You can, of course, install it directly from the repository:
$ git clone
$ cd csgrid2unstr && python setup.py install --user
Notice that this package depends on:
Usage
As Executable Binary
Once you have installed the package, open the terminal and type:
$ csgrid2unstr -h
usage: csgrid2unstr [-h] [-n SIZE] [-o OUTPUT] [-r REFINE]
                    [-f {vtk,vtu,gmsh,off,exodus,xdmf,dolfin-xml,stl}]
                    [-b] [-V] [-v]

write CSGrid to unstr

optional arguments:
  -h, --help            show this help message and exit
  -n SIZE, --size SIZE  Number of intervals of a square face
  -o OUTPUT, --output OUTPUT
                        Output file name, w/o extension
  -r REFINE, --refine REFINE
                        Level of refinements, default is 1
  -f {vtk,vtu,gmsh,off,exodus,xdmf,dolfin-xml,stl}, --format {vtk,vtu,gmsh,off,exodus,xdmf,dolfin-xml,stl}
                        Output file format, default is VTK
  -b, --binary          Use BINARY. Notice that this flag is ignored for some formats
  -V, --verbose         Verbose output
  -v, --version         Check version
If you got command not found: csgrid2unstr, make sure csgrid2unstr is in your $PATH.
There are two required parameters, i.e. -n (--size) and -o (--output). The former defines the number of intervals of a square face, i.e. the number of quadrilaterals per face is n*n, and the latter provides the output filename (without extension). For instance:
$ csgrid2unstr -n 20 -o demo
will construct a CSGrid of 400 quadrilaterals per face, convert the grid into an unstructured mesh and store it in demo.vtk.
You can create a serial of uniform refined grids by adding -r (--refine) switch, e.g.:
$ csgrid2unstr -n 10 -r 3 -o demo -f xdmf
will construct three CSGrids with 100, 400, and 1600 quadrilaterals per face, convert them into three unstructured meshes and store them in demo0.xdmf, demo1.xdmf, and demo2.xdmf, respectively.
As Module
Using csgrid2unstr as a Python module is also simple.
from __future__ import print_function

from csgrid2unstr.cubed_sphere import CSGrid
from csgrid2unstr.unstr import Unstr

# create a CSGrid of 25 quads per face
cs = CSGrid(5)

# convert it into an unstructured mesh
mesh = Unstr(cs)

# two attributes, points and cells, of np.ndarray
print('Nodes {}-by-3'.format(len(mesh.points)))
print(mesh.points)
print('Cells {}-by-4'.format(len(mesh.cells)))
print(mesh.cells)
License
MIT License
|
https://pypi.org/project/csgrid2unstr/0.0.1/
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Datapane client library and CLI tool
Project description
Datapane CLI and Python client library
Datapane is a platform to rapidly build and share data-driven boards and analysis publicly on the cloud and within your organisation.
This package includes both the Datapane Python client library and the datapane command-line tool. The datapane tool provides CLI functionality for creating Datapanes and running scripted Datapanes.
Available to download for Windows, macOS, and Linux. Free software under the Apache license.
Happy hacking! :)
Docs
See
Install
$ pip3 install datapane
Configure
$ datapane login
$ datapane ping
Use
API
from datapane import api

api.init()
ds = api.Dataset.upload_df(df)
s = api.Script.upload_file("./script.ipynb")
asset = api.Asset.upload_obj(my_matplotlib_figure)
api.Datapane.create(ds, asset)
Command line
# create a Dataset
$ datapane dataset upload ./test.csv --public

# create a Script
$ datapane script upload ./script.ipynb --title "My cool script"

# create a Datapane
$ datapane datapane create ./test.csv ./figure.png
|
https://pypi.org/project/datapane/
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Adding Operations
In many cases, just sending events is not enough for a game. If you want to provide authorization, persistency or game-specific Operations, it's time to extend LoadBalancing.
This page shows two ways to implement new Operations and how to use them. Our sample Operation expects a message from the client, checks it and provides return values.
Server-Side, Simple Variant
Let's start on the server side, in the LoadBalancing Application. This variant is not very elegant but simple.
// Server SDK
public void OnOperationRequest(OperationRequest request, SendParameters sendParameters)
{
    // handle operation here (check request.OperationCode)
    switch (request.OperationCode)
    {
        case 1:
        {
            var message = (string)request.Parameters[100];
            if (message == "Hello World")
            {
                // received hello world, send an answer!
                var response = new OperationResponse
                {
                    OperationCode = request.OperationCode,
                    ReturnCode = 0,
                    DebugMessage = "OK",
                    Parameters = new Dictionary<byte, object> { { 100, "Hello yourself!" } }
                };
                this.SendOperationResponse(response, sendParameters);
            }
            else
            {
                // received something else, send an error
                var response = new OperationResponse
                {
                    OperationCode = request.OperationCode,
                    ReturnCode = 1,
                    DebugMessage = "Don't understand, what are you saying?"
                };
                this.SendOperationResponse(response, sendParameters);
            }
            return;
        }
    }
}
The above code first checks the called OperationCode. In this case, it's 1. In Photon, the OperationCode is a shortcut for the name/type of an Operation. We'll see how the client call this below.
If Operation 1 is called, we check the request parameters. Here, parameter 100 is expected to be a string. Again, Photon only uses byte-typed parameters, to keep things lean during transfer.
The code then checks the message content and prepares a response. Both responses have a returnCode and a debug message. The returnCode 0 is a positive return, any other number (here: 1) could mean an error and needs to be handled by the client.
Our positive response also includes a return value. You could add more key-value pairs, but here we stick to 100, which is a string.
This already is a complete implementation of an Operation. This is our convention for a new Operation, which must be carried over to the client to be used.
Client Side
Knowing the definition of above's Operation, we can call it from the client side. This client code calls it on connect:
public void OnStatusChanged(StatusCode status)
{
    // handle peer status callback
    switch (status)
    {
        case StatusCode.Connect:
            // send hello world when connected
            var parameter = new Dictionary<byte, object>();
            parameter.Add((byte)100, "Hello World");
            peer.OpCustom(1, parameter, true);
            break;
        //[...]
As defined above, we always expect a message as parameter 100. This is provided as parameter Dictionary. We make sure that the parameter key is not mistaken for anything but a byte, or else it won't send. And we put in: "Hello World".
The PhotonPeer method OpCustom expects the OperationCode as first parameter. This is 1. The parameters follow and we want to make sure the operation arrives (it's essential after all).
Aside from this, the client just has to do the usual work, which means it has to call PhotonPeer.Service in intervals.
Once the result is available, it can be handled like this:
public void OperationResponse(OperationResponse operationResponse)
{
    // handle response by code (action we called)
    switch (operationResponse.OperationCode)
    {
        // our custom "hello world" operation's code is 1
        case 1:
            // OK
            if (operationResponse.ReturnCode == 0)
            {
                // show the complete content of the response
                Console.WriteLine(operationResponse.ToStringFull());
            }
            else
            {
                // show the error message
                Console.WriteLine(operationResponse.DebugMessage);
            }
            break;
    }
}
Server Side, Advanced Version
The preferred way to handle Operations would be to create a class for it. This way it becomes strongly typed and issues due to missing parameters are handled by the framework.
This is the definition of the Operation, its parameters, their types and of the return values:
// new Operation class
namespace MyPhotonServer
{
    using Photon.SocketServer;
    using Photon.SocketServer.Rpc;

    public class MyCustomOperation : Operation
    {
        public MyCustomOperation(IRpcProtocol protocol, OperationRequest request)
            : base(protocol, request)
        {
        }

        [DataMember(Code = 100, IsOptional = false)]
        public string Message { get; set; }

        // GetOperationResponse could be implemented by this class, too
    }
}
With the Operation class defined above, we can map the request to it and also provide the response in a strongly typed manner.
public void OnOperationRequest(OperationRequest request, SendParameters sendParameters)
{
    switch (request.OperationCode)
    {
        case 1:
        {
            var operation = new MyCustomOperation(this.Protocol, request);
            if (operation.IsValid == false)
            {
                // received garbage, send an error
                var response = new OperationResponse
                {
                    OperationCode = request.OperationCode,
                    ReturnCode = 1,
                    DebugMessage = "That's garbage!"
                };
                this.SendOperationResponse(response, sendParameters);
                return;
            }

            if (operation.Message == "Hello World")
            {
                // received hello world, send an answer!
                operation.Message = "Hello yourself!";
                OperationResponse response = new OperationResponse(request.OperationCode, operation);
                this.SendOperationResponse(response, sendParameters);
            }
            else
            {
                // received something else, send an error
                var response = new OperationResponse
                {
                    OperationCode = request.OperationCode,
                    ReturnCode = 1,
                    DebugMessage = "Don't understand, what are you saying?"
                };
                this.SendOperationResponse(response, sendParameters);
            }
            break;
        }
    }
}
|
https://doc.photonengine.com/zh-tw/server/v4/app-framework/adding-operations
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
web framework support customing file upload processing
Project description
NAME
x100http, a web framework supporting custom file upload processing
SYNOPSIS
from x100http import X100HTTP

app = X100HTTP()

def hello_world(request):
    remote_ip = request.get_remote_ip()
    response = "<html><body>hello, " + remote_ip + "</body></html>"
    return response

app.get("/", hello_world)
app.run("0.0.0.0", 8080)
DESCRIPTION
x100http is a lightweight web framework designed for processing HTTP file uploads.
CLASS X100HTTP
X100HTTP()
Return an instance of X100HTTP, which wraps the functions below.
run(listen_ip, listen_port)
Run a forking server on the address listen_ip:listen_port.
get(url, handler_function)
set a route acl of HTTP “GET” method.
handler_function will be called when the url is visited.
handler_function must return a string as the HTTP response body to the visitor.
struct request (explained below) will be passed to the handler function when it is called.
post(url, handler_function)
set a route acl of HTTP “POST” method with header “Content-Type: application/x-www-form-urlencoded”.
handler_function will be called when an HTTP client submits a form with the action url.
handler_function must return a string as the HTTP response body to the visitor.
struct request (explained below) will be passed to the handler function when it is called.
static(url_prefix, file_path, cors=allow_domain)
set a route acl for static file
Static file requests with url_prefix will be routed to files in file_path.
The default value of cors is "*", allowing all CORS requests matching this route rule.
upload(url, upload_handler_class)
set a route acl of HTTP “POST” method with header “Content-Type: multipart/form-data”.
A new instance of class upload_handler_class will be created when file upload start.
struct "request" (explained below) will be passed to upload_handler_class.upload_start().
upload_handler_class.upload_process() will be called every time the buffer fills up while the file is uploading.
Two args will be passed to upload_handler_class.upload_process(): the first arg is the name of the input in the form, and the second arg is the content of that input.
The binary content of the uploaded file is passed via the second arg.
Note that struct "request" (explained below) will NOT be passed to upload_handler_class.upload_process().
upload_handler_class.upload_finish() will be called when the file upload has finished; this function must return a string as the HTTP response body to the visitor.
struct "request" (explained below) will be passed to upload_handler_class.upload_finish().
set_upload_buf_size(buf_size)
Set the buffer size of the stream reader used during file upload.
The unit of buf_size is bytes; the default value is 4096 bytes.
upload_handler_class.upload_process() will be called to process the buffer every time when the buffer is full.
ROUTING
An x100http route accepts a url and a function/class/path.
There are four types of routes - get, post, static and upload.
app.get("/get_imple", get_simple)
app.post("/post_simple", post_simple)
app.upload("/upload_simple", UploadClass)
app.static("/static/test/", "/tmp/sta/")
routing for HTTP GET can be more flexible like this:
app.get("/one_dir/<arg_first>_<arg_second>.py?abc=def", regex_get)
allow all domain for CORS like this:
app.static("/static/test/", "/tmp/sta/", cors="*")
CLASS X100REQUEST
An instance of class X100Request will be passed into every handler function.
get_remote_ip()
Return the IP address of the visitor.
get_body()
Return the body section of the HTTP request.
Will be empty when the HTTP method is “GET” or “POST - multipart/form-data”.
get_query_string()
Return the query string of the page was accessed, if any.
get_arg(arg_name)
Args are parsed from the query string when the request is sent via "GET" or "POST - multipart/form-data".
Args are parsed from the body when the request is sent via "POST - application/x-www-form-urlencoded".
get_header(header_name)
Return the header`s value of the header_name, if any.
CLASS X100RESPONSE
set_body(content)
Set the response data to visitor.
Type ‘str’ and type ‘bytes’ are both accepted.
set_header(name, value)
Set the HTTP header.
HTTP ERROR 500
The visitor will get HTTP error "500" when the handler function of the url they visit raises an error or contains buggy code.
SUPPORTED PYTHON VERSIONS
x100http only supports python 3.4 or newer, because of re.fullmatch and os.sendfile.
EXAMPLES
get visitor ip
from x100http import X100HTTP

app = X100HTTP()

def hello_world(request):
    remote_ip = request.get_remote_ip()
    response = "<html><body>hello, " + remote_ip + "</body></html>"
    return response

app.get("/", hello_world)
app.run("0.0.0.0", 8080)
post method route
from x100http import X100HTTP

app = X100HTTP()

def index(request):
    response = '<html><body>' \
        + '<form action="/form" method="post">' \
        + '<input type="text" name="abc" />' \
        + '<input type="submit" name="submit" />' \
        + '</form>' \
        + '</body></html>'
    return response

def post_handler(request):
    remote_ip = request.get_remote_ip()
    abc = request.get_arg('abc')
    response = "hello, " + remote_ip + " you typed: " + abc
    return response

app.get("/", index)
app.post("/form", post_handler)
app.run("0.0.0.0", 8080)
process file upload
from x100http import X100HTTP, X100Response

class UploadHandler:
    def upload_start(self, request):
        self.content = "start"

    def upload_process(self, key, line):
        self.content += line.decode()

    def upload_finish(self, request):
        return "upload succ, content = " + self.content

app = X100HTTP()
app.upload("/upload", UploadHandler)
app.run("0.0.0.0", 8080)
set http header
from x100http import X100HTTP, X100Response

def get_custom_header(request):
    remote_ip = request.get_remote_ip()
    response = X100Response()
    response.set_header("X-My-Header", "My-Value")
    response.set_body("<html><body>hello, " + remote_ip + "</body></html>")
    return response

app = X100HTTP()
app.get("/", get_custom_header)
app.run("0.0.0.0", 8080)
more flexible routing
from x100http import X100HTTP

def regex_get(request):
    first = request.get_arg("arg_first")
    second = request.get_arg("arg_second")
    abc = request.get_arg("abc")
    return "hello, " + first + second + abc

app = X100HTTP()
app.get("/one_dir/<arg_first>_<arg_second>.py?abc=def", regex_get)
app.run("0.0.0.0", 8080)
|
https://pypi.org/project/x100http/
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Using Visual Studio 2017 to Cross-Compile PowerPC Amiga OS4 Code via Cygwin-based adtools Toolchain
Posted by stonecracker on 2017/10/6 7:26
Using Microsoft Visual Studio (VS) Community 2017 as an IDE to Develop/Cross-Compile PowerPC (ppc) Amiga OS4.x (OS4) Binaries Using a Cygwin-GCC/G++-based Amiga Toolchain (cyg-adtools) on Windows 10
[First Published 2017-10-06. Updated to explicitly show where tab spacing should be in make files, plus a note to avoid spaces in file/directory names, and a note about the symlinks used. Updated to show use of "-gstabs" option in makefile instead of "-gdbg", and also a link to @kas1e's nice tutorial on using gdb on OS4. Updated to describe adding other Include and Library search paths in makefile for 3rd party or other headers/libs.]
[Updated 2019-10-26 -- Note, there are 3 related posts here that cover cross-compiling for Amiga OS4 from Windows:
1) A post on how to build a Windows-hosted toolchain ( ... hp?topic_id=7623&forum=25
)
2) A post on how to use the toolchain built by the first post, from a Windows / cygwin command line ( ... m=25&topic_id=7654&order=
)
3) The post you're reading now, on how to use the toolchain built in the first post from inside Visual Studio for Windows ( ... 9&forum=25&post_id=108233
) ]
Prelude:
This guide is long. Very long. It's not long because it's difficult or even time consuming to accomplish its goal, which is to get Visual Studio to use your nice new Cygwin-hosted adtools. It isn't. It's actually really fast/easy -- once you know what you're doing. I actually can get a new cross-compiling Amiga OS4 source project set-up/going in Visual Studio inside of 30 seconds.
The "know what you're doing" is the lengthy part, and that's what I'm hoping to impart onto you, to help save you the insane time it took me to catch every tiny little nuance of making this work, because some of it was absolutely counter-intuitive and unexpected.
To accomplish this setup for all your future projects, you need to gain some knowledge on the workings of VS and how it interacts with Cygwin and the Cygwin-hosted adtools toolchain.
So, I've attempted to craft a truly complete and entirely self-contained guide. That means it's rather verbose. But hopefully you only need this guide (plus the pre-requisite one) to get everything going. Hopefully, you won't need to read 100's of posts and chains of conflicting/out-of-date information.
Why bother?
Because VS is slick, and makes AmigaOS/OS4 coding (any coding, really) that much more enjoyable and easy. Unless/until a great native/Amiga IDE is released, cross-compiling is not only a good idea -- it's mission critical. I say this because creating a great/comfortable cross-compiling toolset is a pre-requisite, in my mind, to keeping/attracting developers for OS4: the more barriers we have (i.e., forcing the legions of Windows/Mac/Linux-based developers to abandon their favourite non-Amiga dev environments just to develop for the beloved AmigaOS), the fewer developers we have.
I, myself, was very much in the same boat -- just trying to get either a native or cross-compiling toolchain going was a painful ordeal. I (like all other devs who've stuck around or are just now coming back to the OS) had to make a choice -- burn a bunch of hours to get this going smoothly, or stop trying and abandon a 25-year reunion with my all-time-favourite OS, or ... burn an _insane_ amount of time just getting cross-compiling toolchains and IDE's to work and write about it so others wouldn't have to.
VS and the cyg-adtools package can offer an Amiga OS4 C/C++ developer a really powerful, modern, robust, heavily developed, and as-up-to-date-as-you'd like, cross-compiling Integrated Development Environment on Windows 10. There are countless plugins constantly being developed for VS, and since VS Community 2017 (and probably beyond) are now given away by Microsoft at zero cost, VS has been exploding in popularity. A wise move, in my opinion, to help Microsoft stem their losses in the massive global development community who may otherwise have little (or zero) need to develop on, or for, Windows.
VS definitely is _not_ the lightest-weight / smallest-footprint / fastest-running / easy-to-use IDE in the world. It _is_ one of the most powerful and popular ones -- and not just for C/C++ developers anymore, and also not just for Windows developers anymore.
Really the only thing that modern developers would like that is missing from this guide is the ability to have VS and its tools act as a tightly-integrated front-end for remote debugging (ex: VS running VisualGDB or WinGDB or remote gdb on Windows, with a remotely-connected instance of gdb running in your Amiga environment). The missing piece to get remote debugging working is how to make a virtual serial or SSH connection from the Amiga environment to a Windows one. I think it's possible. I just haven't worked on the issue. If I do figure it out, I'll post another guide on that subject as well.
This guide was developed using VS Community Edition 2017. I assume that all commercial/paid-for editions of VS would work as well. Additionally, this entire setup will also probably work just fine on Visual Studio Code 2017, maybe verbatim, if using Visual Studio Code on Windows.
But, that aside, I think it's really interesting that this entire setup (or at least the concepts) could work on Visual Studio for Mac and Linux. If so, then a new guide is very much warranted as the instructions would definitely have to change given that Mac and Linux don't use Cygwin at all (they don't need to ...the gcc tools would be native), so the set-up would be somewhat different. That is, you _wouldn't_ be using VS Code 2017 on Mac/Linux with the cyg-adtools package, you'd instead be generating and using an entirely different adtools package for Mac or Linux gcc/g++, and using makefiles from Visual Studio Code to access those cross-compiler binaries. At any rate, hopefully this guide can help with that if anyone tackles that effort.
Lastly, these instructions were supposed to be part two of two: part one being how to integrate the cyg-adtools package with the Eclipse IDE on Windows. However, my attempts to make that work were so awkward and difficult that I realized it would just be faster/easier to use a good developer's text editor and a Cygwin command shell. In other words, because of my horribly bad run-ins with Eclipse's setup environment refusing to use the adtools cross toolchain in Cygwin, and being really finicky with makefiles, I found that having no IDE whatsoever was far superior to trying to get Eclipse and cyg-adtools to work. It's almost certainly _not_ Eclipse's fault per se, I actually think it's a good IDE, I just couldn't figure it out.
The setup described here, by contrast, works really well, and borrows heavily from an older article on this subject, found at:
If you _don't_ want to use an IDE at all, and instead just your favourite Windows editor, your Cygwin/UWin bash shell, and the cyg/uwin-adtools, then ignore this guide entirely and instead check-out: ... m=25&topic_id=7654&order=
Pre-Requisites:
Before using _this_ guide, you should first read/complete the _other_ guide I wrote on building the Cygwin-based "adtools" package directly from the adtools source repository on github.
All of these instructions here fundamentally assume you've followed my other guide. Check out: ... hp?topic_id=7623&forum=25
The impetus for this guide is that I discovered the exact same problem with this subject matter as in my prior one, namely there is nothing but outdated information about using Windows Integrated Development Environments (IDE)'s with any Cygwin-based AmigaOS/OS4 adtools cross compiler toolchains.
In this guide, I instruct on how to pick up where my other guide left off ... that is, I discuss how to bring my other guide's resulting Cygwin-hosted adtools cross-compiling chain into VS in an almost-seamless/almost-complete manner. Literally the only thing missing that I could see with this set-up is a functioning remote debugger to remotely run gdb on your Amiga OS4 test environment from your VS workstation.
I cover only the Cygwin toolchain here. Although my prior guide showed how to build the adtools toolchain on UWin (Bash on Ubuntu on Windows/Linux Subsystem (LXSS)/Windows Subsystem for Linux (WSL)), this guide doesn't show how to actually use that particular toolchain from inside Visual Studio. That's the subject of another guide I may yet produce (or not -- folks, feel free to take up that task).
Knowledge about GNU make/gmake/make files is helpful. This guide does, after all, try to continue the paradigm of porting as much of the standard GCC/G++/gmake/make-based Linux C/C++ development methods/processes into the Windows environment as we can. Central to that is the ability to create some simple "make" files.
If you choose to use Visual Studio to create/edit Amiga source packages, then it's important to realize that your source package might be used by other developers (or yourself) in some development environment that is _not_ VS. So, to avoid the _guaranteed_ collisions/problems that would otherwise happen if we were not careful, these instructions specifically do NOT use the standard "makefile" naming conventions. Instead, we use the filenames "makefile.vs" and "makefile-dbg.vs" so that any developer who's expecting their standard command-line toolchain calls to work with non-VS/standard makefiles named "makefile" _won't_ get tripped up by you using VS in your development efforts.
By extension, this means if you intend to distribute your Visual Studio-developed source package to anyone, because other developers may not be using VS, you'll probably need to create both a standard make file named "makefile" AND your VS-specific "makefile.vs" and "makefile-dbg.vs" make files described herein.
Indeed, this is THE central idea of this guide: it's the VS-specific makefiles (and some VS settings) that do the heavy lifting to force VS to actually use your cyg-adtools toolchain to create AmigaOS binaries.
And remember, the price to pay for using Visual Studio is that these makefiles and settings _must_ be created and configured for _every_ Amiga OS4.x cross-compiled project that you want to manipulate inside VS. It's a small price to pay, in my mind.
VS + cyg-adtools is, truly, slick and powerful. I don't give Microsoft much credit very often. They deserve kudos for making Visual Studio Community 2017 both excellent and free.
How-To:
Phew. Enough intro. Let's get to it.
TIP: In everything below, avoid using spaces in any directory name or file name. Save yourself some sanity because the resulting errors can be infuriating.
1) If you've not already done so, follow all the steps needed to create your Cygwin gcc/g++ setup and the Cygwin-hosted adtools toolchain from source. Go to the top of this guide and follow the link there if needed.
2) Install VS Community 2017. Once installed, run it, and add the VS "Linux Development with C++" Toolset using the Tools->Get Tools and Extensions menu option. Close VS.
3) Add the Cygwin "bin" directory to the Windows path. To do this, access the Windows 10 path settings by clicking on:
Windows Start Menu Button->Settings (the cog/wheel icon).
In the "Find a Setting" search box, type "view advanced" and the "View Advanced System Settings" auto-complete option should show. Click on it.
Then, the System Properties dialog box will display.
Click the "Environment Variables..." button.
In the System Variables sub dialog area, you should be able to scroll in that box and find the environment variable named "Path". Click on "Path" to highlight it, then click on the Edit button.
Add the "C:\cygwin64\bin" directory to the bottom of that pop-up Path Edit dialog box.
Click OK on all the dialog boxes you just launched.
You definitely want to reboot your machine to ensure this system path is now being used.
4) After reboot and login, start Visual Studio again.
5) Create a new project (File->New Project).
In the New Project dialog box, in the left-hand side Project Type selection tree (not labeled as such) select:
Installed -> Visual C++ -> General
In the middle pane, select Makefile Project.
Enter a name/directory of your project. For this guide, create a Project named "CygVsMakeProject1", and save it to VS's default $HOME\source\repos directory.
Note that VS 2017 contains "Projects" inside collections called "Solutions". A Solution can have many Projects. By default VS will create a Solution named the same as the first Project you create. That is, in VS's Solution Explorer window, you'll have a "CygVsMakeProject1" Solution that contains a single project also named "CygVsMakeProject1". Don't confuse the two in the following instructions.
6) You'll next be presented a large dialog box asking about your Debug Configuration Settings and about your Release Settings. These are 2 profiles you can use to effect 2 different types of builds, one with debug symbols inserted into the binaries and one without.
We'll definitely want to take advantage of VS's nice ability to do that, which means we'll use 2 different, but very similar, makefiles (see below) for this purpose.
In the Debug Settings dialog box:
The makefile referenced in these settings ("makefile-dbg.vs") _includes_ the compiler options for adding debug symbols to the object files/executables.
For both the "Build command line" and "Rebuild command line" fields:
make -f makefile-dbg.vs 2>&1 | sed -e 's/\(\w\+\):\([0-9]\+\):/\1(\2):/'
(The sed filter rewrites GCC-style "file:line:" diagnostics into the "file(line):" form that Visual Studio's Output window understands, so you can double-click an error to jump to the offending line.)
For the "Clean command Line" field:
make -f makefile-dbg.vs clean
For the "Output (for debugging) field:"
$(ProjectName)-dbg
Leave the other fields blank for now. Click Next.
Release Settings dialog box:
This is essentially the same settings as the Debug settings, but we need to reference a make file that _excludes_ any debug symbols from being inserted into binaries. This makefile is called "makefile.vs"
Uncheck the "Same as debug configuration" selection box.
For both the "Build command line" and "Rebuild command line" fields:
make -f makefile.vs 2>&1 | sed -e 's/\(\w\+\):\([0-9]\+\):/\1(\2):/'
For the "Clean command Line" field:
make -f makefile.vs clean
For the "Output (for debugging) field:"
$(ProjectName)
Leave the other fields blank for now. Click Finish.
7) Time to create the 2 makefiles referenced above. To understand the files you're about to create, know that in this guide we'll be creating a small program called "easy" whose source filename is "easy.cpp". "easy" is a trivial, but really good, Amiga-specific example that proves we have everything needed to create a true Amiga program, not just some standard portable C/C++ binary that doesn't use any AmigaOS-specific functions. The program itself is super-simple: when run, it just flashes the Workbench screen.
And now the interesting knowledge bits....
Very interestingly, our Cygwin-based makefile/make environment is a mix of both Windows/Cygwin execution spaces and command formatting. For example, the makefiles are written in pure POSIX style, complete with forward slashes for directory names, and respecting such things as Cygwin symbolic links/etc (but _not_ Windows backslashes and _not_ Windows junctions/symlinks).
Yet we still have Visual Studio running in a standard Windows execution space, and it, either directly or indirectly via the makefiles, calls such things as the "make", "ppc-amigaos-g++" and "rm" commands from the Windows environment -- not from a Cygwin bash shell. "make" and "rm" aren't Windows commands at all, yet this works. It's why we needed to place the C:\cygwin64\bin directory into the Windows System Path variable, so Cygwin could do its magic. And, despite all this mixing of Windows/POSIX commands, filesystem namespaces, and execution spaces, it all flows effortlessly. Cygwin makes all that possible/nearly automatic.
But you, good Amigan, need to keep some things straight in your head.
For Visual Studio anything -- in all VS dialog boxes, settings fields, non-makefile scripts, plugins, configuration settings, etc., use only standard Windows conventions/naming/filenaming/backslashes/"C:"/etc. (but never forward slashes or Cygwin-created symlinks)
Inside makefiles -- use only POSIX standard conventions as understood by Cygwin, like forward-slashes in directory names, and use of Cygwin/bash-created symbolic links, but never back slashes or "C:"
This is important, and it's easy to mix the two up.
This means that although in VS, you'll reference a given source tree or file or directory using standard Windows syntax, those very same files and directories that you'll be hammering away at with your cross compiler toolchain inside the makefiles must be referenced differently.
Now, back to the instructions...
In VS, with the newly-created project open and selected (remember, you must have the Project selected, not the Solution, when doing this), do the following:
Project -> Add New item -> Utility -> Text File
In the resulting Text File dialog box:
Change the default filename shown from "Text.txt" to "makefile.vs". Make sure you _don't_ have "makefile.vs.txt" as the filename. You can save that file to its default location.
Repeat these steps immediately above to add another text file, and call it "makefile-dbg.vs". You can save that file to its default location.
In the left-most pane (VS's Solution Explorer window) you should now see 2 child objects of the CygVsMakeProject1 project you just created. Those 2 child objects should be the 2 makefile text files. They should be in the Project directory, not some subdirectory like the "source" directory. If they are in a subdirectory, move them back up to the Project directory.
8) Let's edit each of the makefiles now.
Note, I'm assuming you've followed my prior post on creating the adtools chain from source, which put the binaries in the location referenced below, and have also created symlinks that are relied-upon below.
Please note!!!! Those are TABS in the second/indented line beneath each of "all" and "clean". They're not spaces, you must use a tab indent. Don't forget those!
In the VS editor pane, you should have "makefile.vs" already open, copy/paste the following into that file (remembering that you should check for, or just manually put in, a tab in those indented lines):
all:
<put a tab here> /usr/local/amiga/adtools/bin/ppc-amigaos-g++ -o easy easy.cpp -Wall -I/usr/local/amiga/adtools/ppc-amigaos/SDK/include
clean:
<put a tab here> rm easy
Similarly, in the "makefile-dbg.vs" editor window, copy/paste the following into that file (remembering that you should check for, or just manually put in, a tab in those indented lines):
all:
<put a tab here> /usr/local/amiga/adtools/bin/ppc-amigaos-g++ -gstabs -o easy-dbg easy.cpp -Wall -I/usr/local/amiga/adtools/ppc-amigaos/SDK/include
clean:
<put a tab here> rm easy-dbg
Save both the "makefile.vs" and "makefile-dbg.vs" files.
Some words about both makefiles for your own knowledge/future use.
1st, note the use of POSIX syntax throughout. This includes forward slashes and a reliance on a symlink created earlier if you followed my pre-requisite guide.
2nd, note the absolute Cygwin-based path/filename to reference the specific Amiga g++ cross compiler executable.
3rd, note the -I option, which is how I specify the include search path so that the amigaos-g++ compiler knows where to find the Amiga and other headers I'd like to use. The way I've set this up, Visual Studio doesn't actually communicate much with the makefiles except to invoke them; that's why they have to be so involved/detailed in things that might otherwise be passed in as parameters in any other IDE/makefile environment. IN ORDER FOR YOUR CODE TO COMPILE/LINK, YOU ABSOLUTELY MUST ENSURE THIS MAKEFILE INFORMS THE COMPILER OF YOUR INCLUDE SEARCH PATH! Use a '-I' for every directory needed. That might mean really long commands with lots of '-I' statements, but that's fine.
Similarly, though not shown in this example, just like the '-I' option you need to add '-L' options to tell the linker which directories to search for libraries to link against. It has the same syntax as the '-I' option; use another '-L' for every directory that needs to be searched (and a '-l' option to name each library you actually link against).
4th, these are stupid-simple makefiles. You can make much, much smarter ones than this. There are lots of ways to avoid long runs of '-I' and '-L' entries, for example.
But I've shown this as-is because the information here is the _minimum_ you need to have these makefiles work with VS and your cyg-adtools. You might be able to use passed-in parameters from Visual Studio instead of hard-coding things like the include path or filenames. You might also be able to use environment variables to avoid any hard-coding of filenames/paths in a makefile. You could probably use parameterized substitutions to allow for a super-smart, 100% generally re-usable makefile that you just copy/paste into every one of your Visual Studio Amiga projects without a moment's thought. Please feel free to do so and post your improvements in comments to this guide; a rough sketch of that idea follows below.
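For example, here is a hedged sketch of a slightly more parameterized makefile.vs. It assumes the same adtools install location and symlinks as the rest of this guide; the variable names (TARGET, SRCS, INCDIRS, LIBDIRS, LIBS) are my own, and the indented recipe lines must, as always, start with a tab:

ADTOOLS  = /usr/local/amiga/adtools
CXX      = $(ADTOOLS)/bin/ppc-amigaos-g++
TARGET   = easy
SRCS     = easy.cpp
INCDIRS  = $(ADTOOLS)/ppc-amigaos/SDK/include
LIBDIRS  =
LIBS     =
CXXFLAGS = -Wall $(addprefix -I,$(INCDIRS))
LDFLAGS  = $(addprefix -L,$(LIBDIRS)) $(addprefix -l,$(LIBS))

all:
	$(CXX) $(CXXFLAGS) -o $(TARGET) $(SRCS) $(LDFLAGS)

clean:
	rm -f $(TARGET)

Add more directories to INCDIRS/LIBDIRS (space-separated) and GNU make's addprefix will expand them into the corresponding -I/-L options for you.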
9) Now, let's do the simple part: copy/paste some code into a file named easy.cpp.
Create easy.cpp by again selecting the Project named CygVsMakeProject1 in the Solution Explorer window.
Then, choose Project -> Add New Item -> Visual C++ -> C++ File (cpp)
In the filename dialog, keeping the same default directory supplied by Visual Studio, change the file name to "easy.cpp" and click "Add"
In the Solution Explorer, you should then see a single node in the "Source Files" tree, which is the one file you just added called "easy.cpp"
10) Copy/paste the following into the "easy.cpp" file's edit window.
/* easy.cpp: a complete example of how to open an Amiga function library in C/C++.
* In this case the function library is Intuition. Once the Intuition
* function library is open and the interface obtains, any Intuition function
* can be called. This example uses the DisplayBeep() function of Intuition to
* flash the screen.
*/
#include <proto/exec.h>
#include <proto/intuition.h>
struct Library *IntuitionBase;
struct IntuitionIFace *IIntuition;
int main()
{
IntuitionBase = IExec->OpenLibrary("intuition.library", 50);
// Note it is safe to call GetInterface() with a NULL library pointer.
IIntuition = (struct IntuitionIFace *)IExec->GetInterface(IntuitionBase, "main", 1, NULL);
if (IIntuition != NULL) /* Check to see if it actually opened. */
{ /* The Intuition library is now open so */
IIntuition->DisplayBeep(0); /* any of its functions may be used. */
}
// Always drop the interface and close the library if not in use.
// Note it is safe to call DropInterface() and CloseLibrary() with NULL pointers.
IExec->DropInterface((struct Interface *)IIntuition);
IExec->CloseLibrary(IntuitionBase);
}
11) Save the easy.cpp file. Use the "Save all" button in the VS toolbar, just in case you missed saving a bunch of your work to this point.
You're (finally) ready to build/compile/test.
12) Let's test this set-up!
In the profile dropdown (in the toolbar, just underneath the "Debug" and "Team" menu items), you should see the options "Release" and "Debug".
Beside that drop-down is a platform selection dropdown. Ignore that one.
Now, choose "Debug" from the profile dropdown.
Then choose Build -> Build CygVsMakeProject1
Check out the Output window at the bottom right of the VS screen.
If all is good, in a second or two, you should see...
1>/usr/local/amiga/adtools/bin/ppc-amigaos-g++ -gstabs -o easy-dbg easy.cpp -Wall -I/usr/local/amiga/adtools/ppc-amigaos/SDK/include
========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
Then, from the profile drop-down, choose "Release".
And again, choose Build -> Build CygVsMakeProject1
Which should show in the Output window...
"1>/usr/local/amiga/adtools/bin/ppc-amigaos-g++ -o easy easy.cpp -Wall -I/usr/local/amiga/adtools/ppc-amigaos/SDK/include
========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped =========="
Which means you now have 2 AmigaOS OS4 PowerPC executables, one named "easy" and another named "easy-dbg", in the Project's directory (ex: C:\<path to your Windows home dir>\source\repos\<SolutionName>\<ProjectName>)
13) Go test "easy" and "easy-dbg" by running those executable files in your AmigaOS environment from a command shell. If the Workbench screen flashes once each time you run those programs, you're good to go!
You now have a slick cross-compiling Visual Studio 2017 environment for you to develop away.
You can update the adtools compiler chain at any time just by following the instructions in my prior post, and if you follow those directions closely (especially the symbolic linking I mention throughout), then the only change you'll need to make to keep Visual Studio "in the loop" is to update your Windows-side symlink (the "adtoolsln" link described below) so VS's IntelliSense keeps using the latest header files for its syntax/content lint'ing. More on this below.
What About Existing Code?
All these examples above have created new files from scratch. But what about all those Amiga OS4 programs and existing code trees? With resources/libraries/etc?
This is rather beyond the scope of this guide. But as a pointer, you can't just copy source packages into some VS project directory and assume Visual Studio knows about the files. Look into Visual Studio's "Project -> Add Existing Item" option or "File -> Open Folder..." functionality.
What About Debugging?
As noted above, the only thing missing from this guide is a nicely-integrated, single-step, source-level, remote debugging interface.
While that's certainly true unless/until I (or someone reading this) solves the remote gdb connectivity issue between your OS4 test environment and your Visual Studio development PC, manual debugging using gdb on the OS4 environment works like a charm.
If you'd like to know how to do this, a nice shout out goes to @kas1e for his tutorial on using gdb in an Amiga environment, which you can find at:
If you follow his tutorial, you'll find that the debug versions of the binaries built from this setup work beautifully. If you do use gdb, I suggest you also copy all the source files used to build your OS4 binaries over to your Amiga OS4 test environment, into the same directory where the binaries are copied -- that way gdb can easily reference your source as needed.
And, Finally, Getting IntelliSense to Work
One of the most valuable features of Visual Studio is its IntelliSense code completion/monitoring/profiling abilities....But it's useless if it doesn't know the code/headers it's supposed to be using.
If you look at the "easy.cpp" code file in its editor in Visual Studio, you'll see lots of red underlines and warning symbols and errors. That's because we don't currently have the CygVsMakeProject1 project set-up correctly to tell IntelliSense where to find all the AmigaOS headers (well, actually, any headers used by the .cpp file in question, not just AmigaOS SDK ones).
Let's fix that.
For your sanity, and in anticipation of future updates to your adtools chain, create a directory symlink in Windows by launching an Administrator Command Prompt. (Find the Command Prompt item in your Windows start menu, right-click on it, then right-click again on the "Command Prompt" sub menu item, and choose "Run as Administrator".)
Then:
cd c:\cygwin64\usr\local\amiga
And create a Windows/NTFS symbolic link (which you'll definitely need to redo every time you build a new version of the adtools chain, so you're always pointing to the latest version of the headers).
mklink /D adtoolsln adtools-ppc-cyg64-20170623-404
[Note, use "adtoolsln" and _not_ "adtools" as your link name, because if you've followed all my instructions to this point, you already have an "adtools" symlink/junction file -- one that was created and used in the Cygwin setup to begin with].
Close that Command Prompt window.
Return to Visual Studio
Then with the CygVsMakeProject1 Project selected in the Solution Explorer:
Select Project -> Properties -> NMake
In the resulting dialog box, in the "Configuration" drop-down chooser, select "All Configurations"
Then, still in that dialog select NMake -> IntelliSense -> Include Search Path
And enter the following into the IntelliSense's Include Search Path value field:
C:\cygwin64\usr\local\amiga\adtoolsln\ppc-amigaos\SDK\include\include_h;C:\cygwin64\usr\local\amiga\adtoolsln\ppc-amigaos\SDK\clib2\include;C:\cygwin64\usr\local\amiga\adtoolsln\ppc-amigaos\SDK\newlib\include
And you'll definitely want to add any other include directories you'd like to have IntelliSense know about. Basically, whatever include directories you want in your make files for a given project, you'll probably want to have here, using Windows path/filename syntax.
But remember, all of this is just for the editor's error/warning highlights. None of these settings affect the Include search path of the actual gcc/g++ compilers. Again, the makefile include search paths probably will mirror the IntelliSense Include search paths -- albeit with different POSIX syntax.
*** That's all folks ***
In the end, it's actually quite fast/easy to do everything here. Once you've done it a couple of times, you'll probably be able to repeat all these steps, like I do, inside of 30 seconds.
I just figured you'd want all the backgrounder/underlying knowledge needed so you can adjust to suit your needs in the future.
Some credits:
In the original posting from which I drew a bunch of this information (see above for the full link), the author, EDanaII, posted their message back in 2012 -- itself a repost -- and included a note of thanks at the end. I'm copying that verbatim here: "I'd like to thank SG2 for his initial help in getting this working, and Hans for helping me publish the article!"
And now, my turn. I very heavily edited and of course heavily added to the original post from EDanaII. I don't know EDanaII, but thanks go out to him/her for their work -- and additional big thanks to the original poster from whom EDanaII reposted. Let's hope this helps to keep the effort going.
Search keywords:
Cross
Cross-compile
Cross-compiler
Cross-compiling
compile
compiler
compiling
adtools
Win
Windows
Bash
Ubuntu
Cygwin
Amiga Development Tools
Developer
Intel
x86
x64
PPC
PowerPC
Power PC
OS4
Ami
AmigaOS
orgin
Posted on: 2017/10/6 8:20
Wow, good work!
cha05e90
Posted on: 2017/10/6 11:35
@stonecracker
OMG - thanks, very comprehensive!
angelheart
Posted on: 2017/10/6 11:59
Great news. One step closer to getting Roslyn compiled.
Overview ... rview&referringTitle=Home
stonecracker
Posted on: 2017/10/7 19:35
FYI, anyone who's read/used this guide. I've now made a number of minor bug corrections/typo corrections. If you tried/used the original version, look at the very first editorial comment to see the summary of changes.
stonecracker
Posted on: 2017/10/21 10:06
And, again, I've made a few changes, this time to the instructions on how to add multiple Include and Library search paths to your makefile.
AmigaBlitter
Posted on: 2017/10/21 14:02
@stonecracker
Thank you for these instructions too.
Btw, the last time I used Visual Studio, the software was a relatively small package (1 DVD that I still have).
Now the entire development system has reached 150 GB!!!
EDanaII
Posted on: 2018/9/16 13:53
@ stonecracker:
I've been looking to set up both VS and Eclipse again to do some more Amiga tinkering, so I hopped online to find my old articles on how to set them up, only to find that AmigaCoding.de is now defunct. :/ Instead, I find this article. Nice job, Stonecracker and thank you for the acknowledgement, but, as you've already pointed out, that post was a repost of someone else's work. Still, I'm glad it helped you out.
Although, I am surprised you had trouble with Eclipse. My attempt to get Eclipse running with zerohero's cross compiler toolchain, in the end, wasn't too difficult at all. I don't know too much about cyg-adtools, but I'm surprised it would be any more difficult. You clearly saw the article about VS on AmigaCoding, so I'm assuming you saw the other article there too. In case you didn't, here's a link through the Wayback Machine to that article: ... 62932c0ded8666&topic=17.0
Personally, I enjoyed using Eclipse for my coding. Not as robust an IDE when compared to VS, but good enough and I didn't have to mess with make files at all, which made me a much happier programmer.
All that said, I don't know much about cyg-adtools, but I'm assuming they're PPC only? I prefer coding for 68k AmigaOS, myself, so... still, nice guide. May prove very useful when I get myself set up again. :)
retro
Posted on: 2018/10/17 16:03
So will this mean that OS 4.1 will support C# (C sharp)???
stonecracker
Posted on: 2018/10/18 16:37
Great question.
TL;DR: If the Mono framework gets ported to OS4 on PPC, then, yes, the stack of methods/techniques/approaches/software that I show in this and my related posts would almost certainly work -- verbatim -- to target/cross-compile/port .Net/C# code to OS4, with or without Visual Studio.
Long-ish answer:
Visual Studio isn't the issue. .Net/C# support is.
Visual Studio is just a (very good) IDE with built-in .Net/C# support because MS decided they wanted to support C# out-of-the-box. VS itself, though, can support almost any language, native compiler, cross-compiler, remote-cross-compiler-on-Linux, and/or framework imaginable.
Now, .Net/C# support for OS4 is non-existent at the moment, and probably won't ever come directly via MS as they will almost certainly never support OS4.
However, the amazingly feature-rich/complete/open-source .Net clone, aka the Mono Framework (heavily supported by MS), with its fantastic .Net JIT engine and C# compiler, is definitely a strong possibility.
I say this because Mono/C# has already been ported to OSX/x86, Linux/x86, Linux/ARM64, plus a number of little-endian PPC POSIX environments, see ... ported-platforms/powerpc/
Given the strong support for Mono on both big- and little-endian PPC OS's (including Linux), it's probably quite doable as a port to OS4.
If that happens, then, yes, we can not only have C# code just cross-compile to OS4 but we can have an ocean of .Net code run on OS4. We would even be able to natively compile C#/.Net code on OS4.
All of which would be independent of the IDE involved.
But, lastly, the question of Visual Studio support of .Net/C# on OS4 would be, yes, it would work as just another Mono target like Linux or OSX on PPC or x86.
The catch: all the things needed (C11/C++14/full POSIX multi-threading/etc) to get all modern software (browsers/office suites/etc.) ported to OS4 are also needed to get Mono/.Net/C# working on OS4.
Of course, if we had the entire oceans of GNU C11/C++14-reliant software, and .Net/Mono software, available for OS4, all easily cross-compiled via the latest Visual Studio or other Windows IDE .... just, wow.
stonecracker
Posted on: 10/26 20:56
At AmiWest 2019 (can't believe it's my 3rd!!! time here now; the 1st time, I was editing my cross-compiling OS4 tomes), I have heard tell that there may be something broken with these instructions.
Can anyone inform me of any breakage? I'll see if I can make updates/corrections and update my original post if needed.
Access control in Azure Data Lake Storage Gen1
Azure Data Lake Storage Gen1 implements an access control model that derives from HDFS, which in turn derives from the POSIX access control model. This article summarizes the basics of the access control model for Data Lake Storage Gen1.
Access control lists on files and folders
There are two kinds of access control lists (ACLs), Access ACLs and Default ACLs.
Access ACLs: These control access to an object. Files and folders both have Access ACLs.
Default ACLs: A "template" of ACLs associated with a folder that determine the Access ACLs for any child items that are created under that folder. Files do not have Default ACLs.
Both Access ACLs and Default ACLs have the same structure.
Note
Changing the Default ACL on a parent does not affect the Access ACL or Default ACL of child items that already exist.
Permissions
The permissions on a filesystem object are Read, Write, and Execute, and they apply to files and folders as follows:
- Read (R): For a file, grants the ability to read the file's contents. For a folder, it is used (together with Execute) to list the folder's contents.
- Write (W): For a file, grants the ability to write or append to the file. For a folder, it is used (together with Execute) to create, rename, or delete child items.
- Execute (X): Has no meaning in the context of a file in Data Lake Storage Gen1. For a folder, it is required to traverse the folder's child items.
Short forms for permissions
RWX is used to indicate Read + Write + Execute. A more condensed numeric form exists in which Read=4, Write=2, and Execute=1, and their sum represents the permissions. For example, 7 (4+2+1) means RWX, 5 (4+1) means R-X, and 4 means R-- (read only).
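As a purely illustrative helper (this is not part of any Azure SDK, just a stand-alone sketch of the arithmetic above), the numeric short form can be computed and decoded like this:

# Illustrative only: numeric permission short form (Read=4, Write=2, Execute=1).
def to_numeric(read, write, execute):
    return (4 if read else 0) + (2 if write else 0) + (1 if execute else 0)

def to_symbolic(digit):
    return ("r" if digit & 4 else "-") + ("w" if digit & 2 else "-") + ("x" if digit & 1 else "-")

print(to_numeric(True, True, True))  # 7  -> rwx
print(to_symbolic(5))                # r-x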
Permissions do not inherit
In the POSIX-style model that's used by Data Lake Storage Gen1, permissions for an item are stored on the item itself. In other words, permissions for an item cannot be inherited from the parent items.
Common scenarios related to permissions
Following are some common scenarios to help you understand which permissions are needed to perform certain operations on a Data Lake Storage Gen1 account. For example, to delete a file, the caller needs Execute permissions on every folder along the path (to traverse it) and Write + Execute permissions on the file's immediate parent folder.
Note
Write permissions on the file itself are not required to delete it, as long as the previous two conditions (traversal rights along the path, and Write + Execute on the parent folder) are true.
Users and identities
Every file and folder has distinct permissions for these identities:
- The owning user
- The owning group
- Named users
- Named groups
- All other users
The identities of users and groups are Azure Active Directory (Azure AD) identities. So unless otherwise noted, a "user," in the context of Data Lake Storage Gen1, can either mean an Azure AD user or an Azure AD security group.
The super-user
A super-user has the most rights of all the users in the Data Lake Storage Gen1 account. A super-user:
- Has RWX Permissions to all files and folders.
- Can change the permissions on any file or folder.
- Can change the owning user or owning group of any file or folder.
All users that are part of the Owners role for a Data Lake Storage Gen1 account are automatically super-users.
The owning user
The user who created the item is automatically the owning user of the item. An owning user can:
- Change the permissions of a file that is owned.
- Change the owning group of a file that is owned, as long as the owning user is also a member of the target group.
Note
The owning user cannot change the owning user of a file or folder. Only super-users can change the owning user of a file or folder.
The owning group
Background
In the POSIX ACLs, every user is associated with a "primary group." For example, user "alice" might belong to the "finance" group. Alice might also belong to multiple groups, but one group is always designated as her primary group. In POSIX, when Alice creates a file, the owning group of that file is set to her primary group, which in this case is "finance." The owning group otherwise behaves similarly to assigned permissions for other users/groups.
Because there is no “primary group” associated with users in Data Lake Storage Gen1, the owning group is assigned as described below.
Assigning the owning group for a new file or folder
- Case 1: The root folder "/". This folder is created when a Data Lake Storage Gen1 account is created. In this case, the owning group is set to an all-zero GUID. This value does not permit any access. It is a placeholder until such time a group is assigned.
- Case 2 (Every other case): When a new item is created, the owning group is copied from the parent folder.
Changing the owning group
The owning group can be changed by:
- Any super-users.
- The owning user, if the owning user is also a member of the target group.
Note
The owning group cannot change the ACLs of a file or folder.
For accounts created on or before September 2018, in the case of the root folder (Case 1 above), the owning group was set to the user who created the account. A single user account is not valid for providing permissions via the owning group, so no permissions are granted by this default setting. You can assign this permission to a valid user group.
Access check algorithm
The following pseudocode represents the access check algorithm for Data Lake Storage Gen1 accounts.
def access_check( user, desired_perms, path ) :
  # access_check returns true if user has the desired permissions on the path, false otherwise
  # user is the identity that wants to perform an operation on path
  # desired_perms is a simple integer with values from 0 to 7 ( R=4, W=2, X=1 ). User desires these permissions
  # path is the file or folder
  # Note: the "sticky bit" is not illustrated in this algorithm

  # Handle super users.
  if (is_superuser(user)) :
    return True

  # Handle the owning user. Note that mask IS NOT used.
  entry = get_acl_entry( path, OWNER )
  if (user == entry.identity) :
    return ( (desired_perms & entry.permissions) == desired_perms )

  # Handle the named users. Note that mask IS used.
  entries = get_acl_entries( path, NAMED_USER )
  for entry in entries:
    if (user == entry.identity) :
      mask = get_mask( path )
      return ( (desired_perms & entry.permissions & mask) == desired_perms )

  # Handle named groups and owning group. Note that mask IS used.
  member_count = 0
  perms = 0
  entries = get_acl_entries( path, NAMED_GROUP | OWNING_GROUP )
  for entry in entries:
    if (user_is_member_of_group(user, entry.identity)) :
      member_count += 1
      perms |= entry.permissions
  if (member_count > 0) :
    mask = get_mask( path )
    return ( (desired_perms & perms & mask) == desired_perms )

  # Handle other.
  perms = get_perms_for_other(path)
  mask = get_mask( path )
  return ( (desired_perms & perms & mask) == desired_perms )
The mask
As illustrated in the Access Check Algorithm, the mask limits access for named users, the owning group, and named groups.
Note
For a new Data Lake Storage Gen1 account, the mask for the Access ACL of the root folder ("/") defaults to RWX.
The sticky bit
The sticky bit is a more advanced feature of a POSIX filesystem. In the context of Data Lake Storage Gen1, it is unlikely that the sticky bit will be needed. In summary, if the sticky bit is enabled on a folder, a child item can only be deleted or renamed by the child item's owning user.
The sticky bit is not shown in the Azure portal.
Default permissions on new files and folders
When a new file or folder is created under an existing folder, the Default ACL on the parent folder determines:
- A child folder’s Default ACL and Access ACL.
- A child file's Access ACL (files do not have a Default ACL).
umask
When creating a file or folder, umask is used to modify how the default ACLs are set on the child item. umask is a 9-bit value on parent folders that contains an RWX value for owning user, owning group, and other.
The umask for Azure Data Lake Storage Gen1 is a constant value set to 007. This value translates to an umask of 0 (---) for the owning user, 0 (---) for the owning group, and 7 (RWX) for other; in other words, the RWX bits for "other" are always masked out on new child items.
The umask value used by Azure Data Lake Storage Gen1 effectively means that the value for other is never transmitted by default on new children - regardless of what the Default ACL indicates.
The following pseudocode shows how the umask is applied when creating the ACLs for a child item.
def set_default_acls_for_new_child(parent, child):
  child.acls = []
  for entry in parent.acls :
    new_entry = None
    if (entry.type == OWNING_USER) :
      new_entry = entry.clone(perms = entry.perms & (~umask.owning_user))
    elif (entry.type == OWNING_GROUP) :
      new_entry = entry.clone(perms = entry.perms & (~umask.owning_group))
    elif (entry.type == OTHER) :
      new_entry = entry.clone(perms = entry.perms & (~umask.other))
    else :
      new_entry = entry.clone(perms = entry.perms)
    child.acls.append( new_entry )
Common questions about ACLs in Data Lake Storage Gen1
Do I have to enable support for ACLs?
No. Access control via ACLs is always on for a Data Lake Storage Gen1 account.
Which permissions are required to recursively delete a folder and its contents?
- The parent folder must have Write + Execute permissions.
- The folder to be deleted, and every folder within it, requires Read + Write + Execute permissions.
Note
You do not need Write permissions to delete files in folders. Also, the root folder "/" can never be deleted.
Who is the owner of a file or folder?
The creator of a file or folder becomes the owner.
Which group is set as the owning group of a file or folder at creation?
The owning group is copied from the owning group of the parent folder under which the new file or folder is created.
I am the owning user of a file but I don’t have the RWX permissions I need. What do I do?
The owning user can change the permissions of the file to give themselves any RWX permissions they need.
When I look at ACLs in the Azure portal I see user names but through APIs, I see GUIDs, why is that?
Entries in the ACLs are stored as GUIDs that correspond to users in Azure AD. The APIs return the GUIDs as is. The Azure portal tries to make ACLs easier to use by translating the GUIDs into friendly names when possible.
Why do I sometimes see GUIDs in the ACLs when I'm using the Azure portal?
A GUID is shown when the user doesn't exist in Azure AD anymore. Usually this happens when the user has left the company or if their account has been deleted in Azure AD.
Does Data Lake Storage Gen1 support inheritance of ACLs?
No, but Default ACLs can be used to set ACLs for child files and folders newly created under the parent folder.
Where can I learn more about POSIX access control model?
- POSIX Access Control Lists on Linux
- HDFS permission guide
- POSIX FAQ
- POSIX 1003.1 2008
- POSIX 1003.1 2013
- POSIX 1003.1 2016
- POSIX ACL on Ubuntu
- ACL using access control lists on Linux
"Brian Elmegaard" <be at et.dtu.dk> wrote in message news:3A40813B.C04D21C at et.dtu.dk...

> I am not a computer scientist, just an engineer knowing how to program in
> Fortran, Pascal,... So after reading some of the python material on the web, I
> decided to give it a try and am now using the language some, finding it fun and
> easy.

I'm a statistician who discovered the same thing about four years ago.

> On usenet I have now learned from skilled scientists,

By their own evaluation? Think instead 'CS theologians' or even 'religious fanatics'.

> that Python has some deficiencies regarding scope rules, and I am not capable
> of telling them why it has not (or that it has).

Deficiency is in the eye of the beholder. This religious discussion has been going on for at least as long as I have been listening; it probably has since the beginning of Python, and will probably continue until Python dies. Consider not getting too involved.

> In Aaron Watters' 'The wwww of python' I have read that Python uses lexical
> scoping with a convenient modularity. But other people tell me it does not.
> Some say it uses dynamic scoping, some say it uses its own special
> 'local-global-builtin' scope. What is right?

I have discovered that while good computer scientists may develop a defensible terminology that they each use consistently, they do not always agree. This seems to be one of those areas where language is not settled.

> The above mainly is a theoretical question.

A question like 'what is the optimal asymptotic efficiency of a sorting algorithm based on comparisons' has an answer: n log n. The scope discussion does not seem to.

> A more practical example which I agree seems a bit odd is:
> >>> x=24
> >>> def foo():
> ...     y=2
> ...     print x+y
> ...
> >>> def bar():
> ...     x=11
> ...     foo()
> ...
> >>> bar()
> 26

When a function encounters a name that is not in its local scope, where should it look to find a value to substitute (given that there are no explicit pointers)? There are three general answers: nowhere (raise an exception or abort); somewhere in the definition environment; or somewhere in the usage environment. Python's current answer is in the namespace of the defining module (and then in the builtin namespace attached to that module).

The only things I find 'odd' (and definitely confusing at first) are the use of 'global' for the module namespace (and I suppose there is some history here) and the occasional claim that there are two rather than three possible scopes.

For odd behaviour, consider the following possibility:

plane_geometry.py:

_dimension = 2

def SegSectTri2d(line_segment, triangle):
    "does line_segment intersect triangle in xy-plane"
    # uses _dimension and depends on it being 2
    ...

space_geometry.py:

import plane_geometry

_dimension = 3

def SegSectTri3d(line_segment, triangle):
    "does line_segment intersect triangle in xyz-space"
    # uses _dimension and depends on it being 3
    # uses plane_geometry.SegSectTri2d() for part of its calculation

Now, I would be disappointed if the plane geometry functions did not work when called from space geometry functions because the two modules happened to use the same spelling for the private dimension constant.

Terry J. Reedy
There are two basic variable types: cell and String.
- Many of the language design decisions were made by ITB CompuPhase, the authors of Pawn (from which SourcePawn derives). It is designed for low-level embedded devices and is thus very small and very fast.
Assignment
Variables can be re-assigned data after they are created. For example:
new a, Float:b, bool:c; a = 5; b = 5.0; c = true;
Arrays
An array is a sequence of data in a sequential list. Arrays are useful for storing multiple pieces of data in one variable, and often greatly simplify many tasks.
Usage
Using an array is just like using a normal variable. The only difference is the array must be indexed. Indexing an array means choosing the element which you wish to use.
For example, indexing lets you pick which element of the array to read or write:
new numbers[3];
numbers[0] = 5;
Usage
Strings are declared almost equivalently to arrays. For example:
new String:message[] = "Hello!"; new String:clams[6] = "Clams";
These are equivalent to doing:
new String:message[7], String:clams[6];
new String:text[] = "Crab";
new clams[] = "Clams"; //Invalid, needs String: type
Natives
Natives are builtin functions provided by SourceMod. You can call them as if they were a normal function. For example, SourceMod has the following function:
native FloatRound(Float:num);
It can be called like so:
new num = FloatRound(5.2); //Results in num = 5
new a = 5 * 6; new b = a * 3; //Evaluates to 90
new a = 5;
new b = a++; // b = 5, a = 6 (1)
new c = ++a; // a = 7, c = 7 (2)
In (1) b is assigned a's old value before it is incremented to 6, but in (2) c is assigned a's new value after the increment.
SumArray(const array[], count) { new. */ SearchInArray(const array[], count, value) { new index = -1; for (new:
SumEvenNumbers(const array[], count) { new sum; for (new:
new A, B, C; Function1() { new B; Function2(); } Function2() { new:
Function1() { new:
Function1(size) { new new.
decl is an alternative way to declare a variable; unlike new, it does not initialize the variable.
Notes
This example is NOT as efficient as a decl:
new String:blah[512] = "a";
Even though the string is only one character, the new operator guarantees the rest of the array will be zeroed as well.
Also note, it is valid to explicitly initialize a decl ONLY with strings:
decl String:blah[512] = "a";
However, any other tag will fail to compile, because the purpose of decl is to avoid any initialization:
decl Float:blah[512] = {1.0}; // This will not compile.
MyFunction(inc) { static:
MyFunction(inc)
{
    if (inc > 0)
    {
        static counter;
        return (counter += inc);
    }
    return -1;
}
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of Resolved status.
Section: 20.2.2 [utility.swap] Status: Resolved Submitter: Orson Peters Opened: 2015-11-01 Last modified: 2016-03-09
Priority: 2
View all other issues in [utility.swap].
View all issues with Resolved status.
Discussion:
The noexcept specification for the std::swap overload for arrays has the effect that all multidimensional arrays — even those of built-in types — would be considered not nothrow-swappable, as described in a Stack Overflow question. Consider the following example code:
#include <utility>
#include <iostream>

int main()
{
  int x[2][3];
  int y[2][3];
  using std::swap;
  std::cout << noexcept(swap(x, y)) << "\n";
}
Both clang 3.8.0 and gcc 5.2.0 print 0. The reason for this unexpected result seems to be a consequence of both core wording rules (6.4.2 [basic.scope.pdecl] says that "The point of declaration for a name is immediately after its complete declarator (Clause 8) and before its initializer (if any)" and the exception specification is part of the declarator) and the fact that the exception-specification of the std::swap overload for arrays uses an expression and not a type trait. At the point where the expression is evaluated, only the non-array std::swap overload is in scope, whose noexcept specification evaluates to false since arrays are neither move-constructible nor move-assignable. Daniel: The problem described here is another example of the currently broken swap exception specifications in the Standard library, as pointed out by LWG 2456. The paper N4511 describes a resolution that would address this problem. If the array swap overload were instead declared as follows,
template <class T, size_t N> void swap(T (&a)[N], T (&b)[N]) noexcept(is_nothrow_swappable<T>::value);
the expected outcome is obtained. Revision 2 (P0185R0) of the above-mentioned paper will be available for the mid-February 2016 mailing.
[2016-03-06, Daniel comments]
With the acceptance of revision 3 P0185R1 during the Jacksonville meeting, this issue should be closed as "resolved": The expected program output is now 1. The current gcc 6.0 trunk has already implemented the relevant parts of P0185R1.
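As a quick sanity check (my own sketch, not part of the issue; it assumes a C++17 standard library that implements P0185R1), both the new trait and the original noexcept test should now report true:

#include <iostream>
#include <type_traits>
#include <utility>

int main()
{
  int x[2][3];
  int y[2][3];
  using std::swap;
  // Both lines should print 1 on a post-P0185R1 (C++17) implementation.
  std::cout << std::is_nothrow_swappable<int[2][3]>::value << "\n";
  std::cout << noexcept(swap(x, y)) << "\n";
}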
Proposed resolution:
You'll need to install a library to make the Arduino IDE support the module. This library includes the driver for the module's MMA8491 sensor; the Turta_AccelTilt_Module library is responsible for reading the accelerometer and tilt data.
To use the library on Arduino IDE, add the following #include statement to the top of your sketch.
#include <Turta_AccelTilt_Module.h>
Then, create an instance of the Turta_AccelTilt_Module class.
Turta_AccelTilt_Module accel;
Now you're ready to access the library by calling the accel instance.
To initialize the module, call the begin method.
begin()
This method configures the I2C bus and GPIO pins to read sensor data.
Returns the G value of the X axis.
double readXAxis()
Parameters
None.
Returns
Double: G Value of the X axis.
Returns the G value of the Y axis.
double readYAxis()
Parameters
None.
Returns
Double: G Value of the Y axis.
Returns the G value of the Z axis.
double readZAxis()
Parameters
None.
Returns
Double: G Value of the Z axis.
Returns the values of all axes in a single shot.
void readXYZAxis(double x, double y, double z)
Parameters
Double: x out
Double: y out
Double: z out
Returns
None
Returns the tilt state of all axes.
void readTiltState(bool xTilt, bool yTilt, bool zTilt)
Parameters
Bool: xTilt out
Bool: yTilt out
Bool: zTilt out
Returns
None
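Putting the calls above together, here is a minimal sketch (my own example, not one of the library's bundled examples; the serial baud rate and delay are assumptions):

#include <Turta_AccelTilt_Module.h>

// Create the sensor instance.
Turta_AccelTilt_Module accel;

void setup() {
  // Assumed baud rate; adjust as needed.
  Serial.begin(115200);
  // Configure the I2C bus and GPIO pins.
  accel.begin();
}

void loop() {
  // Read each axis individually (see the troubleshooting note below
  // regarding the single-shot readXYZAxis function).
  double x = accel.readXAxis();
  double y = accel.readYAxis();
  double z = accel.readZAxis();

  Serial.print("X: "); Serial.print(x);
  Serial.print(" Y: "); Serial.print(y);
  Serial.print(" Z: "); Serial.println(z);

  delay(100);
}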
You can open the example from Arduino IDE > File > Examples > Examples from Custom Libraries > Turta Accel & Tilt Module. There are two examples of this sensor.
If you're experiencing difficulties while working with your device, please try the following steps.
Problem: When using the single-shot reading function, the Y axis returns 0 G.
Cause: There is a software communication error on the I2C bus.
Solution: This is a known issue, and we're working to fix this bug. Until then, please use the single-axis reading functions. It's not a malfunction.
Linux
2017-09-15
NAME
getunwind - copy the unwind data to caller’s buffer
SYNOPSIS
#include <syscall.h> #include <linux/unwind.h>
long getunwind(void *buf, size_t buf_size);
Note: There is no glibc wrapper for this system call; see NOTES.
DESCRIPTION
Note: this function is obsolete.
COLOPHON
This page is part of release 5.00 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.