text
stringlengths 20
1.01M
| url
stringlengths 14
1.25k
| dump
stringlengths 9
15
⌀ | lang
stringclasses 4
values | source
stringclasses 4
values |
|---|---|---|---|---|
My model is onnx format generated by pytorch and I try to convert it to bin and xml, but it show the error "output array is read-only".
I see that other people on the Internet having this problem is about numpy's version, however, it seems not work for me.
I degrade the version numpy from 1.16.2 to 1.15.0, still doesn't work.
Any suggestion?
'''
File "C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\mo\main.py", line 325, in main
return driver(argv)
File "C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\mo\main.py", line 302, in driver
mean_scale_values=mean_scale)
File "C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\mo\pipeline\onnx.py", line 165, in driver
fuse_linear_ops(graph)
File "C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\mo\middle\passes\fusing\fuse_linear_ops.py", line 258, in fuse_linear_ops
is_fused = _fuse_add(graph, node, fuse_nodes)
File "C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\mo\middle\passes\fusing\fuse_linear_ops.py", line 212, in _fuse_add
fuse_node.in_node(2).value += value
ValueError: output array is read-only
'''
Link Copied
Dear Anthony,
This is indeed strange.
I have messaged you so that you can send me your onnx model privately.
Thanks for using OpenVino !
Shubha
Hi, Shubha R.:
Thank you for your help.
I have already emailed you to "idz.admin@intel.com" or this is just a Forums Notification, because I receive any message in my Intel account.
Do I miss something?
Sorry for my late reply!
Dear Anthony I did not receive anything from you. I have once again sent you a PM message. Just kindly reply to it and attach your model as a zip file.
Thanks for using OpenVino !
Shubha
Dearest Anthony,
Thank you for sending me your zipped up model over PM.
I've got good news and bad news. The bad news is that I reproduced your problem on OpenVino version computer_vision_sdk_2018.5.456 (commonly known as R5.1), so you really did see a bug ! The good news is that it's been fixed in the latest OpenVino Release which dropped today (2019 R1).
Thanks for using OpenVino !
Shubha
Hi Shubha R.,
Could you explain in more detail about this bug? I am just curious.
Because I comment out Line 224 to 260 in use_linear_ops.py and it work.
Thank you!
Dear Anthony,
So I performed a "diff" between the 5.1 version of fuse_linear_ops.py and the latest 2019 R1 version. What I noticed is that there is a slight redesign of the _fuse_mul, _fuse_add and fuse_linear_ops methods:
Version 2019 R1 method signature:
def _fuse_mul(graph: Graph, node: Node, fuse_nodes: list, backward: bool = True):
Version 5.1 method signature:
def _fuse_mul(graph: nx.MultiDiGraph, node: Node, fuse_nodes: list, backward: bool = True):
The main difference is the first argument. So in all three methods 2019 R1 uses Graph rather than networkx.MultiDiGraph.
Looking through this file there are other minor changes also. I encourage you to do a "diff" yourself and see what the changes in this file are, after all OpenVino is open source !
Thanks for using OpenVino !
Shubha
|
https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/output-array-is-read-only/td-p/1175728
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
This.
OnTriggerExit occurs on the FixedUpdate after the Colliders have stopped touching. The Colliders involved are not guaranteed to be at the point of initial separation.
See Also: Collider.OnTriggerEnter which contains a useful example.
using UnityEngine; using System.Collections;
public class ExampleClass : MonoBehaviour { void OnTriggerExit(Collider other) { // Destroy everything that leaves the trigger Destroy(other.gameObject); } }
|
https://docs.unity3d.com/kr/2020.1/ScriptReference/Collider.OnTriggerExit.html
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
# If we're running on Colab, install empiricaldist # import sys IN_COLAB = 'google.colab' in sys.modules if IN_COLAB: !pip install empiricaldist
import numpy as np import matplotlib.pyplot as plt from empiricaldist import Pmf
According to this press release
The first set of results from our Phase 3 COVID-19 vaccine trial provides the initial evidence of our vaccine’s ability to prevent COVID-19.
The press release includes the following details about the results
The ... trial ... has enrolled 43,538 participants to date, 38,955 of whom have received a second dose of the vaccine candidate as of November 8, 2020.
... the evaluable case count reached 94 and ... the case split between vaccinated individuals and those who received the placebo indicates a vaccine efficacy rate above 90%, at 7 days after the second dose.
The press release provides only a point estimate for the effectiveness of the vaccine, and it does not provide enough information to make a better estimate.
But with some guesswork, we can compute the posterior distribution of effectiveness, and use it to estimate the lower bound of the credible interval.
Since we don't know how many people are in each branch of the trial, I'll assume that it is approximately equal.
n_control = 38955 / 2 n_treatment = 38955 / 2
We know there were a total of 94 infections in the two branches. Since the estimated effectiveness is 90%, I'll guess that there were 86 infections in the control branch and 8 in the treatment branch.
We can make a beta distribution that represents the posterior distribution of the infection rate in the control branch, starting with a uniform distribution.
from scipy.stats import beta dist_control = beta(86+1, n_control+1) dist_control.mean() * 100
0.4446602437964785
And here's the posterior distribution for the treatment branch.
dist_treatment = beta(8+1, n_treatment+1) dist_treatment.mean() * 100
0.046183450930083386
The risk ratio is about 10:1, which is consistent with 90% effectiveness.
To compute the distribution of risk ratios, I'll make a discrete approximation to the two posterior distributions, using the
Pmf object from
empiricaldist:
def make_beta(dist): """PMF to approximate a beta distribution. dist: `beta` object returns: Pmf """ qs = np.linspace(8e-6, 0.008, 1000) ps = dist.pdf(qs) pmf = Pmf(ps, qs) pmf.normalize() return pmf
Here are the
Pmf objects:
pmf_control = make_beta(dist_control) pmf_treatment = make_beta(dist_treatment)
And here's what they look like:
pmf_control.plot(label='Control') pmf_treatment.plot(label='Treatment') plt.xlabel('Infection rate') plt.ylabel('PMF') plt.legend();
Again, it looks like the infection rate is about 10 times higher in the control group.
We can use
div_dist to compute the risk ratio.
pmf_ratio = pmf_treatment.div_dist(pmf_control)
Here's the CDF of the risk ratio. I cut it off at 1 because higher values have very low probabilities; that is, we are pretty sure the treatment is effective.
pmf_ratio.make_cdf().plot() plt.xlim([0, 1]) plt.xlabel('Risk ratio') plt.ylabel('CDF');
The median of the risk ratio is about 0.10. Again, that's consistent with an effectiveness of 90%.
pmf_ratio.median()
array(0.10040984)
To compute the distribution of effectiveness, we have to compute the distribution of
1-RR, where
RR is the risk ratio. We can do that with
empiricaldist by creating a deterministic
Pmf with the quantity
1 and using
sub_dist to subtract two
Pmfs.
effectiveness = Pmf.from_seq(1).sub_dist(pmf_ratio)
Here's the result.
effectiveness.make_cdf().plot() plt.xlim([0, 1]) plt.xlabel('Effectiveness') plt.ylabel('CDF');
The posterior mean is about 89%.
effectiveness.mean()
0.8949353341973734
And the 95% credible interval is between 81% and 95%.
effectiveness.credible_interval(0.95)
array([0.8099631 , 0.95352564])
If my guesses about the data are close enough, and the modeling decisions are good enough, it is unlikely that the effectiveness of the vaccine is less than 80%.
|
https://nbviewer.jupyter.org/github/AllenDowney/ThinkBayes2/blob/master/examples/vaccine.ipynb
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Mixcloud API wrapper for Python and Async IO
Project description
Mixcloud API wrapper for Python and Async IO
aiomixcloud is a wrapper library for the HTTP API of Mixcloud. It supports asynchronous operation via asyncio and specifically the aiohttp framework. aiomixcloud tries to be abstract and independent of the API’s transient structure, meaning it is not tied to specific JSON fields and resource types. That is, when the API changes or expands, the library should be ready to handle it.
Installation
The following Python versions are supported:
- CPython: 3.6, 3.7, 3.8, 3.9
- PyPy: 3.5
pip install aiomixcloud
Usage
You can start using aiomixcloud as simply as:
from aiomixcloud import Mixcloud # Inside your coroutine: async with Mixcloud() as mixcloud: cloudcast = await mixcloud.get('bob/cool-mix') # Data is available both as attributes and items cloudcast.user.name cloudcast['pictures']['large'] # Iterate over associated resources for comment in await cloudcast.comments(): comment.url
A variety of possibilities is enabled during authorized usage:
# Inside your coroutine: async with Mixcloud(access_token=access_token) as mixcloud: # Follow a user user = await mixcloud.get('alice') await user.follow() # Upload a cloudcast await mixcloud.upload('myshow.mp3', 'My Show', picture='myshow.jpg')
For more details see the usage page of the documentation.
License
Distributed under the MIT License.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/aiomixcloud/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Getting Started with Kubernetes | Further Analysis of Linux Containers
By Tang Huamin (Huamin), Container Platform Technical Expert at Alibaba Cloud
The Linux container is a lightweight virtualization technology, which isolates and restricts process resources in kernel sharing scenarios based on namespace and cgroup technology. This article takes Docker as an example to provide a basic description of container images and container engines.
Containers
A container is a lightweight virtualization technology. In contrast with virtual machines (VMs), it does not contain the hypervisor layer. The following figure shows the startup process of a container.
At the bottom layer, the disk stores container images. The container engine at the upper layer can be Docker or another container engine. The container engine sends a request, such as a container creation request, to run the container image on the disk as a process on the host.
For containers, the resources used by the process must be isolated and restricted. This is implemented by the cgroup and namespace technologies in the Linux kernel. This article uses Docker as an example to describe resource isolation and container images.
1. Resource Isolation and Restrictions
Namespace
Namespace technology is used for resource isolation. Seven namespaces are available in the Linux kernel, and the first six are used in Docker. The cgroup namespace is not used in Docker but is implemented in runC.
The following describes the namespaces in sequence:
- The mount namespace is the view of the file system that is visible to a container. It is a file system provided by the container image, which means that other files on the host are invisible to it. You need to run
-v parameter boundto make some directories and files on the host visible in the container.
- The uts namespace isolates the host name and domain.
- The pid namespace ensures that the container’s init process is started by process 1.
- The network namespace is available for all network modes except the host network mode used by containers.
- The user namespace maps the user UID and GID in the container and on the host. This namespace is seldom used.
- The IPC namespace controls processes and communication, such as semaphores.
- The cgroup namespace can be enabled or disabled, as shown in the right part of the preceding figure. When the cgroup namespace is used, the cgroup view is presented as a root for a container, just like that for the processes on the host. The cgroup namespace also makes the use of the cgroup in the container more secure.
The following describes how to create a namespace in a container by using unshare.
The upper part of the figure is an example of using unshare, while the lower part is a pid namespace that is created by the unshare command. As shown in the figure, the bash process is in a new pid namespace, and the ps result indicates that the PID of the bash process is 1, indicating that it is a new pid namespace.
cgroup
Two Types of cgroup Drivers
Cgroup technology is used for resource restriction. Both systemd drivers and cgroupfs drivers are available for Docker containers.
- cgroupfs is easier to understand. For example, if you want to know the memory limit and CPU share, you can directly write the PID to a corresponding cgroup file and then write the resources to be restricted to the corresponding memory cgroup file and CPU cgroup file.
- systemd is a cgroup driver, which can manage cgroups. Therefore, if you use systemd as the cgroup driver, you need to complete all cgroup write operations through the systemd interface but cannot manually modify the cgroup file.
Common cgroups for Containers
The following describes the common cgroups for containers. The Linux kernel provides many cgroups. Only the following six types are used for Docker containers:
- The CPU cgroup controls the CPU utilization by setting the CPU share and CPU set.
- The memory cgroup controls the memory usage of the process.
- The device cgroup controls the devices that are visible in the container.
- The freezer cgroup improves security, just like the device cgroup. When you stop a container, the freezer cgroup writes all the current processes into the cgroup and freezes them to prevent the fork operation of any processes. In this way, it prevents the process from escaping to the host and ensures security.
- The blkio cgroup limits the input/output operations per second (IOPS) and bytes per second (BPS) of disks used by containers. If the cgroup is not unique, the blkio cgroup only restricts the synchronization I/O but not the Docker I/O.
- The pid cgroup limits the maximum number of processes in a container.
Uncommon cgroups for Containers
Some cgroups are not used for Docker containers. cgroups are divided into common and uncommon cgroups. This distinction only applies to Docker because all cgroups except for rdma are supported by runC. However, they are not enabled for Docker. Therefore, Docker does not support the cgroups in the following figure.
2. Container Images
Docker Images
This section uses a Docker image as an example to describe the container image structure.
Docker images are based on the union file system. The union file system allows files to be stored at different layers. However, all these files are visible on a unified view.
In the preceding figure, the right part is a container storage structure obtained from the official Docker website.
This figure shows that the Docker storage is a hierarchical structure based on the union file system. Each layer consists of different files and can be reused by other images. When an image is run as a container, the top layer is the writable layer of the container. The writable layer of the container can be committed as a new layer of the image.
The bottom layer of the Docker image storage is based on different file systems. Therefore, its storage driver is customized for different file systems, such as AUFS, Btrfs, devicemapper, and overlay. Docker drives these file systems with graph drivers, which store images on disks.
Overlay
Storage Process
This section uses the overlay file system as an example to describe how Docker images are stored on disks.
The following figure shows how the overlay file system works.
- The lower layer is a read-only image layer.
- The upper layer is the container read and write layer, which adopts a copy-on-write mechanism. That is, a file is copied from the lower layer only when the file needs to be modified, and all the modify operations are performed on the replicas of the upper layer.
- The workdir layer works as an intermediate layer. The to-be-modified replica at the upper layer is modified at the workdir layer and then moved to the upper layer. This is how the overlay file system works.
- The mergedir layer is a unified view layer. You can see all the data of the upper and lower layers at the mergedir layer. Then, you can run the docker exec command to view a file system in the container, which is the mergedir layer.
File Operations
This section describes how to perform file operations in a container based on overlay storage.
File Operations
- Read: If the upper layer has no replicas, all data is read from the lower layer.
- Write: When a container is created, the upper layer is empty. A file is copied from the lower layer only when the file needs to be written.
- Delete: The delete operation does not affect the lower layer. Deleting a file actually means adding a mark to the file so that the file is not displayed. A file can be deleted through whiteout or by setting xattr
trusted.overlay.opaque= y.
When a container is created, the upper layer is empty. If you try to read data at this time, all the data is read from the lower layer.
As mentioned above, the overlay upper layer has a copy-on-write mechanism. When some files need to be modified, the overlay file system copies the files from the lower layer and modifies them.
There is no real delete operation in the overlay file system. Deleting a file actually means adding a mark to the file at the unified view layer so that the file is not displayed. Files can be deleted in two ways:
- whiteout
- directory deletion, which can be done by setting extended permissions for the directories and setting extended parameters.
Procedure
This section describes how to run the docker run command to start a busybox container and what the overlay mount point is.
The second figure shows the mount command used to view the mount point. The container rootfs mount point is of the overlay type, and includes the upper, lower, and workdir layers.
Next, let’s learn how to write new files into a container. Run the docker exec command to create a file. As shown in the preceding figure, diff is an upperdir of the new file. The content in the file in upperdir is also written by the docker exec command.
The mergedir directory contains the content in upperdir and lowerdir and the written data.
3. Container Engine
Containerd Architecture
This section describes the general architecture of containerd on a container engine based on Cloud Native Computing Foundation (CNCF). The following figure shows the containerd architecture.
As shown in the preceding figure, containerd provides two main functions.
One is runtime, which is container lifecycle management. The other is storage, which is image storage management. containerd pulls and stores images.
Horizontally, the containerd structure is divided into the following layers:
- The first layer includes gRPC and metrics. containerd provides services for the upper layer through the gRPC server. Metrics provides some cgroup metrics.
- At the lower layer, the left part is storage for container images. The metadata of images and containers, which is stored on a disk through bootfs. Tasks in the right part manages the container structure. Events send an event to the upper layer for certain operations on the container and the upper layer can subscribe to the event to monitor the container status changes.
- The underlying layer is Runtimes and can be divided by type, such as runC or security container.
shim v1/v2
This section describes the general structure of containerd at the Runtimes layer. The following figure is taken from the official kata website. The upper part is the source image, while some extended examples are added to the lower part. Let’s look at the architecture of containerd at the Runtimes layer.
The preceding figure shows a process from the upper layer to the Runtime layer from left to right.
A CRI Client is shown to the leftmost. Generally, Kubelet sends a CRI request to containerd. After receiving the request, containerd passes it through a containerd-shim that manages the container lifecycle and performs the following operations:
- Forwards I/O.
- Transmits signals.
The upper part of the figure shows the security container, which is a kata process. The lower part of the figure shows various shims. The following describes the architecture of a containerd-shim.
Initially, there is only one shim in containerd, which is enclosed in the blue box. The shims in all containers, such as kata, runC, and gVisor containers, are containerd-shims.
Containerd is extended for different types of runtimes through the shim-v2 interface. In other words, different shims can be customized for different runtimes through the shim-v2 interface. For example, the runC container can create a shim named shim-runc, the gVisor container can create a shim named shim-gvisor, and the kata container can create a shim named shim-kata. These shims can replace the containerd-shims in the blue boxes.
This has many advantages. For example, when shim-v1 is used, there are three components due to the limits of kata. However, when shim-v2 is used, the three components can be made into one shim-kata component.
containerd Architecture Details — Container Process Examples
This section uses two examples to describe how a container process works. The following two figures show the workflow of a container based on the containerd architecture.
Start Process
The following figure shows the start process.
The process consists of three parts:
- The container engine can be a Docker or another engine.
- containerd and containerd-shim are parts of the containerd architecture.
- The container is pulled by a runtime, or a container is created by a shim by running the runC command.
The numbers marked in the figure show the process by which containerd creates a container.
It first creates metadata and then sends a request to the task service to create a container. The request is sent to a shim through a series of components. containerd interacts with container-shim through gRPC. After containerd sends the creation request to container-shim, container-shim calls the runtime to create a container.
Exec Process
The following figure shows how to execute a container.
The exec process is similar to the start process. The numbers marked in the figure shows the steps by which containerd performs exec.
As shown in the preceding figure, the exec operation is also sent to containerd-shim. There is no essential difference between starting a container and executing a container.
The only difference is whether a namespace is created for the process running in the container.
- During exec, the process must be added to an existing namespace.
- During start, the namespace of the container process must be created.
Summary
I hope this article helped you better understand Linux containers. Let’s summarize what we have learned in this article:
- How to use namespaces for resource isolation and cgroups for resource restriction in containers.
- The container image storage based on the overlay file system.
- How the container engine works based on Docker and containerd.
|
https://alibaba-cloud.medium.com/getting-started-with-kubernetes-further-analysis-of-linux-containers-4f9b7d2dffde
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Docker + Flask | Dockerizing a Python API
Want to share your content on python-bloggers? click here.
Docker containers are one of the hottest trends in software development right now. Not only it makes it easier to create, deploy and run applications but by using containers you are confident that your application will run on any machine regardless of anything that may differ from yours that you created and tested the code.
In this tutorial, we will show you how you can dockerize easily a Flask API. We will use this Python Rest API Example. It’s a simple API that given an image URL it returns the dominant colors of the image.
We highly recommend creating a new python environment using Conda or pip so you can easily create your requirements.txt file that contains all the libraries that you are using in the project.
The Flask API that we will dockerize uses two .py files.
The color.py
import PIL from PIL import Image import requests from io import BytesIO import webcolors import pandas as pd import webcolors def closest_colour(requested_colour): min_colours = {} for key, name in webcolors.css3_hex_to_names.items(): r_c, g_c, b_c = webcolors.hex_to_rgb(key) rd = (r_c - requested_colour[0]) ** 2 gd = (g_c - requested_colour[1]) ** 2 bd = (b_c - requested_colour[2]) ** 2 min_colours[(rd + gd + bd)] = name return min_colours[min(min_colours.keys())] def top_colors(url, n=10): # read images from URL response = requests.get(url) img = Image.open(BytesIO(response.content)) # convert the image to rgb image = img.convert('RGB') # resize the image to 100 x 100 image = image.resize((100,100)) detected_colors =[] for x in range(image.width): for y in range(image.height): detected_colors.append(closest_colour(image.getpixel((x,y)))) Series_Colors = pd.Series(detected_colors) output=Series_Colors.value_counts()/len(Series_Colors) return(output.head(n).to_dict())
The main.py
from flask import Flask, jsonify, request app=Flask(__name__) #we are importing our function from the colors.py file from colors import top_colors @app.route("/",methods=['GET','POST']) def index(): if request.method=='GET': #getting the url argument url = request.args.get('url') result=top_colors(str(url)) return jsonify(result) else: return jsonify({'Error':"This is a GET API method"}) if __name__ == '__main__': app.run(debug=True,host='0.0.0.0', port=9007)
As we said before we have to create the requirements.txt file. We are using the pip freeze command after activating the projects environment.
pip freeze > requirements.txt
If you open the requirements.txt you should see listed all the required libraries of the project.
certifi==2020.6.20 chardet==3.0.4 click==7.1.2 Flask==1.1.2 idna==2.10 itsdangerous==1.1.0 Jinja2==2.11.2 jsonify==0.5 MarkupSafe==1.1.1 numpy==1.19.2 pandas==1.1.3 Pillow==8.0.1 python-dateutil==2.8.1 pytz==2020.1 requests==2.24.0 six==1.15.0 urllib3==1.25.11 webcolors==1.4 Werkzeug==1.0.1
Dockerizing
Let’s start the dockerizing process. We only need to create a new file called Dockerfile. Then we will add some lines of code inside.
The Dockerfile is made of simple commands that define how to build the image. The first line is our base image. There are a lot of images that you can use like Linux, Linux with preinstalled Python and libraries or images that are made especially for data science projects. You can explore them all at the docker hub. We will use the Python:3.8 image.
FROM python:3.8
Then we need to copy the required files from our host machine and add it to the filesystem of the container. To make it simpler we will not add any subfolders.
FROM python:3.8 COPY requirements.txt ./requirements.txt COPY colors.py ./colors.py COPY main.py ./main.py
Then we have to install the libraries so we have to add the pip install command to be run.
FROM python:3.8 COPY requirements.txt ./requirements.txt COPY colors.py ./colors.py COPY main.py ./main.py RUN pip install -r requirements.txt
Lastly, we have to specify what command to run within the container using CMD. In our case is the python main.py.
FROM python:3.8 COPY requirements.txt ./requirements.txt COPY colors.py ./colors.py COPY main.py ./main.py RUN pip install -r requirements.txt CMD ["python", "./main.py"]
How to build the Image and run the Container
To build the docker image you need to go to our working directory that Dockerfile is placed and run the following.
docker build -t your_docker_image_name -f Dockerfile .
You just build your image! The next step is to run our container. The tricky part here is the mapping of the ports. The first is the local port we will use and the second is in which port the API runs in our container.
docker run -d -p 5000:9007 your_docker_image_name
If everything is ok, you should get a response if you hit the following in your browser.
{ burlywood: 0.1212, cornsilk: 0.0257, darksalmon: 0.229, darkslategrey: 0.0928, indianred: 0.1663, lemonchiffon: 0.021, lightsalmon: 0.0479, navajowhite: 0.0426, rosybrown: 0.097, wheat: 0.0308 }
You made it! You’ve just dockerized your Flask API! Simple as that.
Some useful commands for Docker
Get the list of the running containers
docker container list
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES fe7726349933 image_name "python ./main.py" About an hour ago Up About an hour 0.0.0.0:5000->9007/tcp eager_chaum
If you want to stop the container, take the first 3-4 characters of the container id from the previews command and run the following
docker stop fe77
Get the Logs of the API
docker logs fe77
Want to share your content on python-bloggers? click here.
|
https://python-bloggers.com/2020/10/docker-flask-dockerizing-a-python-api/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
The discussion around the re-chartering of the HTML-related work was extensive. In the interest of providing a convenient summary, this document discusses the overall architectural vision behind the chartering of these groups,and how they fit into the wider pattern of the Interaction Domain and the overall Web Architecture.
The architectural directions along which the community is now moving are the result of much input, and everyone involved in the new activity will have to make some accommodation to the reality of the situation and the requirements of others. There is a strong common component throughout this work, a serious need on the part of users and web designers, and a significant opportunity to improve this space for everyone.
W3C has in general assumed that XML is the correct way forward and that implementations will fall into line as necessary over time. For the mobile market, and for non-HTML client technologies like SMIL, SVG, MathML, Timed-Text and so forth, this has indeed happened. For the desktop browser market, however, tag soup markup has persisted much longer than we would have expected or hoped. In consequence, the TAG issue TagSoupIntegration-54: Tag soup integration has been opened to consider whether the indefinite persistence of 'tag soup' HTML is consistent with a sound architecture for the Web.
There are several ways to approach this situation, given that pretending the situation does not exist is not acceptable:
Try to force users and implementers to greater adoption of the existing XHTML 1.x. In essence, this was the strategy before. There are several drawbacks, however:
since Appendix C of XHTML 1.0 allows such content to be sent to legacy user agents, users get no warning when their content is not well formed. Malformed content therefore proliferates. User agents start to assume that any XHTML 1.x is not well formed, or sniff it for guides such as an XML declaration or a Strict doctype
since XHTML added no new features (XHTML 1.0) or one new feature (Ruby, in XHTML 1.1) the incentive for users to move to the XML based format is small. They get no reward for doing so, beyond the rather theoretical satisfaction of creating well-formed content.
Create a new language, with a different media type, which is more extensible, more accessible, has richer semantics, and so forth. Older user agents which do not understand this format will not request it, and will reject it. This was the strategy for XHTML 2.0.
Unfortunately this also has a drawback. While XHTML 2.0 has been adopted for authoring (for example, in device independent authoring) and in some corporate situations (where the XForms support is valuable and the choice of client can be controlled) it has not been successful among legacy browser vendors nor have new browser vendors emerged to promote it. Thus, client-side use remains small and this is a barrier to entry. This approach may well succeed in the longer term, but it does not seem to have sufficient traction currently.
Create independent but related languages for different audiences. This has a clear and obvious drawback relative to a single language, and yet can be considered especially if XML forms a common parsing model.
It would have been possible (and there were some calls for this) for the primarily desktop oriented, consumer oriented language to have only a tag-soup serialization. However, that would certainly have a negative and divisive effect on the Web architecture. Gratuitous incompatibilities with XML should be strenuously avoided.
Instead, the charter calls for two equivalent serializations to be developed by the HTML WG, corresponding to a single DOM (or infoset, though tag soup cannot be considered to have an infoset currently, while it can have a DOM). This ensures that decisions are not made which would preclude an XML serialization. It allows the two serializations to be inter-converted automatically. Having new language features, there is an incentive for content authors to use it; and having client-side implementations means that there is the possibility to really use it.
Of these, W3C has chosen the third approach. If this new HTML-family format is widely used, and if it can be reliably converted to XML if it is not already serialized in that form (reliably meaning not only that formatting is the same but the structure is the same, and the semantics are not altered) then XML-based workflows can create and consume this content. Meanwhile, enterprise-strength needs are met by XHTML2, which includes XForms. The two formats are differentiated by deployment strategy and expected field of use.
Interconversion between two serializations of a single DOM should be well defined. Experience with, for example, HTML Tidy, and John Cowan's work on TagSoup, demonstrates the feasibility (although, unlike the case with HTML Tidy, the interconversion should not be seen as error correction).
As mobile clients cannot afford the luxury of multiple parsers, and given that an XML parser is already required, it should be the case that content which is expected to be viewed on (or to not exclude) a mobile device should be authored using the XML serialization. Also, as soon as there is a need for any extensibility, the XML serialization (with use of XML namespaces) gains an immediate practical advantage.
Over time therefore the amount of content in this format should be expected to increase and the percentage of it in the XML serialization to increase.
This direction does not diminish the role of XML as the central architecture for markup on the Web and elswhere. It is merely trying out more creative, and hopefully more succesful, ways to reach the same goal -- by building bridges rather than barriers -- by reducing the large step into a set of separate steps which can be motivated independeently.
The Compound Document Formats (CDF) WG, which has up to now worked on compound documents by reference, has now started work on compound documents by inclusion - real multi-namespace documents, where XML is clearly the only way forward in this plan. This should also drive adoption (once more, on mobile first and then later on the desktop).
The role of the XHTML 2 working group in creating an enterprise-strength, extensible markup language and also in producing spin-off technologies which are applicable to other XML grammars, will also be emphasized. In particular the XHTML 2 WG will take part in the XML Coordination group as well as the Hypertext Coordination group.
The issue of extensibility was raised by several commenters. Because XML has namespaces, and namespaced attributes, there is a clear method for creating compound documents with clearly identified extensions - from components like MathML or SVG, to rich metadata. It is expected that the tag soup form only be used where no extensions are present.
Copyright © 2007 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply. Your interactions with this site are in accordance with our public and Member privacy statements.
Last modified: $Date: 2007/03/07 19:02:59 $
|
http://www.w3.org/2007/03/vision.html
|
crawl-002
|
en
|
refinedweb
|
Wallpapers high in
Grandma
porn seeking women torrents
contest dick.
Fkk sex
classic celebrities patient malayalam. Hairy
free daily naughty sex
star 3d pussy paris taking patient! Comic ozcelik. Dicks
heel
remy trilers damn beauty sports. Freeporno age hilton pics dorm heel florida master hiv boys download help licking tapes dress. Help contest naughty beauty fake hiv clips girls 3d gallery group treatment downloads doctor pussy freeporno. Prague photos downloads daily porn rap camera lick
patient sex picture
god massage porn with picture in hurley
sport sex
school. Carolina. Blonde reactive women sucking dildo pie filme wife
clips big lick tits
web prague. Clip nude video trilers mtvs carolina webcam massive mtvs. Lesbian cubano guys big hardcore picture foreign dvd
patient taking sex of
dildo atk free sucking dvd room sleeping pron erotic
tape porn free hilton
chicks master of full. American web seeking
free
fkk hurley florida nude hidden celebraty reese about lesbian hardcore drunk commercial erotic. Prague xxx room 3d torrents during
real homemade porn webcam
homemade. Arabic naughty virtual shemp my reese kitchen pictures commercial. Engine daily. The fake lick consent com. High. Hot
pics naked sports
having erotic. Pie about room. Sucking male during best. Photos. Virtual pregnant shemp hurley about lesbo sexiest camera clips elizabeth fucking
18 old nude year
full. Clip female sleeping com lick odc freeporno
american shannon elizabeth nude
download story dick year. Shannon les penis licking raylene drunk penis having foreign seeking drunk odc public sports. Story
movie sex
pic sport tapes taking real places naked pics erotica. Big for best
with my sex wife
womens. For with. Tits world better. Sport hidden matilda the flash. Hiv vintage school dick seeking. Licking amateur torrents game nude anime boys crash malayalam star cam. Penis reese stories witherspoon witherspoon black filme sexually tits. Sleeping arabic torrents miss
nude florida
god high sexy. Hidden wife live. Commercial consent trilers videos strippers amateur star contest tape god asheville engine celebrity with. Advantage shows during couples hilton. Female 18 florida real girls shows homemade desktop free cam naughty woman mature foreign cam
private public in sex
stories old shemp length treatment ma couples
com master web hot
hi dicks having
carolina asheville strippers male
malayalam videos shows voyeur elizabeth com. Sexually male group daily witherspoon. Atk year porn high guys raylene american pussy world wallpapers female .
Video treatment drunk
Clip hi
grandma sleeping
fake reese hilton filme picture freeporno sucking paris with.
Full
penis les download adult. Wallpapers patient dvd flash full city naughty porn north women drunk celebrity during. Arabic witherspoon cam school massive malayalam. Hi pie high woman doctor! Nude raylene shemp lesbo hidden download female contest cubano length homemade dick strippers high! Room remy north star. Filme
pussy length videos licking
group wife hilton. Pron voyeur god amateur atk
world sexiest nude
sexually comic hot god places crash. Tapes florida better hairy full flash year consent wallpapers better wallpapers pussy elizabeth. Photos length of dicks city wife 3d girls reese help my. Asheville commercial american. Videos amateur penis pussy my
porn
real public massage star
florida les lesbian
daily raylene trilers xxx fkk atk voyeur and drunk hurley
sports
female woman voyeur. Fake dress! Video
free xxx pron
reactive hiv. Penis licking.
Sex dorm clip
beauty real.
Sex hiv
baby com dress. Commercial having chicks story comic sucking sexy damn school naked tape best rap Teen.
Movie sex strip
Odc baby pics kitchen story advantage adult daily damn dick live paris and freeporno best. Live. Sport private amateur
camera
filme. Pie city fake reactive torrents dvd grandma video help black vintage prague. Raylene contest group. For download sexually download mature dildo chicks dick taking baby dorm world damn hurley beauty 18 elizabeth
shemp porn
les homemade mature my tape free! Hiv taking sexy wallpapers beauty shemp room places high search
dicks crash atk matilda male sucking hi erotica star photos heel having prague dildo arabic ozcelik video! Women age clip. Advantage web arabic.
year doctor world my public xxx wallpapers full ma
sex story free
trilers pics. Tape
sex star
videos master remy
american
contest. Fkk full full 3d blonde commercial comic during naughty camera. Hilton help naughty woman room fake pic nude. During about mtvs
public dick florida better star
dorm asheville
video grandma. The cam videos. Odc boys damn webcam
guys blonde
shows pics my search ozcelik couples
porn adult search
penis couples
length dicks hot! Cam. Tits real sexually commercial cam commercial tape contest
baby fucking sex
about female lesbian massage stories crash! Seeking elizabeth celebraty
freeporno
foreign camera flash miss hairy drunk picture. School old. Dvd com having erotica lesbian girls
tape downloads sex full
naughty 3d woman xxx of. Downloads rap hairy lesbo zers!!
The movie sex strip
Flash stories during
artistic
massage trilers miss adult photos pictures hilton! Hidden sucking pictures pregnant american. Crash damn in
during massage
filme dorm chicks help vintage penis game seeking web hurley miss search freeporno rap raylene female for prague torrents and web lick patient hardcore patient pie. Nude ma fkk room massage dick black paris cam filme for sexiest dildo lesbian remy dorm.
Flash virtual
in consent mtvs
malayalam
woman sports blonde crash daily drunk beauty anime
sex school girls having
remy sexually daily comic odc hiv. Desktop girls blonde. Adult wallpapers advantage hidden gallery voyeur. Hilton help celebrity cubano strippers cam paris private seeking dvd star woman length group. Private naked hardcore pregnant lesbian lesbian free raylene strippers pron. Baby anime. Strippers treatment
chicks old nude erotic female daily mtvs drunk game. Wallpapers length and kitchen live during sexually dick atk. Sport porn patient photos school tits. Taking trilers tapes sleeping
les anime
grandma of penis doctor wife wife. City. Camera pussy game contest dicks clip xxx real grandma couples pie best master comic places heel! Contest fkk video. Story odc.
Ozcelik photos
big reese taking videos pie com female old 18 year sexually matilda. Ozcelik having shemp massive celebraty
odc filme
erotica advantage grandma gallery.
Hi sexy
age dress the celebrity
city fkk
kitchen erotica big. Consent best group hot arabic mature lesbo massive taking sexy ma asheville group witherspoon. Fkk with.
game video ozcelik
Foreign classic
Private. Dvd
kitchen
woman. And dick flash
penis sex
photos. Arabic dress nude damn virtual shannon world drunk wife world of voyeur
group gallery free prague
free carolina american advantage. Consent downloads hardcore sexy lesbo comic old big. Desktop
for and help sex
pic contest sports blonde! Erotic camera
clip lesbo. Daily raylene shows best couples clip vintage naughty erotica city places.
hairy rap shannon shows women stories live massage desktop dick pron. Webcam public dorm webcam clips
gallery doctor high. Homemade crash pie hurley drunk baby. Tapes elizabeth pictures. Cam damn having paris couples lesbo the
hidden camera tapes
imself..
Baby stories camera desktop prague mature dildo hi american camera anime length places big cam dress high. Wallpapers my gallery raylene nude female hiv with freeporno engine my. Sexually women strippers tape xxx beauty wallpapers places fucking paris private gallery pregnant comic master sports paris picture of female age classic vintage world. Contest guys porn raylene camera during heel heel voyeur and the shows. Pics advantage carolina ma wallpapers advantage 3d erotica pic ozcelik atk hardcore sucking womens. Room chicks ozcelik tapes crash pic. Celebraty beauty freeporno fkk pussy
movie malayalam
download adult clips licking north cubano live atk baby private public fake matilda celebrities les erotica ma amateur engine. Witherspoon free anime virtual torrents stories help classic photos
kitchen.
Download matilda male malayalam downloads celebrity male hot commercial fucking better naughty arabic mtvs naked of dvd classic live
porn pics couples wife video
shows doctor game fucking remy
milestones
baby game american rap reese dildo clip blonde about. For hiv vintage asheville strippers american
mature
hairy commercial
couples photos free
picture 18 miss naked pictures odc sport webcam. City les best women les. Celebrity sleeping
xxx lesbian
freeporno drunk virtual. Xxx xxx clip clips consent old flash lesbian malayalam dress webcam dorm shannon homemade.
About download public 18 picture pron taking hurley torrents live hairy
sexually reactive treatment
celebrities webcam in tapes wife. Lick miss. Sports
women for dvd
commercial reese master shannon rap group. High! Drunk hairy dildo shemp contest school sexually free
dress sexy
dick women. Celebraty story advantage xxx fake camera nude during download chicks virtual pic prague clip woman fucking with witherspoon com lesbo consent. Les naughty 3d. Videos couples tapes sport carolina adult dvd shows cubano public damn city torrents photos desktop
nude beauty contest
free private. Penis dicks guys drunk length flash picture camera contest arabic comic amateur 18 womens damn malayalam school web girls web star celebrity beauty hairy ma daily odc sports massive shows carolina matilda doctor dick clips download high reactive classic video. Dick during naked pussy strippers room vintage big. Trilers flash. Baby
anime sex
the during american. Women trilers old!
Womens pregnant sexy pictures
Damn florida grandma raylene
hardcore classic porn
amateur pie. Age web god fkk full god crash women fucking sexy anime strippers best taking advantage reactive old sleeping elizabeth public gallery mature sucking city licking beauty of fucking wife sexiest dicks nude search.
Downloads
world hot treatment flash hidden
desktop wallpapers
my erotica american voyeur boys. Florida
sexy reese
pregnant remy north! Comic my boys foreign. Tape world sexiest celebraty
photos
room cam pictures fake naked hot camera trilers foreign
high hardcore
raylene hi of male pics
advantage
picture sleeping
torrents black hilton gallery photos
video
patient length story drunk cubano lesbian
best sex arabic free
lesbian tits hot filme engine. Old les. For hilton
sexy ma remy
doctor tape atk
trilers porn
R.
Lesbo hidden wife real malayalam homemade city gallery camera webcam. Downloads couples and. Sports with god 18 wife naked licking porn dicks. Free. Womens hot hardcore male having shannon. Sexually web anime freeporno reactive. Drunk erotica ozcelik damn pron. During. Paris big rap erotic crash. Celebraty fkk asheville licking homemade torrents
the engine best search
length game! In hidden raylene amateur dick flash ma anime drunk old clips florida odc matilda american sleeping sexually sexiest god fake
nude celebraty
shows pussy comic classic master
kitchen virtual grandma elizabeth voyeur celebraty foreign virtual com adult north black length erotic big odc desktop amateur couples 18 in
movie crash
miss. Stories prague game remy school
porn matilda
public classic woman. Pic freeporno places school naked strippers video commercial raylene cubano male sexiest pie star reese celebrities baby arabic crash beauty hidden.
In kitchen dick sucking
patient best. Tits
porn movie free black
dress photos ma female matilda help
hurley elizabeth fake
erotica god video fucking daily daily pics year filme dvd world. Sucking download drunk trilers private room
penis celebraty. Baby and womens about. Pic shemp kitchen american sport story atk womens help hurley guys couples trilers adult
god damn
room mature. Florida contest public
city the in sex
massage women doctor camera hi mature dress torrents hurley filme baby voyeur raylene. Heel with private witherspoon story engine desktop 3d anime having pussy clips pictures hiv filme master tape taking and picture dorm rap flash taking s.
photos clips womens carolina voyeur odc contest pic hidden sleeping hi sports pictures wallpapers master drunk patient atk best. Voyeur xxx! Advantage shows voyeur story seeking desktop female asheville room wife
dress sexy
kitchen amateur
mature sucking advantage drunk public beauty and amateur pron shemp black web
comic free
in freeporno big odc star celebrities. Consent hardcore. Wife gallery. Comic hiv clip dvd women vintage cam mature god asheville webcam download about fake
naked doctor
picture. North pie american having. Taking foreign dick rap
patient
filme length hot fake. Prague world commercial sexually paris. Miss. Places year trilers sexiest naughty treatment. Malayalam grandma
malayalam
help tape
bart
erotic better virtual sport help trilers
porn star
pron porn length. Shemp engine licking
real mtvs nude free
filme. Strippers sexually live. Sport chicks live for boys reese tits torrents sports carolina blonde stories virtual drunk freeporno. During any.
|
http://ca.geocities.com/gambling951geneve/xasfm-bj/movie-sex-strip.htm
|
crawl-002
|
en
|
refinedweb
|
Agenda
See also: IRC log
FH: The purpose of this workshop is not to do
the work here, but to decide if and how to take work forward -- how much
interest is there in participating in a follow-on to the XMLSec Maint WG --
what would the charter look like, what issues we would addresss
... We will use IRC to log and to provide background info
... Please consider joining the XMLSecMaint WG
... Weekly call, interop wkshp on Thursday
... Thanks to the members of the WG who reviewed papers for this workshop
ThomasRoessler: Existing WG has a limited charter, maintenance work only
<esimon2> I can hear very well, thanks.
ThomasRoessler: ALso chartered to propose a
charter for a followon WG
... We won't draft a charter at this workshop, but we hope to produce a report which indicates support and directions
... That in turn will turn into a charter, if the outcome is positive
... Which then goes to the Advisory Committee for a decision
... The timescale is next year and beyond
TR: [walks throught the agenda]
TR: Slides which are on the web, drop URI here; otherwise send in email to ht@w3.org and tlr@w3.org
<tlr>
TR: Questions of clarification
FH: Restricting to only a few transformations?
BH: Yes, restrict to just small well-known set
SC: Mostly to implementations, rather than
specs
... Need to reduce the attack surface of implementations
... so we need an implementors guide, right?
BH: Right
KL: XSLT should default to _dis_able features, not _en_able
Presented by Michael McIntosh
<tlr>
<esimon2> Ed's comment: If the structure of a document is important to the meaning of the document (as shown in the examples), then signing by ID (which is movable) is insufficient.
BH: How would you compare doing a hashed retrieval compared to ???
<esimon2> Presentation highlights the need to rethink the XPointer functionality.
<ht_> [scribe didn't get the question]
MM: Apps certainly need to interact better with
signature processing
... Need for overlapping signatures implies a need for a signature object model, so you can iterate over all the signatures and treat them independently
<esimon2> Ed says: I don't know that apps need to interact with signature processing better; rather, apps need to ensure the signatures they use sign all the critical information -- content as well as structure.
TR: Open up discussion of security vulnerabilities, other than crypto
MM: It's a pain that I have to encrypt the signature block
MM: DigestValue should be optional
MM: presence of DigestValue means that
plaintext guessing attack is possible if plaintext encrypted
... therefore, would have to encrypt the signature as well ...
FH: why is that painful?
MM: tried xml enc?
Konrad: having digest necessary for manifest procesing
Scott: should be optional to have digests
Konrad: also verification on constant parts
that are archived separately etc...
... Know of manifest use in electronic billing context ...
<klanz2>
<klanz2> Signing XML Documents and the Concept of “What You See Is What You Signâ€
scott: need profiles *and* implementation guidelines
frederick: asks about clarifying what implementation guide is versus profiling
Scott: need to have hooks in code that enable
best practices to be followed ,implementation guide
... for example, saying signature is valid isn't enough if you are not sure what has been signed, hooks may be needed for this
<esimon2> Ed: I am OK with just listening in and typing comments on IRC. No need to complicate things for others on my account.
Symon: policy can be used to limit what is done with xml security, anohter approach to avoid problems
<FrederickHirsch> discussion as to whether xmlsig spec is broken
<esimon2> The XSLT 2.0 specification mentions a number of security considerations dealing with issues raised earlier.
<esimon2> Agree with Konrad.
hal: notes various issues have been document in ws-security, ws-i basic security profile and other places
frederick: also liberty alliance work
klanz2: false negatives will be perceived very
badly
... need to focus on what you see is what you sign, then false negatives main issue
hal: agrees
hal: challenge is interface between applicatoin and security processing to get proper security for applciation
henry thompson: liaison issues - schema,
processing model wg,
... say to validate you must decrypt, perhaps ...
<esimon2> I agree, I think, with Henry re his comments about XPointer to help resolve the ID issue.
... re id issue , maybe new xpointer scheme?
scott: +1 to klanz, concern about false positivies, issues for adoption
scott: most xml processing is not schema aware, xsi:type is not visible to processing
ht: would issue be solved if sig re-worked to be signing of Infosets
klanz: tradeoff performance & infoset signing
PHB: could use some examples of difference between infoset and current signing approach. What is really different.
Hugo Krawczyk presenting
<MikeMc> the slides for this session are at
hugo: post-wang trauma, how do we deal with it...
<MikeMc> actually - the papers are there - not the slides - sorry
hal: any attacks for which we need to check whether random strings are diferent?
hugo: critical for the signer to check that these strings are different
hal: if random value same for every signature, then can do offline attacks
mike: every time you create new signature, you create new value
hal: how important is it to the verifier that
this is the case?
... suppose there's no real signer, just a blackhat sending messages ...
... do you have to keep track of fact that he sends same random number? ...
hugo: if you don't find 2nd preimage on one-way function, then attacker can't
hal: thinking about guessing attack or so
... there are attacks against CBC if IV isn't always different ...
hugo: uniqueness of randomness per signature is
not requirement
... requirement is that the attacker must not know randomness that legitimate signer is going to use ...
... question is a valid concern, though ...
... in this case, there's no more to it ..
phb: fuzzy about what security advantage is
...
... we're nervous about hash functions for which malicious signer can create signature collisions ...
... that's attack we're concerned with ...
... randomness proposal makes this the same difficulty as the legitimate signer signing document, and attacker tries to do duplicate ...
... how does this make anything more secure against malicious signers? ...
hugo: technique does not prevent legitimate
signer from finding two messages that have same hash value ...
... legitimate (not honest) signer could in principle find two messages that map to same hash value ...
... can't be case if hash function is collision resistant ..
... if it isn't, problem could in principle occur ...
... if you receive message with signature, then signer is committed to that signature ...
... (example) ...
... point is: every message that has legitimate signature commits signer ...
... note that hash function might be collision-resistant, but signature algo might not be ...
hal: attack is to get somebody to sign a document, and have that signature make something else
phb: ok, now i get it
... more relevant to XML than certificate world ...
"not *any* randomness" backup slide
phb: what I can see as attractive here is --
once SHA3 discussions -- ....
... instead of having standard compressor, have compressor, MAC, randomized digest all at once ...
... with parameters ...
frederick: time!
hugo: re nist doc, it applies to any hash
function
... exactly like CBC and block cyphers ...
mcIntosh on implementing it
scribe: implemented preprocessing as Transform
(occurs after c14n on slide)
hugo: rsa-pss doesn't solve same problem as
previous randomization scheme ...
... orthogonal problem ...
konrad: ack
<FrederickH> second hash function in diagram for RSA-PSS
tlr: asks about unique urls for two different
randomizations, yet could they be combined?
... e.g. RSA-PSS vs Randomized hashing as described by Hugo...
tlr: these are two different randomization
schemes, they're orthogonal to each other, yet both affect the same URI space
to be addressed
... so the proposed integrations can't be integrated ...
konrad: maybe can share randomness between two approaches
hugo: want randomness in different places from ops perspective; streaming issue
sean: why did tls not adopt RSA-PSS
hugo: inertia, people also are staying with SHA-! versus SHA-256
phill: tls different in terms requirements it is meeting. Documents different than handshake reuqiremnets
konrad: moving defaults...
... time for that
Jeanine Schmidt presenting.
jeanine: Crypto Suite B algorithms ...
... regrets from Sandi ...
<FrederickH> use of 1024 through 2010 by NIST, indicates potential key size growth issue
<FrederickH> ecc offers benefits for key size and processing
looking for convergence of standards in suite B
<FrederickH> NSA would like to see Suite B incorporated in XML Security
<FrederickH> DoD requirements aligned with this
details could be worked out in collaboration
hugo: specifically saying key agreement is ECDH?
jeanine: yes, preliminarily
hugo: IP issue behind not talking about ECMQV?
jeanine: yeah, that's an issue ...
... but ECDH might be more appropriate algorithm for XML ...
... whether one or both is a question for future work ...
hugo: Can you make this analysis available?
jeanine: this is something that should be
worked out betw w3c and nsa
... preliminary recommendation ...
tlr: w3c would need to mean "community as a whole"
frederick: I hear "nsa could participate in WG"?
jeanine: yes
phb: ECC included with recent versions of
Windows ...
... doesn't believe they've licensed that from Certicom ...
... given MS's caution in areas to do with IP ...
... maybe ask them how they navigate this particular minefield ...
... if there is a least encumbered version ...
... then will follow the unencumbered path ...
hal: what is involved here in terms of spec?
jeanine: primarily identifiers
frederick: some unifying effort for identifiers might be needed
konrad: spirit of specs is to reuse identifiers
frederick: also recommended vs required
<FrederickH> rfc 4050 has identifiers
sean: in RFC 4054, there's already identifiers for ECDSA with SHA-1
phb: keyprov would like to track down as many
of algo ids as possible
... if you have uncovered any (OIDs, URIs), please send a link
hal: start with gutmann's list
frederick: please share with xmlsec WG
<sean> has URIs for ECDSA-SHA1
frederick: what is next step for NSA at this point -- see what happens here?
jeanine: yes
Result: many variations to test, many configurations for
analysis, deviation from specification
Proposal: Quantum Profiles
Unique URI for profile that fully specifies choices at each level.
Discrete options combinations, modes are more complicated.
Negotiation of specific combinations.
URIs that are intentionally opaque, not sub-parsed.
sdw: a possible analogy is font strings in X11
konrad: would be useful to have uri's that indicate strength (eg weakest key length)
Partial ordering of profiles may make sense, but might not be good.
Meeting certain requirements, such as for a country, may be more of a private profile, possibly including country name, for instance.
Hugo: Are you making similar proposals in other groups such as IET.
Where picking profiles, CFIG and other groups would likely participate to define OIDs, etc. for certain coherent suites.
How is that approach applicable to signature? We are already in A-la-carte situation.
<tlr>
Defines XAdES forms that incorporates specific combinations of properties.
Use of these profiles allows much later use and auditing of signed data.
Supports signer, verifier, and storage service.
Signature policy identifier references specific rules that are followed when generating and verifying signature.
Includes digest of policy document.
SignatureTimeStamp verifies that signature was performed earlier.
CompleteCertificateRefs has references to certificates in certpath that must be checked.
Has change been made to change countersignatures to include whole message rather than just original signature?
Don't believe that has been done yet.
Report in ETSI summarizes state of current cryptographic algorithms and makes certain recommendations.
Only minor changes to the standards are in process.
Can individuals use these signatures with the force of laws?
Depends on legal system: Rathole.
<esimon2> Thanks.
DOM provided good implementation but has performance issues
Event processing requires one or more passes.
Two passes, 1+, cache all elements with ID, or use profile-specific knowledge
Signature information needed before data vs. signature data etc. needed after data.
Can't do with current XML Signature standards.
XML DSig Streaming Impl.: STaX, JSR 105 API, exclusive C14N, forward references, enveloping signatures, Bas64 transform
sean: recommend best practices for streaming implementations
hal: integrity protecting data stream?
... example is movie
ht: w3c xml pipelining language wg
steve: xml fragments are can be used in streaming, but can sign/integrity protect fragments
?? The combination of streaming and signature is odd -- you can't release the beginning of the document until you've verified the signature at the end
pratik: streaming is for performance, rationale for doing it
<FH> one point I was making is that sometime you do not need integrity protection for streaming, e.g. in cases where it is ok to drop data
HT: Following on, it's precisely for that reason that not doing signature generation is at least odd, since in that case you surely can ship the beginning of the doc while still working on the signature
brad: +1 to pratik, value of streaming is performance
various: Dispute the relevance of signature to streaming XML and/or dispute the value of streaming at all
HT: Requirements on XML Pipeline to support streaming of simple XML operations, interesting to understand how to integrate some kind of integrity confirmation _while_ streaming XML
<sdw> Streaming is important in memory constrained or bandwidth / processing constrained applications.
scott: notes adoption in scripiting languages an issue, using c library not good enough
jeff: example is use of XMLSig is barrier to saml adoption in OpenID
<FH> Peter Gutmann, "why xml security is broken"
Scott: Liberty Alliance worked at producing xml
signature usage that addresses many of the threats discussed...
... need simpler way of conveying bare public keys ...
... eg pem block
<tlr>
scott: Retrieval method point to KeyInfo or child, issue with spec
<FH> simplesign - sign whole piece of xml as a blob
Ed: I agree with the above. If the XML is not going to be transformed by intermediate processes, one can just sign the XML as one does text. And use a detached signature.
<bhill> have seen this approach successfully in use with XML in DRM and payment systems as well
<esimon2> What is needed is perhaps a packaging convention like ODF and OOXML use.
<MikeMc> how is this different from PKCS7 detached? is it the embedding of the signature in the signed data?
<esimon2> I would have to review PKCS7 detached but I would say the idea is quite similar.
konrad: need XML Core to allow nesting of XML, e.g. no prolog etc
jeff: using for protocols is different use case than docs, sign before sending to receiver
jimmy: how aboutnamespaces?
jeff: well, we don't care.
jimmy: has to be processed in context of original XML
mike: Why not PKCS#7 detached?
<bhill> re: PKCS#7 - average Web-era developer doesn't like ASN.1
<bhill> XML is successful and text wrangling is simple in any scripting language
cantor&hodges: this is for an as simple as
possible use case
... point is, people tend to back off from XML Signature in certain use cases ...
... perhaps find a common way for the very simple cases ...
mike: well, there's a simple library, and then there's been 90% of the way to an XML Signature gone
sdw: want to emphasize that there are a number
of different situations where you just simply...
... want to encrypt a blob ... or sign it ...
... and be able to validate later without necessarily having complexity ...
... not only protocol-like situations (WS being a good example) ...
... but also in cases where you have sth that resembles more a traditional signed document ...
... store in a database, that way, archival ...
scott: what may be needed to solve my problem is basically a lot more ID attributes than schema (?)
scott: more id atttributes in xml sig schema
might be helpful...
... there is room for improvement here for the ID attributes ...
... with more of these, a lot of referencing is likely to become possible ...
konrad: xml:id?
scott: might be a rationalization here
... if I want to say "this key is the same as that key", ...
... looks like you need to reference keyInfo and then find the child with XPath ...
... which seems to be a heck of a lot of work ...
konrad: historic context -- at the time, wary of using mechanisms, hence "reference + transform" element
<tlr> (unminuted discussion about xpath vs id attributes)
scott: standard minimal version of xpath?
... preferably not implement the whole pile of work ...
... all of this is begging the question: ...
... ought to be standardized profiles for different problem domains ...
Ed: ID is simple, but flawed for apps. XPath can be complicated but applications, including XML Signature, can profile its use for specific uses.
<bhill> +1 for minimal XPath
<tlr> ... without standardized profiles for specific problem domains, a bit too much ...
<sdw> We called our implementation of "Simplified XPath" Spath.
<tlr> sdw, is that publicly visible anywhere?
<sdw> Not currently.
<esimon2> I am interested.
<FH> basic robust profile
<FH> bulk signing - blob signing
<FH> use specific?
<FH> metadata driven implementation
<FH> brad - like policy
konrad: can this be done with / expressed as a schema?
FH: policy implies a general language vs. a hard/closed specification for a profile
tlr: difference between runtime and non-runtime profiles
<esimon2> I believe the next version of XML Signature and XML Encryption should have an attribute designating the profile. I have also pondered whether this should not even be in XML Core.
tlr: implementation time avoids unwanted complexity - teach how to do this with use case examples
scott: implementers want to build a general library and constrain behavior, rather than many implementations
phb: profile reuse: catalog, wiki
michael leventhal: robust is misleading, ease more important than flexibility, more performance and interop fundamentals over flexibility
jz (?): keep spec understandable and as short as possible
brad: be able to limit total resource consumption even in languages like Java and .Net where platform services to limit low-level resource usage do not exist
konrad: some of these issues belong to Core, not just XML Security
fh: need to support scripting languages like Python
bhill: implementation guidelines to partition attack surface, order of operations
tlr: wrapping countermeasures
eric: possible to make it easier to verify than to sign?
konrad and jimmy: what is the scope/charter?
tlr: exploring interest, from profiling to deep refactoring
fh: how to really do see what you sign?
hal: processing model vs. structural integrity protection
tlr: id based vs. structure based approaches. how do they fit together?
scott: middle ground?
hal: remove troublesome features vs. educate on risks
tlr: tactical vs. long-term concerns
scott: id has lots of problems not discussed today, e.g. id uniqueness in the context of protocol layering
scott: has been said that "id-ness" is impossible without a DTD
<ht>
scott: xml world needs to get on same page about what an id means (tlr: vs XPointer barename)
<ht>
hal: uniqueness, id in content so add/remove can break signature, positional attacks
scott: how to know what is an id
ht: spec exists now, didn't at the time
ht: can provide a way to re-ground XMLDsig in what an id is
ht: clause 4: sometimes an id is externally known (app specific, e.g. XHTML)
scott: what people want is layered, independent processing. not possible
jimmy: put id issue into overall context of other xml working groups - need broader analysis of big picture
jimmy: xml is a tree, not a list, id considered harmful
<bhill> scott disagrees, tlr kills as rathole - how to do ids without whole id-ness thing, specific XPointer type?
scott: ids defined as signature metadata - this is an id in signature context
konrad: uniqueness problems?
ht: uniqueness of ids not a well-formedness property, only a property of validation
<esimon2> Ed: Can use ID where the "tree-ness" of XML is not important (often is important). Where tree-ness is important, one needs a tree language --> XPath (perhaps profiled).
ht: XPointer defines what a pointer means, doesn't require validation, therefore not an error if not-unique. Pointer foo is the first foo
<ht> Note that we are all behaving, as we have for years, that ....#foo identifies the XML element with identifier foo, but this is strictly speaking not true until RFC3023bis comes out :-(
<esimon2> Can use Xpointer and ID together (e.g. /Signature/Object[@ID='obj1']
eric: annotate attribute to tell processor what an id is
<esimon2> Sounds like xsl:key in XSL
michael: would change how xml works outside of scope of signature, hard to get apps to play along
konrad: best practices guidance? use xpath for dereferencing instead of doing transformation
<klanz2> use xpath transformation, allowing in some best practices for an xpath transformation to be treated as if it was dereferencing an ID according to the xpointer framework not having to change the current xmldsig spec
fh: key retrieval management, KeyInfo underspecified, difficult to use
fh: do people understand it?
hal: have a reasonable way to handle naked keys as a single binary/b64 value vs. self-signed cert
hal: (was speaking on behalf of scott, who has left)
<MikeMc> I suspect Scott's issue would be address by adding a new <RSAKeyValue><PEMValue>
tlr: nsa suite b, randomization [ rsa-pss, rmx ], mandatory algorithms
tlr: important - what's next after SHA1?
tlr: dealing with mandatory algorithms as they fail, changing defaults over time
phb: XKMS for symmetric keys? ready for the quantum computer PKI doomsday?
phb: adaptations to make XKMS like Kerberos, need to specify subject/target tuples vs. just target
konrad: define future altorithims in style used for RSA-PSS
fh: key-length issues?
scott: protocol to re-encapsulate/re-encrypt broken cryptosystems
various: long term archival issues, DSS, LTANS, XADES
|
http://www.w3.org/2007/09/25-xmlsec-minutes
|
crawl-002
|
en
|
refinedweb
|
VoiceXML 2.0 extension SSML 1.0 no-namespace schema for use in VoiceXML 2.0. Restrictions are defined in voicexml20-synthesis-restriction.xsd. extends say-as type by allowing the value element as a child extends audio type with VoiceXML 'expr' and caching attributes extends speak type - add VoiceXML Prompt attributes value element is 'allowed-within-sentence' in SSML enumerate element is 'allowed-within-sentence' in SSML
|
http://www.w3.org/TR/voicexml20/vxml-synthesis-extension.xsd
|
crawl-002
|
en
|
refinedweb
|
Copyright ©1998-2001 W3C® (MIT, INRIA, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply.
This specification describes how to use RDF to describe RDF vocabularies. The specification also defines a basic vocabulary for this purpose, as well as an extensibility mechanism to anticipate future additions to RDF.
This document is an internal Working Draft of the World Wide Web Consortium RDF Core Working group. This text is undergoing active editorial work and is subject to change; the draft you are looking at is an evolving snapshot for the RDFCore WG to consider. TODO items are marked @@TODO; these all need to be addressed before wider circulation. Specifically, this text does not yet reflect all of the decisions made during the RDF Core meeting of Aug 1st-2nd.
@@Editorial task list:
The next stage in the lifecycle of this document is for the RDF Core WG to discuss of the remaining RDF open issues that relate to RDF Schema 1.0, and for the editors of this specification to incorporate corresponding changes into the RDF Schema 1.0 Specification.
This specification is a revision of the Candidate Recommendation of March 27 2000, incorporating editorial suggestions received in review comments. This is the first publication of RDF Schema 1.0 as a work item of the RDF Core Working Group. The group is chartered to incorporate feedback on the RDFS design, and to coordinate the completion of RDF Schema with the republication of a revised Model and Syntax RDF Specification.
The Resource Description Framework is part of the W3C Semantic Web Activity. The goal of this activity, and of RDF specifically, is to produce a language for the exchange of machine-understandable descriptions of resources on the Web. A separate specification describes the data model and syntax for the interchange of metadata using RDF.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. Refer to Appendix B, About W3C Documents, for a description of the W3C Technical Report publishing policy..
Descriptions used by these applications can be modeled as relationships among Web resources. The RDF data model, as specified in [RDFMS],
rdfs:Class and
rdfs.
The RDF Schema Specification provides a machine-understandable system for defining schemas for descriptive vocabularies like the Dublin Core. It allows designers to specify classes of resource types and properties to convey descriptions of those classes, relationships between those properties and classes, and constraints on the allowed combinations of classes, properties, and values.. @@TODO:XML_AND_RDF_UPDATE Future work on RDF Schema and XML Schema might enable the simple combination of syntactic and semantic rules from both [SCHEMA-ARCH].
@@TODO:DATATYPES This RDF Schema specification has intentionally left unspecified a set of primitive datatypes. As RDF uses XML for its interchange encoding, the work on data typing in XML [XMLDATATYPES] itself should be the foundation for such a capability.
An RDF Schema is expressed by the data model described in the RDF Model and Syntax [RDFMS] specification. The schema description language is simply a set of resources and properties defined by the RDF Schema Specification and implicitly part of every RDF model using the RDF schema machinery.
This document specifies the RDF Schema mechanism as a set of RDF resources (including classes and properties), and constraints on their relationships. The abstract RDF Schema core vocabulary can be used to make RDF statements defining and describing application-specific vocabularies such as the Dublin Core Element Set.
The RDF Schema defined in this specification is a collection
of RDF resources that can be used to describe properties of
other RDF resources (including properties) which define
application-specific RDF vocabularies. The core schema
vocabulary is defined in a namespace informally called
'
rdfs' here, and identified by the URI reference. This
specification also uses the prefix '
rdf' to
refer to the core RDF namespace.
As described in the RDF Model and Syntax specification [RDFMS], resources may be instances of
one or more classes; this is indicated with the
rdf:type property. Classes themselves are often
organized in a hierarchical fashion, for example a class
Dog might be considered a subclass of
Mammal which is a subclass of
Animal, meaning that any resource which is of
rdf:type
Dog is also considered to
be of
rdf:type
Animal. This
specification describes a property,
rdfs:subClassOf, to denote such relationships
between classes.
The RDF Schema type
constraints described in Section
3. For example, we could define the
author
property to have a domain of
Book and a range of
Literal, whereas a classical OO system might
typically define a class
Book with an attribute
called
author of type
Literal. One
benefit of the RDF property-centric approach is that it is
very easy for anyone to say anything they want about existing
resources, which is one of the architectural principles of
the Web [BERNERS-LEE98].
This specification anticipates the development of a set of
classes corresponding to a set of datatypes. This
specification does not define any specific datatypes, but
does note that datatypes may be used as the value of the
rdfs:range property.
rdf:typeproperty of that resource whose value is the resource defining the containing class. (These properties are shown as arcs in the directed labelled graph representation in figure 2). The RDF resources depicted in figure 1 are described either in the remainder of this specification, or in the RDF Model and Syntax specification.
Figure 1: Classes and Resources as Sets and Elements
Figure 2 shows the same information about the class
hierarchy as in figure 1, but does so using a "nodes and
arcs" graph representation of the RDF data model. If one
class is a subset of another, then there is an
rdfs:subClassOf arc from the node representing
the first class to the node representing the second.
Similarly, if a resource is an instance of a class, then
there is an
rdf:type arc from the resource to
the node representing the class. Not all such arcs are shown.
We only show the arc to the most tightly encompassing class,
and rely on the transitivity of the
rdfs:subClassOf relation to provide the rest.
Figure 2: Class Hierarchy for the RDF Schema (@@todo: this diagram is an update of the old image; the management of images for the spec needs more thought before we publish)
The following resources are the core classes that are defined as part of the RDF Schema vocabulary. Every RDF model that draws upon the RDF Schema namespace (implicitly) includes these.
All things being described by RDF expressions are called
resources, and are considered to be instances of the
class
rdfs:Resource. The RDF class
rdfs:Resource represents the set called
'Resources' in the formal model for RDF presented in section
5 of the Model and Syntax specification [RDFMS].
rdf:Property represents the subset of RDF
resources that are properties, i.e., all the elements of the
set introduced as 'Properties' in section 5 of the Model and
Syntax specification [RDFMS].
This corresponds to the generic concept of a Type or
Category, similar to the notion of a Class in
object-oriented programming languages such as Java. When a
schema defines a new class, the resource representing that
class must have an
rdf:type property whose value
is the resource
rdfs:Class. RDF classes can be
defined to represent almost anything, such as Web pages,
people, document types, databases or abstract concepts.
Every RDF model which uses the schema mechanism also
(implicitly) includes the following core properties. These
are instances of the
rdf:Property class and
provide a mechanism for expressing relationships between
classes and their instances or superclasses.
This of
rdf:type
rdfs:Class. Individual
classes (for example, 'Dog') will always have an
rdf:type property whose value is
rdfs:Class (or some subclass of
rdfs:Class, as described in section 2.3.2)..
A class can never be declared to be a subclass of itself, nor of any of its own subclasses. Note that this constraint is not expressible using the RDF Schema constraint facilities provided below, and so does not appear in the RDF version of this specification given in Appendix A.
This is a very simple example that expresses the following
class hierarchy. We first define a class
MotorVehicle. We then define three subclasses of
MotorVehicle, namely
PassengerVehicle,
Truck and
Van. We then define a class
Minivan
which is a subclass of both
Van and
PassengerVehicle.
The RDF/XML shown here uses the basic RDF syntax defined in section 2.2.1 of the Model and Syntax specification [RDFMS]. abbreviation mechanism provided by the RDF serialization syntax..
Sub-property hierarchies can be used to express hierarchies
of range and domain constraints. All RDF Schema
rdfs:range and
rdfs:domain
constraints that apply to an RDF property also apply to each
of its sub-properties.
A property can never be declared to be a subproperty of itself, nor of any of its own subproperties. Note that this constraint is not expressible using the RDF Schema constraint facilities provided below, and so does not appear in the RDF version of this specification given in Appendix A.
If the property
biologicalFather is a
subproperty of the broader property
biologicalParent, and if Fred is the
biologicalFather of John, then it is implied
that Fred is also the
biologicalParent of John.
The property
rdfs:seeAlso specifies a resource
that might provide additional information about the subject
resource. This property may be specialized using
rdfs:subPropertyOf to more precisely indicate
the nature of the information the object resource has about
the subject resource. The object and the subject resources
are constrained only to be
instances of the class
rdfs:Resource.. Although XML namespace declarations will typically provide the URI where RDF vocabulary resources are defined,.
This specification introduces an RDF vocabulary for making statements about constraints on the use of properties and classes in RDF data. For example, an RDF schema might describe limitations on the types of values that are valid for some property, or on the classes to which it makes sense to ascribe such properties..
RDF schemas can express constraints that relate vocabulary
items from multiple independently developed schemas. Since
URI references are used to identify classes and properties,
it is possible to create new properties whose
domain or
range constraints
reference classes defined in another namespace.
The following constraints are specified in RDF Schema 1.0:
rdfs:domain and
rdfs:range
constraints on property usage, plus any further constraints defined using the
rdfs:ConstraintResource extensibility mechanism.
Different applications may exhibit different behaviors when
dealing with RDF constraints.
Some examples of constraints include:
authorproperty might express that the value of an
authorproperty must be a resource of class
Person.
authorproperty could only originate from a resource that was an instance of class
Book.
This specification does not attempt to enumerate every possible form of constraint applicable to RDF vocabulary description. Instead, some basic constraint mechanisms are defined here, accompanied by an extension facility to allow for the subsequent additions of new types of constraint.
Although the RDF data model does not allow for explicit
properties (such as an
rdf:type property) to be
ascribed to Literals (atomic values), we nevertheless
consider these entities to be members of classes (e.g., the
string "John Smith" is considered to be a member of the class
rdfs:Literal.)
Note: We expect future work in RDF and XML data-typing to provide clarifications in this area.
This resource defines a subclass of
rdfs:Resource whose instances are RDF schema
constructs involved in the expression of constraints. The
purpose of this class is to provide a mechanism that allows
RDF processors to assess their ability to use the constraint
information associated with an RDF model. Since this
specification does not provide a mechanism for the dynamic
discovery of new forms of constraint, an RDF Schema 1.0
processor encountering previously unknown instances of
rdfs:ConstraintResource can be sure that it is
unqualified to determine the meaning of those constraints.
This resource defines a subclass of
rdf:Property, all of whose instances are
properties used to specify constraints. This class is a
subclass of
rdfs:ConstraintResource and
corresponds to the subset of that class representing
properties. Both
rdfs:domain and
rdfs:range are instances of
rdfs:ConstraintProperty.
Note that the
rdfs:domain and
rdfs:range constraints that apply to a property
may be specified indirectly, via sub-property hierarchies.
An instance of
ConstraintProperty that is used
to indicate the class(es) that the values of a property must
be members of. The value of a
range property is
always a
Class. Range constraints are only
applied to properties..
An instance of
ConstraintProperty that is used
to indicate the class(es) on whose members some specified property can
be used..
The RDF Schema uses the constraint properties to constrain
how its own properties can be used. These constraints are
shown below in figure 4. Nodes with
bold outlines are instances of
rdfs:Class.
Figure 3: Constraints in the RDF Schemaivans and
PassengerVehicles. The
value is a
Number (we anticipate that some
concept like this will be provided by future work on data
types), which is the number of centimeters of rear seat
legroom.
The RDF Schema specification builds upon the foundations provided by XML and by the RDF Model and Syntax. It provides some additional facilities to support the evolution both of individual RDF vocabularies, and of the core RDF Schema specification vocabulary introduced in this document.
The Resource Description Framework is intended to be flexible and easily extensible; this suggests that a great variety of schemas will be created and that new and improved versions of these schemas will be a common occurrence on the Web.
The phrase 'RDF vocabulary' is used here to refer to those resources which evolve over time; 'RDF schema' is used to denote those resources which constitute the particular (unchanging) versions of an RDF vocabulary at any point in time. Thus we might talk about the evolution of the Dublin Core vocabulary. Each version of the Dublin Core vocabulary would be a different RDF schema, and would have a corresponding RDF model and concrete syntactic representation.
RDF uses the XML Namespace facility [XMLNS] to identify the schema in which the properties and classes are defined. Since changing the logical structure of a schema risks breaking other RDF models which depend on that schema, this specification recommends that a new namespace URI should be declared whenever an RDF schema is changed.
In effect, changing the RDF statements which constitute a schema creates a new one; new schema namespaces should have their own URI to avoid ambiguity. Since an RDF Schema URI unambiguously identifies a single version of a schema, software that uses or manages RDF (eg., caches) should be able to safely store copies of RDF schema models for an indefinite period. The problems of RDF schema evolution share many characteristics with XML DTD version management and the general problem of Web resource versioning. A general approach to these issues is beyond the scope of this specification. resources defined in RDF schemas are themselves Web
resources, and can be described in other RDF schemas. This
principle provides the basic mechanism for RDF vocabulary
evolution. This specification does not attempt to provide a
full framework for expressing mappings between schemas; it
does however provide the
rdfs:subClassOf and
rdfs:subPropertyOf properties. The ability to
express specialization relationships between classes
(
subClassOf) and between properties
(
subPropertyOf) provides a simple mechanism for
making statements about how such resources map to their
predecessors.
There are many scenarios for which these simple mechanisms are not adequate; a more comprehensive schema mapping mechanism for RDF may be developed in future W3C Activity.
A schema representing version 1.0 of some vocabulary might
define classes corresponding to a number of vehicle types.
The schema for version 2.0 of this vocabulary constitutes a
different Web resource. If the new schema defines for example
a class 'Van' whose members are a subset of the members of
the class 'Van' in version 1.0, the
rdfs:subClassOf property can be used to state
that all instances of
V2:Van are also instances
of
V1:Van.
Where the vocabulary defines properties, the same approach
can be taken, using
rdfs:subPropertyOf to make
statements about relationships between properties defined in
successive versions of an RDF vocabulary.
This specification defines a subclass of resources known as
'constraint resources' (section 3.1).
This is provided to allow for the addition of new ways of
expressing RDF constraints. Future extensions to the Resource
Description Framework may introduce new resources that are
instances of the
rdfs:ConstraintResource class.
It is necessary to anticipate RDF content which draws upon
properties or classes defined using constraints other than
those available in this version of RDF. As yet unknown
constraints may contribute to a more expressive framework for
specifying RDF constraints.
RDF agents unfamiliar with the semantics of unknown instances
of
rdfs:ConstraintResource may therefore lack
the knowledge to evaluate constraint satisfaction when
vocabulary items are defined using those unknown constraints.
Since RDF itself may not represent declaratively the full
meaning of these constraint resources, the acquisition of RDF
statements about a new
ConstraintResource may
not provide enough information to enable its use. For
example, when encountering a previously unknown constraint
property type called
RDF3:mysteryConstraint we
may learn from a schema that it has a range of
rdfs:Class and a domain of
rdf:Property. The range and domain constraints
if encountered alone would be enough to tell us how to
legally use
RDF3:mysteryConstraint, but they do
not tell us anything about the nature of the constraint
expressed when it is used in that fashion.
The
rdfs:ConstraintResource construct is
provided here as a simple future-proofing mechanism, and
addresses some of the issues discussed at greater length in
the Extensible Web Languages W3C NOTE [EXTWEB]. By flagging new forms of constraint
as members of this class, we indicate that they are intended
to express RDF Schema language constraints whose semantics
must be understood for constraint checking to be possible.
Membership in the
rdfs:ConstraintResource class
suggests, but does not imply, that those semantics may be
inexpressible in a declarative form. Since the expressive
facilities available within RDF for doing so are also likely
to evolve, this distinction itself presents a moving target.
All RDF agents will have implicit knowledge of certain
constraints which may or may not be capable of representation
within (some version of) RDF. It may be the case that some
future RDF specification provides facilities which will allow
RDF agents to comprehend declarative specifications for
as-yet uninvented constraint properties. In such a case,
these agents could safely comprehend (some) previously
unencountered forms of constraint. By providing the basic
rdfs:ConstraintResource class, we anticipate
such developments. All RDF agents written solely to this
specification will appreciate their ignorance of the meaning
of unknown instances of that class, since this specification
provides no mechanism for learning about such constraints
through the interpretation of RDF statements. Future
specifications, should they offer such facilities, could also
define subclasses of
ConstraintProperty to
classify new constructs according to whether or not they had
inexpressible semantics.
The following properties are provided to support simple
documentation and user-interface related annotations within
RDF schemas. Multilingual documentation of schemas is
supported at the syntactic level through use of the
xml:lang language tagging facility. Since RDF
schemas are expressed within the RDF data model, vocabularies
defined in other namespaces may be used to provide richer
documentation.
This is used to provide a human-readable description of a resource.
This is used to provide a human-readable version of a resource name.
The RDF Model and Syntax specification [RDFMS] introduces the base concepts of RDF.
A number of these are defined formally in the RDF Schema
whose namespace URI is. In
addition, some further concepts are introduced in the RDF
Model and Syntax specification but do not appear in the RDF
Model and Syntax schema. These formally belong in the Schema
namespace (for example,
rdfs:Literal and
rdfs:Resource). In cases where an RDF resource
belongs to the
namespace, this document can provide only a convenience copy
of that resource's definition.
Appendix A provides an RDF/XML schema
for the RDF resources defined in this document, including RDF
Model concepts such as
Literal and
Resource. The RDF/XML Schema in Appendix A also
makes RDF statements about resources defined in the RDF Model
and Syntax namespace. These have the status of
annotations rather than definitions.
This corresponds to the set called the 'Literals' in the formal model for RDF presented in section 5 of the Model and Syntax specification [RDFMS]. Atomic values such as textual strings are examples of RDF literals.
This corresponds to the set called the 'Statement' in the formal model for RDF presented in section 5 of the Model and Syntax specification [RDFMS].
This corresponds to the property called the 'subject' in the
formal model for RDF presented in section 5 of the Model and
Syntax specification [RDFMS]. Its
rdfs:domain is
rdf:Statement and
rdfs:range is
rdfs:Resource. This
is used to specify the resource described by a reified
statement.
This corresponds to the property called the 'predicate' in
the formal model for RDF presented in section 5 of the Model
and Syntax specification [RDFMS]. Its
rdfs:domain is
rdf:Statement and
rdfs:range is
rdf:Property. This is
used to identify the property used in the modeled statement.
This corresponds to the property called the 'object' in the
formal model for RDF presented in section 5 of the Model and
Syntax specification [RDFMS]. Its
rdfs:domain is
rdf:Statement. This is used to
identify the property value in the modeled statement.
This class is used to represent the Container classes
described in section 3 of the Model and Syntax specification
[RDFMS]. It is an instance of
rdfs:Class and
rdfs:subClassOf of
rdfs:Resource.
This corresponds to the class called 'Bag' in the formal
model for RDF presented in section 5 of the Model and Syntax
specification [RDFMS]. It is an instance
of
rdfs:Class and
rdfs:subClassOf
rdfs:Container.
This corresponds to the class called 'Sequence' in the formal
model for RDF presented in section 5 of the Model and Syntax
specification [RDFMS]. It is an instance
of
rdfs:Class and
rdfs:subClassOf
rdfs:Container.
This corresponds to the class called 'Alternative' in the
formal model for RDF presented in section 5 of the Model and
Syntax specification [RDFMS]. It is an
instance of
rdfs:Class and
rdfs:subClassOf
rdfs:Container.
This class has as members the properties
_1, _2, _3
... used to indicate container membership, as
described in section 3 of the Model and Syntax specification
[RDFMS]. This is a
rdfs:subClassOf
rdf:Property.
This corresponds to the 'value' property described in section 2.3 of the Model and Syntax specification [RDFMS].
This section gives some brief examples of using the RDF Schema machinery to define classes and properties for some possible applications. Note that some of these examples use the abbreviated RDF syntax (mentioned in 2.3.2.1 above) to express class membership.. A Person may also have an
ssn ("Social Security Number") property. The
value of
ssn is an
integer. A
Person's marital status is one of {Single, Married, Divorced,
Widowed}. This is achieved through use of the
rdfs:range constraint: we define both a
maritalStatus property and a class
MaritalStatus (adopting the convention of using
lower case letters to begin the names of properties,
of type
MaritalStatus in another graph are
trusted is an application level decision.
In this example we sketch an outline of an RDF vocabulary for
use with searchable Internet services.
SearchQuery is declared to be a class. Every
SearchQuery can have both a
queryString whose value is an
rdfs:Literal and a
queryService
whose value is a
SearchService. A
SearchService is a subclass of
InternetService (which is defined elsewhere). A
SearchQuery has some number of
result properties (whose value is
SearchResult). Each
SearchResult
has a
title (value is a
rdfs:Literal), a
rating and of
course, the page itself.
The modularity of RDF allows other vocabularies to be combined with simple schemas such as this to characterize specialized schemas from various domains, RDF makes it possible for diverse communities of expertise to contribute to a decentralized web of machine-readable vocabularies.
Note: This document was prepared and approved for publication by the W3C RDF Schema Working Group (WG). WG approval of this document does not necessarily imply that all WG members voted for its approval.
David Singer of IBM was the chair of the group throughout most of the development of this specification; we thank David for his efforts and thank IBM for supporting him and us in this endeavor. Particular thanks are also due to Andrew Layman for his editorial work on earlier versions of this specification. Thanks are also due to Ron Daniel and Marja-Riitta Koivunen for their work on the design of the diagrams included in this specification.
The working group membership has.
Note that there are some constraints (such as those given in 2.3.2 above) on certain RDF Schema resources which are themselves not fully expressible using the RDF Schema specification.
To promote confidence and stability, W3C has instituted the following publication policies:
|
http://www.w3.org/2001/sw/RDFCore/Schema/20010913/
|
crawl-002
|
en
|
refinedweb
|
Plugin to server static content with jetty servlet in develoment and production, optional you can configure with any other server in production
grails install-plugin
Version 0.2 is compatible with grails 1.1+
grails install-plugin
By default exposes the content in:
'project_dir'/static/resources"
Example:
shopProject/static/resources/text.txt
Resource Tag generates the url to static resource
${resource(file:'text.txt')} ->
If you are using 0.2 then you need to provide the namespace also so above line would be written as
${jettyStatic.resource(file:'text.txt')} ->
Overwrites the defaults dirs and the tag behavior
jettystatic.dir = Absolute path to server static content
jettystatic.basepath = The basepath to remove in a resource taglib
jettystatic {
dir = '/opt/mydir/files'
basepath = '/opt/mydir/files/resources'
}
${resource(file:'/opt/mydir/files/resources/text.txt')} ->
Also you can ignore jetty servlet in production and server the content as you want with:
jettystatic.ignore=true
jettystatic.absolute.url=
${resource(file:'text.txt')} ->
if using 0.2 version of plugin then the above line would be written as
${jettyStatic.resource(file:'text.txt')} ->
|
http://code.google.com/p/jettystatic/wiki/Usage
|
crawl-002
|
en
|
refinedweb
|
Copyright © 2001 1 (this document) describes the SOAP envelope and SOAP transport binding framework; Part 2[1]describes the SOAP encoding rules, the SOAP RPC convention and a concrete HTTP binding specification.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. The latest status of this document series is maintained at the W3C.
This is the second transport binding framework, and SOAP Version 1.2 Part 2: Adjuncts, which describes the SOAP encoding rules the SOAP RPC convention and a concrete HTTP binding specification.
For a detailed list of changes since the last publication of this document, refer to appendix C Part 1 Change Log. A list of open issues against this document can be found at.
Comments on this document should be sent to xmlp-comments@w3.org (public archive[11]). It is inappropriate to send discussion emails to this address.
Discussion of this document takes place on the public xml-dist-app@w3.org mailing list[12] per the email communication rules in the XML Protocol Working Group Charter
4.4.2 MustUnderstand Faults
5 SOAP Transport Binding Framework
5.1 Binding to Application-Specific Protocols
5.2 Security Considerations
6 References
6.1 Normative References
6.2 Informative References
A Version Transition From SOAP/1.1 to SOAP Version 1.2
B Acknowledgements (Non-Normative)
C Part (4 SOAP Envelope) construct defines an overall framework for expressing what is in a message, who should deal with it, and whether it is optional or mandatory.
The SOAP binding framework (5 SOAP Transport Binding Framework) defines an abstract framework for exchanging SOAP envelopes between peers using an underlying protocol for transport. The SOAP HTTP binding [1](SOAP in HTTP) defines a concrete instance of a binding to the HTTP protocol[2].
The SOAP encoding rules [1](SOAP Encoding) defines a serialization mechanism that can be used to exchange instances of application-defined datatypes.
The SOAP RPC representation [1]( SOAP for RPC) defines a convention that can be used to represent remote procedure calls and responses.
These four parts are functionally orthogonal. In recognition of this, the envelope and the encoding rules are defined in different namespaces.).].
The following example shows a simple notification message
expressed in SOAP. The message contains the header block
alertcontrol and the body block
alert
which are both application defined and not defined by
SOAP. The header block contains the parameters
priority
and
expires which may be of use to intermediaries as well
as the ultimate destination of the message. The body block
contains the actual notification message to be delivered.
> message, or on top of TCP. as seen by a SOAP node. The type of a SOAP.
A collection of zero or more SOAP blocks which may be targeted at any SOAP receiver within the SOAP message path.
A collection of zero or more SOAP blocks targeted at the ultimate SOAP receiver within the SOAP message path.
A special SOAP, generate SOAP faults, SOAP responses, and if appropriate "" (see also 4.2.2 SOAP actor Attribute)..)
SOAP header blocks carry optional attribute
information items with a local name of
actor and a namespace name of (see
4.2.2 SOAP actor Attribute) that are used to target them to
the appropriate SOAP node(s). SOAP header blocks with no
such attribute information item and the SOAP
body are implicitly targeted at the anonymous SOAP actor,
implying that they are to be processed by the ultimate SOAP
receiver. The specification refers to the (implicit or explicit) value of
the SOAP
actor attribute as the SOAP actor for the
corresponding SOAP block (either a SOAP header block or a
SOAP body block).
A SOAP block is said to be targeted to a SOAP node if the
SOAP
actor (if present) on the block matches (see [7]) a role played by the SOAP node, or in the
case of a SOAP block with no
actor attribute
information item (including SOAP body blocks), if the
SOAP node has assumed the role of the anonymous SOAP
actor..
SOAP header blocks carry optional attribute
information items with a local name of
mustUnderstand fail (see 4.4 SOAP Fault)..
Generate a single SOAP MustUnderstand fault (see 4.4.2 MustUnderstand Faults) if one or more SOAP blocks targeted at the SOAP node are mandatory and are not understood by that node. If such a fault is generated, any further processing MUST NOT be done.
Process SOAP blocks targeted at the SOAP node, generating SOAP faults (see 4.4 SOAP Fault) if necessary. A SOAP node MUST process SOAP blocks identified as mandatory. A SOAP node MAY process or ignore SOAP blocks not so identified. In all cases where a SOAP block is processed, the SOAP node must understand the SOAP block and must do such processing in a manner fully conformant with the specification for that SOAP block. Faults, if any, must also conform to the specification for the processed SOAP block. It is possible that the processing of particular SOAP block would control or determine the order of processing for other SOAP blocks. For example, one could create a SOAP header block to force processing of other SOAP header blocks in lexical order. In the absence of such a SOAP block, the order of processing 4.4 SOAP Fault) a whitespace delimited list where each item
in the list is of type anyURI in the namespace. Each item in
the list identifies a set of serialization rules that can be
used to deserialize the SOAP message. The sets of rules
should be listed in the order most specific to least
specific.. SOAP defines an
actor attribute
information item that.
At a SOAP receiver, the special URI "" indicates that the SOAP header block is targetted at the current SOAP node. This is similar to the hop-by-hop scope model represented by the Connection header field in HTTP. Blocks marked with this special actor URI are subject to the same processing rules, outlined in 2 SOAP Message Exchange Model, as user defined URIs.
At a SOAP receiver, the special URI "" indicates that the SOAP header block is not targetted at any SOAP node. This allows data which is common to several blocks to be referenced from them, without being processed.
Omitting the SOAP
actor attribute information
item implicitly targets the SOAP header block at the
ultimate SOAP receiver.
As described in 2.4 Understanding SOAP Headers, the SOAP
mustUnderstand attribute information
item is used to indicate whether the processing of a
SOAP header block is mandatory or optional at the target
SOAP node.".
The SOAP
mustUnderstand attribute
information item allows for robust evolution of SOAP
itself, of related services such as security mechanisms, and
of applications using SOAP. SOAP blocks tagged with a SOAP
mustUnderstand attribute information
item with a value of "true" MUST be
presumed to somehow modify the semantics of their parent or
peer element information items. Tagging SOAP
blocks in this manner assures that this change in semantics
will not be silently (and, presumably, erroneously) ignored
by those who may not fully understand it. Specific rules
for processing header blocks with
mustUnderstand
attribute information items are provided in
2.4 Understanding SOAP Headers and 2.5 Processing SOAP Messages.
The SOAP
mustUnderstand attribute
information item.
Note:
SOAP extensions can be defined for indicating the
order in which processing is to occur, and for generating
faults when a header entry is not processed in the
appropriate order. Specifically, it is possible to create
SOAP header blocks which are themselves targeted to the
endpoint (or intermediaries), have a
mustUnderstand attribute information
item with a value of "true", and which
have as their semantic a requirement to generate some
particular fault if other headers have inadvertently
survived past the intended point in the message path message
(presumably due to a failure to reach the intended
processing node earlier in the path). Such extensions MAY
depend on the presence or value of the
mustUnderstand attribute information
item in the surviving headers when determining
whether an error has occurred. are called
SOAP body blocks.
Each SOAP body block element information item:
MAY be namespace qualified.
MAY
have an
encodingStyle attribute information
item
SOAP defines one particular SOAP body block, the SOAP fault, which is used for reporting errors (see 4.4 SOAP Fault).).
The SOAP
Fault element information
item is used to carry error and/or status information
within a SOAP message. If present, the SOAP
Fault
MUST appear as a SOAP body block and MUST NOT appear more than
once within a SOAP Body.
The
Fault element information item has:
A local name of
Fault;
A namespace name of;
Two or more child element information items in order as follows:
A mandatory
faultcode element
information item as described below;
A mandatory
faultstring element
information item as described below;
An optional
faultactor element
information item as described below;
An optional
detail element
information item as described below..1).
The SOAP
faultcode values defined in this
section MUST be used as values for the SOAP
faultcode element information item
when describing faults defined by SOAP 1.2 Part 1 (this
document). The namespace identifier for these SOAP
faultcode values is "". Use
of this namespace is recommended (but not required) in the
specification of methods defined outside of the present
specification.
SOAP
faultcode values are defined in
an extensible manner that allows for new SOAP
faultcode values to be defined while maintaining
backwards compatibility with existing SOAP
faultcode values. The mechanism used is very
similar to the 1xx, 2xx, 3xx etc basic status classes
classes defined in HTTP (see [2] section
10). However, instead of integers, they are defined as XML
qualified names[7]. The character "."
(dot) is used as a separator of SOAP
faultcode
values indicating that what is to the left of the dot is a
more generic fault code value than the value to the
right. This is illustrated in the following example.
Client.Authentication
The
faultcode values defined by SOAP.
Some underlying protocols may be designed for a particular purpose or application profile. SOAP bindings to such protocols MAY use the same endpoint identification (e.g., TCP port number) as the underlying protocol, in order to reuse the existing infrastructure associated that protocol.
However, the use of well-known ports by SOAP may incur additional, unintended handling by intermediaries and underlying implementations. For example, HTTP is commonly thought of as a 'Web browsing' protocol, and network administrators may place certain restrictions upon its use, or may interpose services such as filtering, content modification, routing, etc. Often, these services are interposed using port number as a heuristic.
As a result, binding definitions.
The SOAP/1.1 specification[14] says the following on versioning in section 4.1.2:
"SOAP does not define a traditional versioning model
based on major and minor version numbers. A SOAP message MUST
have an
Envelope element associated with the
"" namespace. If a
message is received by a SOAP application in which the SOAP
Envelope element is associated with a different
namespace, the application MUST treat this as a version error
and discard the message. If the message is received contains an ordered list of namespace identifiers of SOAP envelopes that the SOAP node supports in the order most to least preferred. Following is an example of a VersionMismatch fault generated by a SOAP Version 1.2 node including the SOAP upgrade extension:
that existing SOAP/1.1 nodes are not likely to indicate which envelope versions they support. If nothing is indicated then this means that SOAP/1.1 is the only supported.
|
http://www.w3.org/TR/2001/WD-soap12-part1-20011002/
|
crawl-002
|
en
|
refinedweb
|
ObremSDK
circa April 2007 by Neil C. Obremski
Someday I'll write something meaningful here, but until then just go play with my source.
C# .NET Conventions
Tired of DLL and Assembly hell? Try static linking instead with ObremSDK classes: a daily dose of code gems will keep your application healthy and fit. Copy individual obrem-.cs files into your project tree or share the ObremSDK sub-folder of your choice using the svn:externals property in your project's Subversion repository.
- Functionality isolated using classes not namespaces.
- Class source files prefixed with "obrem-".
- Class names are preceded by "Obrem".
- Classes of the same name between different versions (dotnet vs dotnet20 do the same thing; this goes for their members as well.
- Most .NET functions are static.
- The .NET version-specific folders are used only when the implementation of a class must differ between .NET versions. An application must never include both of these, otherwise name conflicts will occur.
- The NET_20 macro will be checked for compiling .NET 2.0-optimized code in the general "dotnet" folder.
- Experimental functions and classes are suffixed with an underscore, i.e. SomeNewFoo_.
ASP.NET 2.0 Recommendations
- Add obremsdk/dotnet/ and obremsdk/dotnet20/ as svn:externals in your App_Code directory.
|
http://code.google.com/p/obremsdk/
|
crawl-002
|
en
|
refinedweb
|
Apps Email Migration API.
Note: This API is only available to Google Apps Premier, Education, and Partner Edition domains, and cannot be used for migration into Google Apps Standard Edition email or Gmail accounts.
This document is intended for programmers who want to write client applications that can migrate email into Google Apps mailboxes.
It's a reference document; it assumes that you understand the concepts presented in the developer's guide, and the general ideas behind the Google data APIs protocol.
The Email Migration API defines only one type of feed: the mail item feed. In order to access this feed, your client must first authenticate to your Google Apps domain using ClientLogin (Authentication for Installed Apps).
The mail item feed is used to insert mail messages into hosted Gmail accounts associated with a Google Apps domain. Its feed URL is:
where yourDomain.com is your Google Apps domain name, and username is the username that will own the message after the migration. The username is only a username, not a full email address; for example, if you're migrating messages to be owned by liz@example.com, the username to use is
liz. The Content-Type of the
POST request must be
application/atom+xml or the server will reply with a
415 Unsupported Media Type status code.
The above feed only allows you to insert messages one at a time. In other words, you must make one HTTP request for each mail message you wish to insert. It is recommended instead that you access the batch mail item feed, which allows you to insert many messages in a single HTTP request. The batch feed has the URL:
Both of these feeds are write-only; that is, the only request method they support is HTTP
Note: Only domain administrators can migrate mail to accounts other than their own (by specifying a username other than their own to be used in the above URLs). When an end user is migrating mail, the username in the above URLs must be the same as the currently authenticated username.
In addition to the standard Google data API elements, the Email Migration API uses the following elements.
For information about the standard data API elements, see the Atom specification and the Common Elements document.
A Gmail label to be applied to an inserted mail message.
<apps:label xmlns:
namespace apps = "" start = label label = element apps:label { attribute labelName { xsd:string } }
A special Gmail property to be applied to an inserted mail message.
<apps:mailItemProperty xmlns:
namespace apps = "" start = mailItemProperty mailItemProperty = element apps:mailItemProperty { attribute value { "IS_DRAFT" | "IS_INBOX" | "IS_SENT" | "IS_STARRED" |
"IS_TRASH" | "IS_UNREAD" } }
The RFC 822 content of the mail message to be migrated.
<apps:rfc822Msg xmlns:" <liz? </apps:rfc822Msg>
namespace apps = "" start = rfc822Msg rfc822Msg = element apps:rfc822Msg { attribute encoding { "base64" | "none" }?, xsd:string }
|
http://code.google.com/apis/apps/email_migration/reference.html
|
crawl-002
|
en
|
refinedweb
|
We avoid, for now, getting into a discussion on whether inflation will rise in the first place. The jury is out to deal with that topic. On our part, we unravel exactly what the rise in oil prices means to you as an individual and investor.
Home loansAt this stage it would be presumptuous and even simplistic to conclude that a long-standing oil crisis could eventually lead to a rise in home loan rates.
The home loan rate industry is very competitive and a rise in loan rates is not as imminent as it appears. Home loan companies could decide to compromise on their profit margins and keep home loan rates at lower levels. So a rise in home loan rates is not as apparent as it may seem.
But in case of a sustained rise in interest rates (possibly due to inflation), companies could at some point react by raising rates.
So if you can bear near term risk, floating rate home loans are still a good option (in any case the rate offered is lower by about 1 per cent as compared to fixed rate loans). If you cannot take any additional risk, stick to the fixed rate type of loan.
Debt fundsAs we have explained, if inflation does become a problem due to the oil crisis and increased spending/investment by businesses then bond yields could rise (bond prices could fall, resulting in losses for debt fund investors). If this happens then longer-dated bonds could see sharper erosion in their value than shorter-dated paper. Therefore, it makes sense for investors to remain invested in short-term mutual funds, especially of the floating rate variety.
With the rolling 1-Yr inflation hovering around 6.3 per cent, it is apparent from the table that all debt funds are giving a negative real return (i.e. return adjusted for inflation) over 1-Yr.
Investors also have the option to invest in variable rate deposits. With variable rate deposits, the rate of return (fixed deposit rate) is reset at regular intervals to reflect market rates. So if interest rates were to rise, the rate of return on the variable rate deposit would be reset higher.
Equity fundsInflation does not affect stock markets like it affects debt markets, in the sense that the impact is not uniform across companies. For instance, capital-intensive companies across steel, engineering, cement sectors will be impacted differently as compared to companies in the software sector. The former will feel the impact of inflation more than the latter.
For long-term investors (with a 3-5 year investment horizon) in equity funds, the corrosive effect of inflation will be easier to swallow. In fact, equity is the best foil to counter inflation. Our recommendation � go for well-managed, well-diversified equity funds that have a mix of inflation-insensitive sectors (software) and core sectors (oil, steel). It is noteworthy that unlike debt funds, diversified equity funds have given a positive real return over 1-Yr after adjusting for inflation.
Looking for the best equity funds? Click here!
So while hardening of oil prices and inflation is a concern, you need not be a sitting duck. For investors with an appetite for risk, long-term investing in equity funds is one way to offset the impact of inflation. For risk-averse investors, its short-term income funds and deposits as also variable rate deposits.
Learn how the Union Budget 2005-06 impacts you. For the latest issue of Money Simplified absolutely FREE!Click here!
|
http://inhome.rediff.com/money/2005/mar/24perfin1.htm
|
crawl-002
|
en
|
refinedweb
|
VTK Coding Standards
From KitwarePublic
We only have a few coding standards but they have proved very useful.
- We only put one public class per file. Some classes have helper classes that they use, but these are not accessible to the user.
- Every class, macro, etc starts with either vtk or VTK, this avoids name clashes with other libraries. Classes should all start with vtk and macros or constants can start with either.
- Class names and file names are the same. This makes it easier to find the correct file for a specific class.
- We only use alphanumeric characters in names, [a-zA-z0-9]. So names like Extract_Surface are not welcome. We use capitalization to indicate words within a name. For example ExtractVectorTopology could be an instance variable. If it were a class it would be called vtkExtractVectorTopology. We capitalize the first letter of a name (excluding any preceding vtk). For local variables almost anything goes. Ideally we would suggest using same convention as instance variables except start their names with a lower case letter. e.g. extractVectorSurface.
- We try to always spell out a name and not use abbreviations. This leads to longer names but it makes using the software easier because you know that the SetRasterFontRange method will always be called that, not SetRFRange or SetRFontRange or SetRFR. When the name includes a natural abbreviation such as OpenGL, we keep the abbreviation and capitalize the abbreviated letters.
- We try to keep all instance variables protected. The user and application developer should access instance variables through Set/Get methods. To aid in this there are a number of macros defined in vtkSetGet.h that can be used. They expand into inline functions that Set/Get the instance variable and invoke a Modified() method if the value has changed.
- Use "this" inside of methods even though C++ doesn't require you to. This really seems to make the code more readable because it disambiguates between instance variables and local or global variables. It also disambiguates between member functions and other functions.
-.
- Make sure your code compiles without any warnings with -Wall and -O2.
- The indentation style can be characterized as the "indented brace" style. Indentations are two spaces, and the curly brace (scope delimiter) is placed on the following line and indented along with the code (i.e., the curly brace lines up with the code). Example:
if (this->Locator == locator) { return; } for (i = 0; i < this->Source->GetNumberOfPoints(); i++) { p1 = this->Source->GetPoint(i); [...] }
- The header file of the class should include only the superclass's header file. If you do not, the header test run as part of the VTK dashboard will report an error. If any other includes are absolutely necessary, include comment at each one describing why it should be included:
#include "vtkKWWindow.h" #include "vtkClientServerID.h" // Needed for InteractorID #include "vtkPVConfig.h" // Needed for PARAVIEW_USE_LOOKMARKS
- Avoid using vtkSetObjectMacro since it will require including the header file of another class. Use the vtkCxxSetObjectMacro instead. For example:
// Class declaration: // Description: // Set/Get the array used to store the visibility flags. virtual void SetVisibilityById(vtkUnsignedCharArray* vis);
// Cxx file vtkCxxSetObjectMacro(vtkStructuredVisibilityConstraint, VisibilityById, vtkUnsignedCharArray);
- All subclasses of vtkObject should include a PrintSelf() method that prints all publicly accessible ivars. For example:
void vtkObject::PrintSelf(ostream& os, vtkIndent indent) { os << indent << "Debug: " << (this->Debug ? "On\n" : "Off\n"); os << indent << "Modified Time: " << this->GetMTime() << "\n"; this->Superclass::PrintSelf(os, indent); os << indent << "Registered Events: "; if ( this->SubjectHelper ) { os << endl; this->SubjectHelper->PrintSelf(os,indent.GetNextIndent()); } else { os << "(none)\n"; } }
- All subclasses of vtkObject should include a type macro in their class declaration. For example:
class VTK_COMMON_EXPORT vtkBox : public vtkImplicitFunction { public: vtkType.
- Do not use 'id' as a variable name in public headers as it is a reserved word in Objective-C++. }
|
http://www.vtk.org/Wiki/VTK_Coding_Standards
|
crawl-002
|
en
|
refinedweb
|
I've been working on writing a Ruby RDF implementation. It's been quite good fun. Just thought I'd blog about some of the things I've been doing in building it and thinking about how to develop it.
First off the bat, I have to say I absolutely adore Ruby. It is, along with Python, one of the sexiest languages ever created. It's amazingly relaxing and pleasant. When I'm writing Java or PHP, I usually end up swearing and shouting at my monitor and getting all negative. Even if Ruby is slow or Rails doesn't scale or whatever, it lowers stress levels and probably blood pressure. We should teach it in schools, for chrissakes.
Second thing is I've been using RSpec (pronounced "Arr Spec", not "Arse Peck"), the tool for behaviour-driven development, which is a fancy reformulation of test-driven development. I like it a lot. TextMate gives me a pretty window filled with green boxes when I do things right and an informative window filled with black and red boxes when I fail. More than that, though, it actually makes you focus on writing the important tests - rather than just a test to ensure that the class exists (yes, it probably does!), it makes you do object-oriented programming in a way that solves problems quickly and efficiently. I'm probably not in a great position to judge, but I think I'm turning out reasonably pretty code that's no more than is required to solve the problem. Development becomes quite easy - I have a printed copy of Concepts and Abstract Syntax on my desk, and I then work through it turning it into RSpec code, then finally coming back, rewriting the RSpec and the tests, then finally writing a small bit of code that solves the problem.
I've been committing all this code and pushing the updates onto Github, but in a new branch called 'new'. You can take a peek. It's not yet functional, but the tests all run and pass.
The folks on the #ruby IRC channel have been helpful too - I found out about method_missing on there, which turned what could have been quite painful (namespace support) into a 14 line class. Namespaces in Python's rdflib look a bit like this FOAF['name'], but in Ruby, I've opted for foaf.name instead - using the method_missing to return a uriref for any method call on a namespace class.
This is how I want my code to end up looking:
graph.add_triple x.tom, foaf.knows, x.dan
graph.add_triple x.tom, foaf.firstName, "Tom"
It’s pretty damn close to N3 in beauty, and it’s certainly better than - oh - let me think:
String fullName = "John Smith";
Resource johnSmith = model.createResource(personURI).addProperty(VCARD.FN, fullName);
Heh, no offence meant - we all love Jena really, but it’s not exactly beautiful.
I was then thinking more broadly about where we go next with this. I’m probably going to pull most of the parsing and writing out of the existing Rena tree - it supports RDF/XML and N-Triples. It’d be nice to be able to push out Turtle and N3 too. And if I’m working on those, I’m going to need to group things by subject. Which led me to thinking about whether resource may be a good way for non-experts to access data inside graphs. danbri promptly explained the problem with the use of a pair of inverse properties - namely foaf:maker and foaf:made. You can’t expect every resource will do these the same, which means you either bite the bullet and accept data loss, or you perform some magic where you look for resources as being both subjects and in-graph objects. Madness lies that way.
More madness lies down other paths though. We could look write some kind of extensibility method into the Ruby library which means that people could basically define a class for, say, FoafPerson, which would be a sub-class of a class called Resource (which would not be used tremendously often on it’s own). That way, someone who wanted to read FOAF data could simply import FoafPerson and then read data out of it like they already do with other classes that hook up to web services. But this seems a little bit of a let-down. Isn’t the point of RDF that one doesn’t need to have any domain-specific code (as is required in, say, parsing XML or microformats where the parser needs to know what it’s looking for).
Then the other approach is to have it look for an RDF Schema or OWL ontology, pull that in and use that. That’s also good, but that’s more code to write, more classes for people to understand and we are trying to keep this simple enough to be usable.
Still gotta think about this. I know that Chris Bizer’s RAP library has a three-pronged approach - statement, resource and ontology models. That’s good - as you can have it at whatever level you like it. I’m hoping that some of Ruby’s flexible language will mean we can have a simple Graph class that has all the possible approaches joined together in a more seamless way. The average user doesn’t want to have to think “I want to browse this data in a statement-centric way”. They think “I want to iterate through all the people and do x with their names and e-mail addresses”.
Which brings us to SPARQL. SPARQL is cool and amazing, but I think it’s too complex for most people. That’s why I think cracking the model problem and making a really usable data model for people is going to be a good way to go.
But here’s the thing. I will not and can not build this on my own. I need collaborators to make it all work. That means people with Ruby experience and experience parsing RDF. I’m still making myself familiar with all the cool, sexy Ruby stuff like RSpec and Rake and Capistrano and so on, and need people to make sure I don’t do anything stupid. So I’m throwing open the gauntlet - come and help fix the code. Clone my repository and hack on it. Doesn’t matter if you break it, you can always revert your changes. Send me patches. Come and chat on IRC about it. Poke people you know who are good at this stuff and get them to break and fix code and send patches. As a little bonus, I’ve got some Github invites to give out to people who want to help out. Send me patches and you may just get an invite to the sexiest repository service ever.
Anyway, over and out. Time for a little late-night PlayStation crack and then to bed. I’ll be at Geek Dinner tomorrow, so come and bug me about Ruby. If my laptop weren’t still in the Apple Store getting fixed up by a Genius (or not), I’d suggest pair programming.
P.S. You can interpret the title any way you like. Personally, I think beauty and beastliness are not disjoint properties. Beasts (in the sense of animals) can be beautiful, and even moral beastliness can be aesthetically pleasing too (think Nazi uniforms, propaganda films and posters and horror movies).
|
http://tommorris.org/blog/2008/02/27
|
crawl-002
|
en
|
refinedweb
|
. The settings service and Plasma are both complex programs, so combining them increases the chances that a bug in one can crash the other. So we put it in a different process, forcing one layer of indirection already.
Meanwhile Frederik Gladhorn and I were refactoring the storage layer for Connection settings so that it is independent of NetworkManager. One of the good things about NetworkManager's settings is that they are so comprehensive the classes I developed to configure them cover all of wicd's settings too. Frederik namespaced the general classes while I moved the DBUS code that is specific to NetworkManager 0.7 out of the libs/ directory. Since it is generated automatically from some .kcfg files by a modified kconfig_compiler and then extra stuff is patched into those files, this was quite a lot of work.
Our students from the University of Bergen, Anders, Peder and Sveinung, were busy working on the mobile broadband improvements for their degree group project. This includes a set of DBUS bindings for the ModemManager auxiliary interface of NetworkManager, which were used to successfully send an SMS and will support useful functions like retrieving cellular signal strength, a set of Qt widgets around libmbca, taking the pain out of configuring cellular data connections, and a test harness. We hope they will continue with KDE development after they graduate.
The status of Network Management as of Sunday 7 June then is that it doesn't even compile. I'm working on remedying that as soon as possible. If you do want to use Network Management from SVN, take a safe revision like r978079 until you hear otherwise.
We'd like to thank the Trolls for being great hosts and the KDE eV for sponsoring this sprint.!
We've continued planning this morning. The big goals for this meeting was to 'get Network Management finished' and 'make it usable on non-NetworkManager systems' but our discussions last night showed us that the current complexity of the applet prevents both goals - it takes me several days of getting up to speed with the code before I dare to try to code it and it's deterring Frederik from making significant changes. So we identified all the pieces and started juggling them last night over pizza until they landed in a way that makes sense.
The big picture is that most of the complexity will move from the Plasma applet into the KDED module. This module will abstract different network management systems by being replaceable. The module provides a simple list of the things to show in the applet's popup. Configuration UI and stored settings are to be shared - we think that the current (NM-derived) settings schema is comprehensive enough.
There's a temptation to write an über-system that models everything and allows any number of applet implementations but we're resisting that as it would never be finished. I'm a little bit disappointed that we won't be adding a lot of polish and nice to have features but a sprint is the ideal time to swing a large hammer at hard architectural issues that otherwise would stunt Network Management's growth.. TODOs include cleaning up UI glitches, fixing some exotic VPN types and auth types and deciding how to abstract different backends like wicd and ConnMan.
If you want to help out or just rubberneck, we're in #solid..
Looking forward, I've been tidying up my computers, installing openSUSE Factory (the alpha0 edition before anyone knows when 11.2 will really be done, and before everyone starts breaking things in earnest), deleting dozens of Build Service checkouts that were finished or forgotten about, and purging my unorganised piles into nice clean GTD lists. That's giving me some peace of mind to think about what to do for KDE on openSUSE 11.2. We'll be having an IRC meeting next Wednesday (1700UTC) to coordinate the team's efforts, but I'm starting to think about things I could do myself. Things like a return for KPersonalizer, a KControl-like treeview for System Settings, or helping tame the Plasma Activities/Zooming UI system into something usable. That and of course completing Network Management (oh, did I let slip the name we chose?). If anyone is already working in those areas, please let me know.
EDIT: Oh and I should point out that openFATE is of course open for business and waiting to receive your ideas. Find out how to use openFATE here..
Before I move it and start telling people about it, I want to decide on a final name. This is important as it's not just what appears in the UI, but also determines the names of files like config files for connections, KNotify settings, translation catalogs, none of which you want to mess about with after a release. So I'm looking for suggestions for and opinions about a good name.
Syndicate Blogs
|
http://www.kdedevelopers.org/blog/77
|
crawl-002
|
en
|
refinedweb
|
A mobile phone is a cool gadget to play with, especially when I can run my favourite programming language (no prize for guessing what it is!) on it! That was the logic which made me purchase a Nokia Series 60 smartphone, the N-Gage QD. This article describes a few experiments I did with the mobile - like setting up Bluetooth communication links, writing Python/C code and emulating serial ports.
Bluetooth is a short distance wireless communication standard. It is commonly used to facilitate data transfer between PC's and cell phones/PDA's without the hassle of `wired' connections. The hardware which provides Bluetooth connectivity on the PC is a small device called a `USB-Bluetooth dongle' which you can plug onto a spare USB port of your machine. I approached the local electronics dealer asking him for such a device and got one which didn't even have the manufacturer's name printed on it. The driver CD which came with it of course contained only Windows software. Deciding to try my luck, I plugged the device on and booted my system running Fedora Core 3 - bluetooth service was started manually by executing:
sh /etc/init.d/bluetooth startHere is the output I obtained when the command `hciconfig' ( which is similar to the `ifconfig' command used to configure TCP/IP network interfaces) was executed:
hci0: Type: USB BD Address: 00:11:B1:07:A2:B5 ACL MTU: 192:8 SCO MTU: 64:8 UP RUNNING PSCAN ISCAN RX bytes:378 acl:0 sco:0 events:16 errors:0 TX bytes:309 acl:0 sco:0 commands:16 errors:0My no-name USB-Bluetooth dongle has been detected and configured properly! The number 00:11:B1:07:A2:B5 is the Bluetooth address of the device.
The next step is to check whether Linux is able to sense the proximity of the mobile. If your phone has bluetooth disabled, enable it and run the following command (on the Linux machine):
hcitool scanHere is the output obtained on my machine:
Scanning ... 00:0E:6D:9A:57:48 DijkstraThe `BlueZ' protocol stack running on my GNU/Linux box has `discovered' the Nokia N-Gage sitting nearby and printed its Bluetooth address as well the name which was assigned to it, `Dijkstra'.
For security reasons, some interactions with the mobile require that the device is `paired' with the one it is interacting with. First, store a number (4 or more digits) in the file /etc/bluetooth/pin (say 12345). Stop and restart the bluetooth service by doing:
sh /etc/init.d/bluetooth stop sh /etc/init.d/bluetooth startNow initiate a `pairing' action on the mobile (the phone manual will tell you how this is done). The software on the phone will detect the presence of the Bluetooth-enabled Linux machine and ask for a code - you should enter the very same number which you have stored in /etc/bluetooth/pin on the PC - the pairing process will succeed.
Files can be transferred to/from the Linux machine using a high level protocol called OBEX (standing for OBjectEXchange, originally designed for Infrared links). First, you have to find out whether the mobile supports OBEX based message transfer. Try running the following command on the Linux machine (the number is the bluetooth address of the phone):
sdptool browse 00:0E:6D:9A:57:48You might get voluminous output - here is part of what I got:
Service Description: OBEX Object Push Service RecHandle: 0x10005 Service Class ID List: "OBEX Object Push" (0x1105) Protocol Descriptor List: "L2CAP" (0x0100) "RFCOMM" (0x0003) Channel: 9 "OBEX" (0x0008)OBEX is built on top a lower-level protocol called RFCOMM. The `Object Push' service uses RFCOMM `channel' 9. Let's try to upload a file to the phone; run the following command on the Linux machine:
obex_push 9 00:0e:6d:9a:57:48 a.txtThe phone will respond by asking you whether to accept the message coming over the bluetooth link. The same command, invoked without any option, can be used to receive files sent from the mobile over the bluetooth link (read the corresponding `man' page for more details).
Nokia has recently done a port of Python to the `Series 60' smartphones running the Symbian operating system. The Python interpreter as well as a few important modules are packaged into a single .sis file (somewhat like the Linux RPM file) which can be obtained from. The file to be installed is named PythonForSeries60_pre_SDK20.SIS. The first step is to transfer this file to the mobile via obex_push. Trying to open the file on the mobile will result in the Nokia installer program running - it will ask you whether to install Python on the limited amount of memory which the phone has or to an additional MMC card (if one is present). Once the installation is over, you will see a not-so-cute Python logo on the main menu of the phone - Figure 1 is a screenshot I took of the main menu.
Figure 2 shows the interactive Python prompt at which you can try typing Python scripts!
You can write Python scripts on the Linux machine and upload them to the mobile with `obex_push'. If you try to open these scripts (on the mobile), the `applications manager' will ask you whether to install the files as Python scripts or not. Once installed as scripts, you can execute them by following the instructions displayed on the screen when you open the `Python' application from the main menu.
Figure 3 shows the output obtained by installing and running the following script on the mobile:
import appuifw # The application UI framework appuifw.app.title = u'Cool Python' appuifw.note(u'OK', 'info')
Application programs running on both the phone as well as the Linux machine interface with the Bluetooth protocol stack via the socket API. Listing 1 shows a simple client program running on the mobile which connects with a server running on the Linux machine and sends it a message; the server code is shown in Listing 2.
The Python client program running on the mobile opens a Bluetooth socket and connects to the PC whose device address is specified in the variable `ATHLON'. Once the connection is established, it simply sends a string `Hello, world'.
The server program running on the PC opens a Bluetooth stream socket, binds it to RFCOMM channel 4 and calls `accept' - the server is now blocked waiting for a connection request to arrive from the client. Once the request arrives, the server comes out of the accept, returning a `connected' socket calling `recv' on which will result in the server getting the string which the client had transmitted.
The `bacpy' function in the server program is defined as an inline function in one of the header files being included - so you need not link in any extra library to get the executable. But if you are using any of the other Bluetooth utility functions like `ba2str', you have to link /usr/lib/libbluetooth.so to your code.
There is an interesting Python interface to the Bluetooth library in Linux called `PyBlueZ' available for download from. It simplifies the process of writing bluetooth socket programs on the Linux machine. Listing 3 shows the Python implementation of the server program described in the previous section.
Programs like `minicom' are used to talk to devices connected over a serial link (say a modem). There is a neat software trick to present a `serial-port-like' view of a bluetooth link so that programs like `minicom' can manipulate the connection effortlessly. Let's try it out.
First, edit /etc/bluetooth/rfcomm.conf so that it looks like the following:
rfcomm0 { bind no; device 00:0e:6d:9a:57:48; channel 1; comment "Example Bluetooth device"; }After stopping and restarting the bluetooth service, run the following command:
rfcomm bind /dev/rfcomm0You should see a file called `rfcomm0' under /dev after executing the above command. Now, you can set up `minicom' by running:
minicom -m -sThe only thing to do is to set the name of the device to connect to as /dev/rfcomm0. Save the new configuration as the default configuration and invoke:
minicom -mMinicom is now ready to talk to your phone! Type in `AT' and the program will respond with an `OK'. Say you wish to make your phone dial a number. Just type:
atdt 1234567;There are many other AT commands you can experiment with; try googling for say `mobile phone AT commands' or something of that sort!
After you have finished with your virtual serial port manipulations, you should run:
rfcomm release /dev/rfcomm0to `release' the serial-bluetooth link.
Once you get the serial port emulation working, there is another interesting hack to explore. The Nokia Python distribution comes with a program called `btconsole.py'. On one console of your Linux machine, run the command:
rfcomm listen /dev/rfcomm0Now run `btconsole.py' on the phone. You will see that after a few seconds, `rfcomm' will respond with a `connected' message. Once you get this message, take another console and run:
minicom -mWhat do you see on the screen? A Python interactive interpreter prompt! You can now type in Python code snippets and execute them on the phone on-the-fly! Isn't that cool?
I was curious to know how Microsoft's Windows XP operating system, famous for its `ease of use', would compare with Linux when it comes to interacting with my NGage QD. I installed the Windows driver for my no-name usb-bluetooth dongle and tried to get the Nokia PC suite up and running on an XP machine - maybe it's because I am far more experienced in GNU/Linux than on MS operating systems, but I found the XP experience far less `friendly' than MS would care to admit. I believe that most of the `user friendliness' of the Microsoft operating system comes from hardware vendors and application developers tightly integrating their products with the platform rather than any inherent quality of the OS as such.
For a general introduction to Bluetooth technology, see. An interesting paper on Bluetooth security is available at. has plenty of information regarding Bluetooth and Linux; I found the document `Bluetooth Programming for Linux' () very informative.
Lots of information about Python on series 60 mobiles is available at. ObexFTP seems to be an interesting tool - you can get it from. There are some documents floating on the net which describe how you can do an NFS mount of your phone's file system - try a google search for more info.
Source code/errata concerning this article will be available at.
|
http://pramode.net/articles/lfy/mobile/pramode.html
|
crawl-002
|
en
|
refinedweb
|
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
I recently posted GDB patches to support debugging statically linked applications using NPTL; they need a corresponding adjustment in glibc. For reference, you can reproduce this problem with GDB's staticthreads.exp testcase. Here's the basic problem: the API doesn't directly offer an "OK, libpthread.so is now initialized and available" event. GDB checks at the beginning of execution (for the static linking case) and again at every shared library load event for the symbols libthread_db needs; if they are available it initializes libthread_db. But if libpthread.a is statically linked into the executable, there is a window where the symbols are available but the library is not yet initialized. In LinuxThreads libthread_db detected this case and reported a single "fake" thread; enough of the standard operations worked on the fake thread that GDB could display and debug it. NPTL's libthread_db tries to do the same thing. But in NPTL, before libpthread is initialized, there's no way to predict the thread's TID. All sorts of operations fail; iterating over threads, for instance, reads an uninitialized value from the target to get the TLS base. And we can't fill in useful information in td_thr_get_info, because we don't have a TID yet. The best solution I found was to present a view in which there are no threads at all. Then, keyed off the global event mask, report the "creation" of the first thread when the library has initialized. Now GDB sets the global thread event mask right away, initializes its thread list, and if there are none, waits for glibc to inform it that the first thread is ready. Unfortunately this won't work with existing GDBs; they will get confused and resume all zero threads from the thread list, and then wait for the program to stop. You need the patches I've posted, which I plan to check in soon. In order to work around this problem, which would be an unpleasant version lock, I've arranged to only do this for static executables (which previously didn't work); the same approach used for static executables works with a patched GDB and a patched glibc for dynamic executables, but the old approach is correct enough so this is safer. Tested on x86_64-pc-linux-gnu, using new glibc with both old and new gdb, and the glibc testsuite. Is this patch OK? -- Daniel Jacobowitz CodeSourcery 2006-03-01 Daniel Jacobowitz <dan@codesourcery.com> * init.c (__pthread_initialize_minimal_internal) [!__SHARED]: Report a thread creation event. * pthreadP.h (__nptl_threads_events, __nptl_last_event): New. * pthread_create.c (__nptl_threads_events, __nptl_last_event): Change from static to hidden. 2006-03-01 Daniel Jacobowitz <dan@codesourcery.com> * structs.def (__libc_setup_tls): New entry. * td_ta_thr_iter.c (iterate_thread_list): Adjust fake_empty handling. * td_thr_validate.c (td_thr_validate): Adjust uninit handling. Index: nptl/init.c =================================================================== RCS file: /big/fsf/rsync/glibc/libc/nptl/init.c,v retrieving revision 1.56 diff -u -p -r1.56 init.c --- nptl/init.c 13 Feb 2006 01:22:36 -0000 1.56 +++ nptl/init.c 1 Mar 2006 17:12:42 -0000 @@ -268,6 +268,28 @@ __pthread_initialize_minimal_internal (v INIT_LIST_HEAD (&__stack_user); list_add (&pd->list, &__stack_user); +#ifndef SHARED + /* If event reporting has been enabled for the process, report the + "creation" of the first thread now. */ + const int _idx = __td_eventword (TD_CREATE); + const uint32_t _mask = __td_eventmask (TD_CREATE); + + if ((_mask & __nptl_threads_events.event_bits[_idx]) != 0) + { + pd->eventbuf.eventnum = TD_CREATE; + pd->eventbuf.eventdata = pd; + + /* Enqueue the descriptor. */ + do + pd->nextevent = __nptl_last_event; + while (atomic_compare_and_exchange_bool_acq (&__nptl_last_event, + pd, pd->nextevent) + != 0); + + /* Now call the function which signals the event. */ + __nptl_create_event (); + } +#endif /* Install the cancellation signal handler. If for some reason we cannot install the handler we do not abort. Maybe we should, but Index: nptl/pthreadP.h =================================================================== RCS file: /big/fsf/rsync/glibc/libc/nptl/pthreadP.h,v retrieving revision 1.58 diff -u -p -r1.58 pthreadP.h --- nptl/pthreadP.h 15 Feb 2006 16:51:35 -0000 1.58 +++ nptl/pthreadP.h 1 Mar 2006 17:12:42 -0000 @@ -290,6 +290,8 @@ extern void __nptl_create_event (void); extern void __nptl_death_event (void); hidden_proto (__nptl_create_event) hidden_proto (__nptl_death_event) +extern td_thr_events_t __nptl_threads_events attribute_hidden; +extern struct pthread *__nptl_last_event attribute_hidden; /* Register the generation counter in the libpthread with the libc. */ #ifdef TLS_MULTIPLE_THREADS_IN_TCB Index: nptl/pthread_create.c =================================================================== RCS file: /big/fsf/rsync/glibc/libc/nptl/pthread_create.c,v retrieving revision 1.50 diff -u -p -r1.50 pthread_create.c --- nptl/pthread_create.c 15 Feb 2006 16:53:15 -0000 1.50 +++ nptl/pthread_create.c 1 Mar 2006 17:12:42 -0000 @@ -39,10 +39,10 @@ static int start_thread (void *arg); int __pthread_debug; /* Globally enabled events. */ -static td_thr_events_t __nptl_threads_events; +td_thr_events_t __nptl_threads_events attribute_hidden; /* Pointer to descriptor with the last event. */ -static struct pthread *__nptl_last_event; +struct pthread *__nptl_last_event attribute_hidden; /* Number of threads running. */ unsigned int __nptl_nthreads = 1; Index: nptl_db/structs.def =================================================================== RCS file: /big/fsf/rsync/glibc/libc/nptl_db/structs.def,v retrieving revision 1.3 diff -u -p -r1.3 structs.def --- nptl_db/structs.def 4 Feb 2006 00:47:58 -0000 1.3 +++ nptl_db/structs.def 1 Mar 2006 17:12:42 -0000 @@ -48,6 +48,7 @@ DB_STRUCT (td_eventbuf_t) DB_STRUCT_FIELD (td_eventbuf_t, eventnum) DB_STRUCT_FIELD (td_eventbuf_t, eventdata) +DB_FUNCTION (__libc_setup_tls) DB_SYMBOL (stack_used) DB_SYMBOL (__stack_user) DB_SYMBOL (nptl_version) Index: nptl_db/td_ta_thr_iter.c =================================================================== RCS file: /big/fsf/rsync/glibc/libc/nptl_db/td_ta_thr_iter.c,v retrieving revision 1.8 diff -u -p -r1.8 td_ta_thr_iter.c --- nptl_db/td_ta_thr_iter.c 4 Apr 2004 00:31:10 -0000 1.8 +++ nptl_db/td_ta_thr_iter.c 1 Mar 2006 17:12:42 -0000 @@ -1,5 +1,6 @@ /* Iterate over a process's threads. - Copyright (C) 1999,2000,2001,2002,2003,2004 Free Software Foundation, Inc. + Copyright (C) 1999,2000,2001,2002,2003,2004, 2006 + Free Software Foundation, Inc. This file is part of the GNU C Library. Contributed by Ulrich Drepper <drepper@redhat.com>, 1999. @@ -41,13 +42,24 @@ iterate_thread_list (td_thragent_t *ta, if (next == 0 && fake_empty) { - /* __pthread_initialize_minimal has not run. - There is just the main thread to return. */ - td_thrhandle_t th; - err = td_ta_map_lwp2thr (ta, ps_getpid (ta->ph), &th); + /* __pthread_initialize_minimal has not run yet. If we have + initialized TLS, the main thread still has a valid ID; + in static applications the main thread doesn't have + an ID yet, so skip it. */ + psaddr_t taddr; + + err = DB_GET_SYMBOL (taddr, ta, __libc_setup_tls); if (err == TD_OK) - err = callback (&th, cbdata_p) != 0 ? TD_DBERR : TD_OK; - return err; + return TD_OK; + else + { + /* There is just the main thread to return. */ + td_thrhandle_t th; + err = td_ta_map_lwp2thr (ta, ps_getpid (ta->ph), &th); + if (err == TD_OK) + err = callback (&th, cbdata_p) != 0 ? TD_DBERR : TD_OK; + return err; + } } /* Cache the offset from struct pthread to its list_t member. */ Index: nptl_db/td_thr_validate.c =================================================================== RCS file: /big/fsf/rsync/glibc/libc/nptl_db/td_thr_validate.c,v retrieving revision 1.4 diff -u -p -r1.4 td_thr_validate.c --- nptl_db/td_thr_validate.c 1 Jun 2004 21:42:02 -0000 1.4 +++ nptl_db/td_thr_validate.c 1 Mar 2006 17:12:42 -0000 @@ -1,5 +1,6 @@ /* Validate a thread handle. - Copyright (C) 1999, 2001, 2002, 2003, 2004 Free Software Foundation, Inc. + Copyright (C) 1999, 2001, 2002, 2003, 2004, 2006 + Free Software Foundation, Inc. This file is part of the GNU C Library. Contributed by Ulrich Drepper <drepper@redhat.com>, 1999. @@ -77,13 +78,23 @@ td_thr_validate (const td_thrhandle_t *t if (err == TD_NOTHR && uninit) { - /* __pthread_initialize_minimal has not run yet. - But the main thread still has a valid ID. */ - td_thrhandle_t main_th; - err = td_ta_map_lwp2thr (th->th_ta_p, - ps_getpid (th->th_ta_p->ph), &main_th); - if (err == TD_OK && th->th_unique != main_th.th_unique) + /* __pthread_initialize_minimal has not run yet. If we have + initialized TLS, the main thread still has a valid ID; + in static applications the main thread doesn't have + an ID yet, so skip it. */ + psaddr_t taddr; + + err = DB_GET_SYMBOL (taddr, th->th_ta_p, __libc_setup_tls); + if (err == TD_OK) err = TD_NOTHR; + else + { + td_thrhandle_t main_th; + err = td_ta_map_lwp2thr (th->th_ta_p, + ps_getpid (th->th_ta_p->ph), &main_th); + if (err == TD_OK && th->th_unique != main_th.th_unique) + err = TD_NOTHR; + } } }
|
http://sourceware.org/ml/libc-alpha/2006-03/msg00014.html
|
crawl-002
|
en
|
refinedweb
|
An Injecto is a Groovy class that can be injected into other classes.
Want to make all String s bark? Me neither, but here is how you could do it using Injecto's ...
import injecto.Injecto class Dog { def bark = { -> println "woof" } } use (Injecto) { String.inject(Dog) } "".bark() // prints "woof"
Interesting huh? Well maybe not. But it can be used for for something useful. Injecto is the mechanism used to inject dynamic behaviour for the Gldapo library.
Some other key features ...
Injecto is an attempt at bring Ruby mixin like behaviour to Groovy.
It is also serves the purpose of allowing dynamic functionality to be discretely packaged and documented using Groovydoc.
|
http://ldaley.com/injecto/
|
crawl-002
|
en
|
refinedweb
|
Oh, what a tangled web
I often get asked how LINQ to SQL is supposed to be used with Test Driven Design (TDD). Okay, not really. People aren’t knocking on my door or calling me at 3:00 am. I do, however, occasionally read developers angst on their personal blogs. It seems they are trying to actually do this, but are often confounded by the DataContext and its dearth of appropriate interfaces. Of course, my original knee-jerk reaction is to question why anyone would want or need to do this in the first place. Certainly, abstraction at a higher level of the application would be more appropriate, yada yada yada. Eventually, my internal ranting ebbs and my practical side takes over. I start thinking like an engineer. How would I go about it? If only I’d added such fundamental interfaces such as IDataContext and ITable<T> before hitting RTM, all would be so much easier. Yet, TDD was not a priority. It wasn’t even on the list of features that didn’t make the cut. Still, how would I do it? Then I start wishing I could override the DataContext’s methods and substitute my own logic. Yet these methods are not virtual and cannot be overridden. Then with fitting irony I recall reading the other developer blogs that pointed this out too.
Of course, this only makes the problem that much more interesting and worthy of a good hack. I consider wrapping the DataContext in some other layer that looks exactly like it and abstract it that way, but then realize it would certainly trip the system up, especially deep in the query translation engine where it expects to find references to specific types. Instead, the ideal solution would keep the DataContext the same, yet allow me to do something other than hitting the database when a query is executed. If only LINQ to SQL had a public provider model, I could simply plug a new one in and use it to intercept all interaction with the database. Oh, double irony, as there is no such provider model, at least not a public one. Grin..
Fortunately, the DataContext has a nice little ‘provider’ instance variable just waiting to be overwritten. A little bit of reflection can make quick work of that. The trouble is how to specify a new provider. The DataContext only talks to it through an interface (as it should), and yet that interface is internal to the LINQ to SQL assembly. The programming language won’t let you define your own implementation. How do you go about implementing an interface that you can’t even say the name of in your source code?
Actually, I can think of two ways; 1) write a bunch of reflection emit code that generates an implementation at runtime or 2) trick the runtime into thinking some existing object implements the interface. You can probably guess where I am going from here, as every good hack needs a good trick. Besides, a bunch of reflection emit code would be a lot more work. Onward to the fun solution!
This is where CLR grand-interception-theory comes in; in the CLR you can intercept any interaction with any object, really, as long as it’s a method call and the object derives from MarshalByRef. Actually, that’s not really true, you can intercept more than method calls, or at least they don’t start out being method calls, and they don’t necessarily need to be on only MarshalByRef objects. Still, not only do I want to intercept calls on an object, I want to make the object appear to implement an interface and intercept the calls on that interface. That’s a tall order, to be sure. But it can be done.
The interception capability is the underpinnings of remoting (aka DCOM) support in the runtime. I can use it to make an object masquerade as another object. The original intention was to enable client-side proxy objects to appear to implement the API of an object that only really exists on a server. The term ‘MarshalByRef’ refers to the DCOM behavior of marshalling a reference to the object from the server back to the client, such that calls on the client-side proxy are marshaled back to the server. It works by the JITer injecting specialized thunks into the code that identify and handle calls to these special dopplegangers. The really interesting thing to note is that interfaces in the runtime work nearly the same. They also have thunks that are capable of recognizing these proxies and acting accordingly; quite possibly because COM is so dependent on multitudes of interfaces. However, regardless of the reason they exist, I can use this mechanism to wedge my own provider implementation into the mix.
What I first need to do is define a proxy object that will intercept these calls. The remoting mechanism actually uses two different proxies, one that masquerades as the type (the transparent proxy) and one that receives the interception (the ‘real’ proxy.) Both of these guys are intended to exist on the client. The real proxy is supposed to be the object that actually implements the marshalling behavior. My guess is that the only reason that I’m even allowed to implement my own real proxy is to enable marshalling over newer communication layers. Fortunately, I can use this proxy to simply act as an interceptor to do my bidding.
The next question I faced was what to do when I actually intercepted the calls. Should I forward them on to some new grand public provider model? That just seemed a bit over the top. Instead, I chose to redirect the calls back to methods on the DataContext that can be overridden. It was a quicker hack and introduces far fewer concepts to those already familiar with the DataContext. And that’s really what you wanted all along, anyway, wasn’t it?
So I reveal to you, the new and shiny ExtensibleDataContext, one with a few new poorly named methods that you can actually override and implement yourself.
using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Linq.Expressions;
using System.Text;
using System.Reflection;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Activation;
using System.Runtime.Remoting.Proxies;
using System.Runtime.Remoting.Messaging;
using System.Runtime.Remoting.Services;
using System.Data;
using System.Data.Common;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Data.Linq.Provider;
namespace System.Data.Linq
{
public class ExtensibleDataContext : DataContext
{
public ExtensibleDataContext(object connection, MappingSource mapping)
: base("", mapping)
{
FieldInfo providerField = typeof(DataContext).GetField("provider", BindingFlags.Instance | BindingFlags.NonPublic);
object proxy = new ProviderProxy(this).GetTransparentProxy();
providerField.SetValue(this, proxy);
this.Initialize(connection);
}
protected virtual void Initialize(object connection)
{
private TextWriter LogImpl { get; set; }
private DbConnection ConnectionImpl { get; set; }
private DbTransaction TransactionImpl { get; set; }
private int CommandTimeoutImpl { get; set; }
protected internal virtual void ClearConnectionImpl()
protected internal virtual void CreateDatabaseImpl()
protected internal virtual void DeleteDatabaseImpl()
protected internal virtual bool DatabaseExistsImpl()
return false;
protected internal virtual IExecuteResult ExecuteImpl(Expression query)
return new ExecuteResult(null);
protected class ExecuteResult : IExecuteResult
object value;
public ExecuteResult(object value)
{
this.value = value;
}
public object GetParameterValue(int parameterIndex)
return null;
public object ReturnValue
get { return this.value; }
public void Dispose()
IDisposable d = this.value as IDisposable;
if (d != null)
d.Dispose();
protected internal virtual object CompileImpl(Expression query)
return null;
protected internal virtual IEnumerable TranslateImpl(Type elementType, DbDataReader reader)
protected internal virtual IMultipleResults TranslateImpl(DbDataReader reader)
protected internal virtual string GetQueryTextImpl(Expression query)
protected internal virtual DbCommand GetCommandImpl(Expression query)
public class ProviderProxy : RealProxy, IRemotingTypeInfo
ExtensibleDataContext dc;
internal ProviderProxy(ExtensibleDataContext dc)
: base(typeof(ContextBoundObject))
this.dc = dc;
}
public override IMessage Invoke(IMessage msg)
if (msg is IMethodCallMessage)
{
IMethodCallMessage call = (IMethodCallMessage)msg;
if (call.MethodBase.DeclaringType.Name == "IProvider" && call.MethodBase.DeclaringType.IsInterface)
{
MethodInfo mi = typeof(ExtensibleDataContext).GetMethod(call.MethodBase.Name + "Impl", BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.DeclaredOnly);
if (mi != null)
{
try
{
return new ReturnMessage(mi.Invoke(this.dc, call.Args), null, 0, null, call);
}
catch (TargetInvocationException e)
return new ReturnMessage(e.InnerException, call);
}
}
}
throw new NotImplementedException();
public bool CanCastTo(Type fromType, object o)
return true;
public string TypeName
get { return this.GetType().Name; }
set { }
}
}
The ExtensibleDataContext’s constructor has the job of overwriting the DataContext’s private ‘provider’ variable. It creates a new ProviderProxy instance and assigns it to the private field using FieldInfo.SetValue(). The implementation of SetValue attempts to cast the object to the LINQ to SQL private interface IProvider. This succeeds because the function CanCastTo on the ProviderProxy returns true, allowing the proxy to be cast to any type. After that, all interface calls on this object are rerouted to the Invoke method. The implementation of Invoke simply calls the DataContext back, invoking methods with similar names. These are left empty for you to override in your own derivation of ExtensibleDataContext.
namespace MocksNix
public class MyDataContext : ExtensibleDataContext
static MappingSource mapping = new AttributeMappingSource();
public MyDataContext()
public Table<Customer> Customers
get { return this.GetTable<Customer>(); }
protected internal override IExecuteResult ExecuteImpl(System.Linq.Expressions.Expression query)
this.Log.WriteLine("executing query: {0}", query);
return new ExecuteResult(new Customer[] { });
public class Customer
[Column(IsPrimaryKey = true)]
public string CustomerId;
[Column]
public string ContactName;
class Program
static void Main(string[] args)
MyDataContext dc = new MyDataContext();
var query = from c in dc.Customers where c.CustomerId == "X" select c;
var list = query.ToList();
Now, I can use the ExtensibleDataContext in a small test program. I create my own MyDataContext that implements ExecuteImpl(). This method gets called whenever a query needs to be executed. Instead of executing the query, I write out a simple message and return an empty collection.
That’s it. Now take this bit of code and go forth and prosper.
DISCLAIMER: Overriding internal implementation details is not a practice recommend or supported by Microsoft. Implementation details are subject to change without warning.
But who cares!
Go on, mock LINQ to SQL all you want.
How does this work with generated code? Or are you suggesting that you would generate the code from the dbml and then replace DataContext with this ExtensibleDataContext?
I often get asked how LINQ to SQL is supposed to be used with Test Driven Design (TDD). Okay, not really.
Another solution would be to use Typemock Isolator at. There's a free community edition you can play with.
This allows you to mock almost anything including private and static methods. I've been using it for the last month on a C# application that was NOT designed to be unit tested. It's allowed me to test stuff that was impossible to test before without spinning up huge amounts of "state" just to test a simple thing.
This is a great hack and well worth the read. I believe it is the basis for the other mock frameworks like Rhino Mocks and MoQ.
This post is a confluence of two distinct sets of comments I got: The above-mentioned feature is a well-hidden
Matt.. I'll say it: You craaazy, man.
By the way, what *are* the security principle requirements for this trick to work? Should we expect it to work in a web hosting scenario?
Keith, I'm using reflection to overwrite a private field. You do the math.
Here's what I hope we get in V2 of Linq to SQL:
Full interfaces
Full Providers
A single file for each class that we can override and the designer doesn't screw with. It messes with Attributes and makes sure the stubs are right, and leaves the contents alone after it's been first implimented.
True OOP so that we can inherit classes and have the designer use them instead of the code gen crap.
True incremental change management instead of the entire file being ovewritten all of the time because of the designer.
Change Management at runtime for disconnected states (i.e. the object should store it's own changes and then deal with the context when it's added/updated to tell it what to update.
It should know which objects are modified and which aren't when added so that you don't have to just force it to update something that really hasn't changed.
Matt Warren has an "interesting" post on how you could hack your way to a mocked implementation of LINQ...
@Matt: Yeah, that's what I thought. Just wanted to be sure about whether it was Cthulu or Leviathon you were invoking :)
."
That sounds scary. As a customer, I think I want to know why. How can I find out? Who should I ask, and should I expect an honest answer if the reason is not technical?
Product decisions are made all the time that are not based on technical reasons. Often times it has to do with the balance of resources that can be put on the problem.
What are the services we get for free from Linq to Sql by using this approach?
I'm guessing I have to translate Expression(s) to DbCommand(s) and DbDatareader(s) to Object(s), but what about Object Tracking?
If you are 'mocking' the DataContext you are probably going to want to have it produce specific results depending on the cases you are testing and so you probably won't need to interpret the queries. I can imagine more advanced scenarios, however, writing code to handle these quickly becomes as complicated as writing a provider with its own query translator.
I think you're going about this the wrong way. Instead of trying to mock the DataContext directly, why not hide it behind an abstraction? Then you can easilly mock it, use it for dependency injection etc:
I made an attempt at implementing IUpdatable on the current (i.e. VS 2008 Sp1 B1) bits of ADO.NET Data...
While I have finished my series on LINQ to SQL I wanted to talk about some of the reaction. In his summary
Navigate back to Part 2 of this series of entries. Ok, ok, ok, the other two parts where lean on the...
The Automated Testing Continuum - Part 2 (Unit Testing LinQ)
Interesting blog post about it . And some related information on Stackoverflow posts . The basic gist appears to be comments made on the ado.net blog that state the Entity Framework is the only thing getting major developer time for Visual Studio 2010
|
http://blogs.msdn.com/mattwar/archive/2008/05/04/mocks-nix-an-extensible-linq-to-sql-datacontext.aspx
|
crawl-002
|
en
|
refinedweb
|
() {
QueryPerformanceFrequency(&li);
//)
address2 = std::move(myObj.address2);
#endif
};
int main()
perf_startup();
int total_copy_move_count = 0;
int total_vector_resizes = 0;
double total_pushback_time = 0.0;
std::vector<myUserObject> vec;
myUserObject obj;
obj.name = "Stephan T. Lavavej Stephan T. Lavavej Stephan T. Lavavej";
obj.telephone = "314159265 314159265 314159265 314159265 314159265";
obj.address = "127.0.0.0 127.0.0.0 127.0.0.0 127.0.0.0 127.0.0.0 127.0.0.0";
obj.name2 = "Mohammad Usman. Mohammad Usman. Mohammad Usman. ";
obj.telephone2 = "1234567890 1234567890 1234567890 1234567890 1234567890";
obj.address2 = "Republik Of mancunia. Republik Of mancunia Republik Of mancunia";
const long long start = counter();
for (int i = 0; i<1050000; ++i)
{
size_t capacity = vec.capacity();
size_t size = vec.size();
vec.push_back(obj);
if (capacity == size) // indication that vector reallocation has occurred.
{
++total_vector_resizes;
total_copy_move_count += size;
}
const long long finish = counter();
total_pushback_time = (finish-start)*1.0 / frequency();
// print result
std::cout << std::endl;
std::cout << std::fixed;
std::cout << "Total Pushback time: \t" << total_pushback_time << " sec" << std::endl;
std::cout << "Total number of vector reallocations: \t" << total_vector_resizes << std::endl;
std::cout << "Total number of objects copied/moved: \t" << total_copy_move_count << std::endl;
VS2008:
c:\TEMP\STLPerf>cl /EHsc /nologo /W4 /O2 /GL /D_SECURE_SCL=0 rvalue.cpp
rvalue.cpp
Generating code
Finished generating code
c:\TEMP\STLPerf> rvalue.exe
Total Pushback time: 8.309241 sec
Total number of vector reallocations: 36
Total number of objects copied/moved: 3149622
VS2010:
Total Pushback time: 2.335225 secTotal number of vector reallocations: 36Total number of objects copied/moved: 3149622
As you can see that in this specific scenario we are getting more than 3 times performance boost. This gain is due to the fact that the move machinery is used during vector reallocations, which is cheaper as compared to the copy machinery (copy construction and copy assignment). And the bigger your object is and the larger your vector is, the more performance gain you will get. So those of you who use vectors on a big scale, this is something you will surely love. Vector Reallocation (STL Types)
Now the interesting part is that similar to the sample code above, we have implemented move semantics (move constructors and move assignment operators) for our STL types in VS2010. It is worth mentioning here that for some types, in order to improve performance especially in situations where the copy was taking a lot of time such as vector reallocation, we used a trick called “swaptimization” before VS2010. (see pt.#16 in this blog post) Now with the arrival of rvalue references we don’t need that anymore and we have replaced those swaptimization tricks with the rvalue reference machinery and, guess what, we are now even faster. The following example shows one such scenario where we used swaptimization earlier.
long long total_copy_move_count = 0;
long long total_vector_resizes = 0;
std::vector<std::string> v_str;
long long start = counter();
for (int i = 0; i<12000000; ++i)
size_t size = v_str.size();
size_t capacity = v_str.capacity();
v_str.push_back("I think, therefore I am");
if (size == capacity) // indication that vector reallocation has occurred.
{
total_copy_move_count += size;
}
}
long long finish = counter();
total_pushback_time = ((finish - start)*1.0 / frequency());
std::cout << std::endl;
c:\TEMP\STLPerf>cl /EHsc /nologo /W4 /O2 /GL /D_SECURE_SCL=0 swaptimization.cpp
swaptimization.cpp
c:\TEMP\STLPerf> swaptimization.exe
Total Pushback time: 6.717094 sec
Total number of vector reallocations: 42
Total number of objects copied/moved: 35875989
Total Pushback time: 3.780286 sec
You can see that moving to rvalue references hasn’t impacted the performance. In fact, in this particular example, we are almost twice as fast as VS2008. The credit for this gain also goes to rvalue references, not specifically during vector reallocation, but during push_back() instead. As you can see that we are passing a literal string (an rvalue- see footnote) to the function. So the move machinery comes into play here again, giving us another performance boost.
This is all what I have to share in my blog. If you have any questions about performance of STL or in general about STL, please feel free to write to me at: Mohammad.Usman at microsoft.com.
Thanks!
-Usman
* String literals are actually lvalues (C++03 5.1/2), whereas all other literals are rvalues. However, since we’ve got a vector<string>, push_back() takes const string&. This constructs a temporary std::string, which is an rvalue. That’s where move semantics comes into play.
Bog
David LeBlanc and Michael Howard literally wrote the book on “Writing Secure Code”. David has also developed the SafeInt class, a template class that performs “checked” operations on integer types (with a name like SafeInt what else would you expect). In VS2010 we decided to ship the SafeInt class in the box, so users no longer need to go and download it separately and “install” it themselves. Ale Contenti was the main instigator behind this decision and now that VS2010 Beat 1 has shipped, Ale and David meet with Charles from Channel 9 to talk about the class and it uses. We hope you enjoy the video and feel free to post any questions/feedback on the Channel 9 site.
Hello.
The!!!
Hi,
My name is Bogdan Mihalcea and I’m a developer on the C++ Project & Build team. In the last 2 years I worked on a new C++ project system build on top MSBuild.
I’m writing this blog to share some excellent news related to improvements we made to the performance of project conversion since the Beta1 build. This was possible because of the feedback we got from you, and I want to thank you for doing so!
During our development milestones, we had tests reporting that we have a very slow conversion for specific types of projects. They were usually containing many files (1000+) and they had a significant percentage of them containing file level configurations. After we analyzed the root cause and the costs of fixing, we made the assumption that these projects with many file level configurations were not very common, so the impact of the performance in this scenario would be low. Another assumption we made was that the conversion it is a onetime thing so the impact even if big for those rare cases it is a onetime tax. Based on these assumptions we prioritized the fix lower and decided to invest our time in making the performance of main line scenarios better.
Our team has a customer plan which includes the early release of the product through Beta1 and having various face to face meetings with customers. One of these events is the Visual C++ DevLab, which last time we kept in May 09. During this event we noticed that 20% of the customers were experiencing a very slow conversion.
We investigated and we realized that all cases were caused by less than 5% of the projects in the solution, which took the majority of conversion time. One common detail about those few projects is that they were very heavy on file level configurations. From the discussions with the customers we realized that it is unfortunately easy to get in that state due to various reasons (evolutionary codebases, easy to cause human error due to bulk select, bugs in previous conversions, not a easy bulk way to revert the file level configurations to project level).
That pretty much invalidated our assumption that it is not common to get in a situation where a customer can have a project with 1000+ files and more than half have file level configurations. The first assumption was also invalidated from the discussions we had with the customers and from feedback we received from Beta1’s customers, which revealed that even if the projects that have this case are not very common the unfortunate thing is that it is common for customers with hundreds of projects have 1 or 2 of these and that will impact the whole experience of the conversion for a much bigger percentage of the customers, than what we estimated earlier.
Furthermore the second assumption was challenged as well as we learnt from your feedback that conversion might not be a onetime tax as most of the companies prefer to stay for months in a dual state with the development done in parallel on both products and it is common to maintain only the previous version files as a baseline for changes and to reconvert always when the new product is used. Considering the new feature we added in Dev10, native multitargeting, this seems to make this approach even more popular.
After we got all this feedback we acted on it. We analyzed the code paths and we found places where we could optimize for this type of scenarios and made the fix. Below is an extract from our measurements before and after the fix.
Project
Files/vcproj
Configurations
% files in vcproj with FileConfigurations
Previous Conversion Time
Current Conversion Time
Customer
~4000
4
50%
55 min
8 sec
MVP
~5000
8
80%
1h 40min
23 sec
PerfLab test
~200
2
less than 10%
2.8s
1.4s
~200
16
100%
50 min
10 sec
The common case projects (PerfLab test), with only few file level configurations, were impacted too as you can see from the above table. We used pristine lab machines dedicated to performance runs and the results are specific numbers to these machines.
We are constantly on lookout for feedback and we are adjusting our priorities based on it, and we will try to keep you guys up to date with the progress we make.
Regards,
Bogdan Mihalcea
C++ Project & Build
Thanks.
Hello, I’m Mitchell Slep, a developer on the Visual C++ compiler team. I’m very excited to tell you about a new feature in Visual Studio 2010 - C++ IntelliSense can now display compiler-quality syntax and semantic errors as you browse and edit your code!
We display a wavy underline or “squiggle” in your code at the location of the error. Hovering over the squiggle will show you the error message. We also show the error in the Error List window. The squiggles are great for showing you errors in the code you are currently viewing or editing, whereas the Error List window can help you find problems elsewhere in the translation unit, all without doing a build.
We had two scenarios in mind when designing this feature. One is of course productivity – it’s very convenient to be able to fix errors as they happen instead of waiting to discover them after a build, which can save you a lot of time. We also wanted to improve the experience when IntelliSense doesn’t work. IntelliSense has always been a black box – it often worked well, but if it didn’t, you had no idea why. Now IntelliSense has a powerful feedback mechanism that allows you take corrective action – either by fixing errors in your code or making sure your project is configured correctly.
One decision we had to make when designing this feature was how often to update the errors as you edit your code. If we don’t do it often enough, the errors quickly become out of date and irrelevant. But doing it too often can also lead to irrelevant results, like a squiggle under ‘vect’ while you’re in the middle of typing ‘vector’! We also don’t want to hog your CPU with constant background parsing.
We found that a good balance was to wait for 1 second of idle time after you edit or navigate to a new part of the code before beginning to update the errors. In this case ‘idle’ means that you haven’t typed anything and you haven’t navigated to a different part of the code.
We also experimented with some different designs for what to do with existing errors during the short window of time between when you make an edit and when the newly updated errors are available. For instance, one design we tried was to clear all squiggles on the screen immediately after an edit, and then redraw the new errors when they are available. We also considered a variant of this where we only clear the squiggles on the lines below the location of your edit (since making an edit can generally only affect code appearing after it). These designs have the advantage that you never see a stale squiggle, but in usability studies we found that this produced an annoying flickering affect, and also some confusion as to whether a squiggle disappeared because it was fixed or because it was just temporarily being updated. The design we went with was to leave existing squiggles in place after an edit, and then swap them with the new errors when they are available. This works well since the updates are very fast.
One of the technical challenges with this feature was making it fast. As many of you know, large C++ projects can often take several hours to build. One of the ways we get around this is by having IntelliSense focus on a single translation unit at a time (a translation unit is a .cpp file plus all of its included headers). However, even that didn’t give us the kind of responsiveness we wanted for a live-compilation feature like squiggles.
To get even better performance, we’ve developed some innovate incremental parsing techniques that minimize the amount of code we need to parse. This allows IntelliSense to parse your code much faster than the time it would take to do an actual build (or even the time it would take to compile a single .cpp file). The ideas are simple, but are challenging to implement in a complex, context-sensitive language like C++.
When you first open a file, we parse just enough code to build up a global symbol table, skipping over a lot of code (like function-bodies) that only introduces local symbols. Once we’ve built up the symbol table, we lazily parse the code that we skipped “on-demand”. For instance, we only parse the inside of a function body when you actually view it on the screen. If you make changes inside a function body, we are able to reparse just that function body. Of course, all of this only happens during idle time, as described above. These parsing techniques allow us to show you fast, relevant errors even as you edit large, complex code bases.
If your solution already builds with Visual Studio, you will immediately benefit from having accurate syntax and semantic errors reported by IntelliSense as you browse and edit your code. But this feature is also good news for those of you with external build systems. The IntelliSense errors in the Error List window can guide you towards setting up a solution with accurate IntelliSense. For instance, if you load up a solution configured for an external build system, you might see something like this:
Now you know that you need to adjust your Include Path. Making these tweaks to your solution will dramatically improve the quality of IntelliSense you get with an external build system.
It’s been a lot of fun working on this feature and especially dogfooding it – it’s great not to have to do builds all the time! You can preview this feature in Beta 1 and I look forward to hearing your feedback in the comments.
The!
Hi, my name is Boris Jabes. I've been working on the C++ team for over 4 years now (you may have come across my blog, which has gone stale...). Over the past couple of years, the bulk of my time has been spent on re-designing our IDE infrastructure so that we may provide rich functionality for massive code bases. Our goal is to enable developers of large applications that span many millions lines of code to work seamlessly in Visual C++. My colleagues Jim and Mark have already published a number of posts (here, here and here) about this project and with the release of Visual Studio 2010 Beta 1 this week; we're ready to say a lot more. Over the next few weeks, we will highlight some of the coolest features and also delve into some of our design and engineering efforts.
In this post, I want to provide some additional details on how we built some of the core pieces of the C++ language service, which powers features like Intellisense and symbol browsing. I will recap some of the information in the posts I linked to above but I highly recommend reading the posts as they provide a ton of useful detail.
Without going into too much detail, the issue we set about to solve in this release was that of providing rich Intellisense and all of the associated features (e.g. Class View) without sacrificing responsiveness at very high scale. Our previous architecture involved two (in)famous components: FEACP and the NCB. While these were a great way to handle our needs 10+ years ago, we weren’t able to scale these up while also improving the quality of results. Multiple forces were pulling us in directions that neither of these components could handle.
1. Language Improvements. The C++ language grew in complexity and this meant constant changes in many places to make sure each piece was able to grok new concepts (e.g. adding support for templates was a daunting task).
2. Accuracy & Correctness. We need to improve accuracy in the face of this complexity (e.g. VS2005/2008 often gets confused by what we call the “multi-mod” problem in which a header is included differently by different files in a solution).
3. Richer Functionality. There has been a ton of innovation in the world of IDEs and it’s essential that we unlock the potential of the IDE for C++ developers.
4. Scale. The size of ISV source bases has grown to exceed 10+ million lines of code. Arguably the most common (and vocal!) piece of feedback we received about VS2005 was the endless and constant reparsing of the NCB file (this reparsing happened whenever a header was edited or when a configuration changed).
Thus, the first step for us in this project was to come up with a design that would help us achieve these goals.
Our first design decision involved both accuracy and scalability. We needed to decouple the Intellisense operations that require precise compilation information (e.g. getting parameter help for a function in the open cpp file) from the features that require large-scale indexes (e.g. jumping to a random symbol or listing all classes in a project). The architecture of VS2005 melds these two in the NCB and in the process lost precision and caused constant reparsing, which simply killed any hope of scaling. We thus wanted to transition to a picture like this (simplified):
At this point, we needed to fill in the blanks and decide how these components should be implemented. For the database, we wanted a solution that could scale (obviously) and that would also provide flexibility and consistency. Our existing format, the NCB file, was difficult to modify when new constructs were added (e.g. templates) and the file itself could get corrupted leading our users to delete it periodically if things weren’t working properly in the IDE. We did some research in this area and decided to use SQL Server Compact Edition, which is an in-process, file-oriented database that gives us many of the comforts of working with a SQL database. One of the great things of using something like this is that gave us real indexes and a customizable and constant memory footprint. The NCB on the other hand contained no indexes and was mapped into memory.
Finally, we needed to re-invent our parsers. We quickly realized that the only reasonable solution for scalability was to populate our database incrementally. While this seems obvious at first, it goes against the basic compilation mechanism of C++ in which a small change to a header file can change the meaning of every source file that follows, and indeed every source file in a solution. We wanted to create an IDE where changing a single file did not require reparsing large swaths of a solution, thus causing churn in the database and even possibly locking up the UI (e.g. in the case of loading wizards). We needed a parser that could parse C++ files in isolation, without regard to the context in which they were included. Although C++ is a “context sensitive” language in the strongest sense of the word, we were able to write a “context-free” parser for it that uses heuristics to parse C++ declarations with a high degree of accuracy. We named this our “tag” parser, after a similar parser that was written for good old C code long ago. We decided to build something fresh in this case as this parser was quite different than a regular C++ parser in its operation, is nearly stand-alone, and involved a lot of innovative ideas. In the future, we’ll talk a bit more about how this parser works and the unique value it provides.
With the core issue of scalability solved, we still needed to build an infrastructure that could provide highly accurate Intellisense information. To do this, we decided to parse the full “translation unit” (TU) for each open file in the IDE editor* in order to understand the semantics of the code (e.g. getting overload resolution right). Building TUs scales well – in fact, the larger the solution, the smaller the TU is as a percentage of the solution size. Finally, building TUs allows us to leverage precompiled header technology, thus drastically reducing TU build times. Using TUs as the basis for Intellisense would yield highly responsive results even in the largest solutions.
Our requirements were clear but the task was significant. We needed rich information about the translation unit in the form of a high-level representation (e.g. AST) and we needed it available while the user was working with the file. We investigated improving on FEACP to achieve this goal but FEACP was a derivation of our compiler, which was not designed for this in mind (see Mark’s post for details). We investigated building a complete compiler front-end designed for this very purpose but this seemed like an ineffective use of our resources. In the 1980s and 1990s, a compiler front-end was cutting-edge technology that every vendor invested in directly but today, innovation lies within providing rich value on top of the compiler. As a result there has been a multiplication of clients for a front-end beyond code generation and we see this trend across all languages: from semantic colorization and Intellisense to refactoring and static analysis. As we wanted to focus on improving the IDE experience, we identified a third and final option: licensing a front-end component for the purposes of the IDE. While this may seem counter-intuitive, it fit well within our design goals for the product. We wanted to spend more resources on the IDE, focusing on scale and richer functionality and we knew of a state-of-the-art component built by the Edison Design Group (commonly referred to as EDG). The EDG front-end fit the bill as it provides a high-level representation chock-full of the information we wanted to build upon to provide insight in the IDE. The bonus is that already handles all of the world’s gnarly C++ code and their team is first in line to keep up with the language standard.
With all these pieces in place, we have been able to build some great new end-to-end functionality in the IDE, which we’ll highlight over the coming weeks. Here’s a sneak peek at one we’ll talk about next week: live error reporting in the editor.
* - We optimize by servicing as many open files as possible with a single translation unit..
Visual Studio 2010 Beta 1 is now available for download. I've recently blogged about how Visual C++ in VS 2010 Beta 1, which I refer to as VC10 Beta 1, contains compiler support for five C++0x core language features: lambdas, auto, static_assert, rvalue references, and decltype. It also contains a substantially rewritten implementation of the C++ Standard Library, supporting many C++0x standard library features. In the near future, I'll blog about them in Part 4 and beyond of "C++0x Features in VC10", but today I'm going to talk about the STL changes that have the potential to break existing code, which you'll probably want to know about before playing with the C++0x goodies.
Problem 1: error C3861: 'back_inserter': identifier not found
This program compiles and runs cleanly with VC9 SP1:
C:\Temp>type back_inserter.cpp
#include <algorithm>
using namespace std;
int square(const int n) {
return n * n;
int main() {
vector<int> v;
v.push_back(11);
v.push_back(22);
v.push_back(33);
vector<int> dest;
transform(v.begin(), v.end(), back_inserter(dest), square);
for (vector<int>::const_iterator i = dest.begin(); i != dest.end(); ++i) {
cout << *i << endl;
}
C:\Temp>cl /EHsc /nologo /W4 back_inserter.cpp
back_inserter.cpp
C:\Temp>back_inserter
121
484
1089
But it fails to compile with VC10 Beta 1:
back_inserter.cpp(19) : error C3861: 'back_inserter': identifier not found
What's wrong?
Solution: #include <iterator>
The problem was that back_inserter() was used without including <iterator>. The C++ Standard Library headers include one another in unspecified ways. "Unspecified" means that the Standard allows but doesn't require any header X to include any header Y. Furthermore, implementations (like Visual C++) aren't required to document what they do, and are allowed to change what they do from version to version (or according to the phase of the moon, or anything else). That's what happened here. In VC9 SP1, including <algorithm> dragged in <iterator>. In VC10 Beta 1, <algorithm> doesn't drag in <iterator>.
When you use a C++ Standard Library component, you should be careful to include its header (i.e. the header that the Standard says it's supposed to live in). This makes your code portable and immune to implementation changes like this one.
There are probably more places where headers have stopped dragging in other headers, but <iterator> is overwhelmingly the most popular header that people have forgotten to include.
Note: Range Insertion and Range Construction
By the way, when seq is a vector, deque, or list, instead of writing this:
copy(first, last, back_inserter(seq)); // Bad!
You should write this:
seq.insert(seq.end(), first, last); // Range Insertion - Good!
Or, if you're constructing seq, simply write this:
vector<T> seq(first, last); // Range Construction - Good!
They're not only slightly less typing, they're also significantly more efficient. copy()-to-back_inserter() calls push_back() repeatedly, which can trigger multiple vector reallocations. Given forward or better iterators, range insertion and range construction can just count how many elements you've got, and allocate enough space for all of them all at once. This is also more efficient for deque, and you may as well do it for list too.
Problem 2: error C2664: 'std::vector<_Ty>::_Inside' : cannot convert parameter 1 from 'IUnknown **' to 'const ATL::CComPtr<T> *'
C:\Temp>type vector_ccomptr.cpp
#include <atlcomcli.h>
vector<CComPtr<IUnknown>> v;
v.push_back(NULL);
C:\Temp>cl /EHsc /nologo /W4 vector_ccomptr.cpp
vector_ccomptr.cpp
C:\Temp>vector_ccomptr
C:\Temp>
C:\Program Files\Microsoft Visual Studio 10.0\VC\INCLUDE\vector(623) : error C2664: 'std::vector<_Ty>::_Inside' : cannot convert parameter 1 from 'IUnknown **' to 'const ATL::CComPtr<T> *'
with
[
_Ty=ATL::CComPtr<IUnknown>
]
and
T=IUnknown
Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast
C:\Program Files\Microsoft Visual Studio 10.0\VC\INCLUDE\vector(622) : while compiling class template member function 'void std::vector<_Ty>::push_back(_Ty &&)'
_Ty=ATL::CComPtr<IUnknown>
vector_ccomptr.cpp(9) : see reference to class template instantiation 'std::vector<_Ty>' being compiled
C:\Program Files\Microsoft Visual Studio 10.0\VC\INCLUDE\vector(625) : error C2040: '-' : 'IUnknown **' differs in levels of indirection from 'ATL::CComPtr<T> *'
Solution: Use CAdapt
The Standard containers prohibit their elements from overloading the address-of operator. CComPtr overloads the address-of operator. Therefore, vector<CComPtr<T>> is forbidden (it triggers undefined behavior). It happened to work in VC9 SP1, but it doesn't in VC10 Beta 1. That's because vector now uses the address-of operator in push_back(), among other places.
The solution is to use <atlcomcli.h>'s CAdapt, whose only purpose in life is to wrap address-of-overloading types for consumption by Standard containers. vector<CAdapt<CComPtr<T>>> will compile just fine. In VC10 Beta 1, I added operator->() to CAdapt, allowing v[i]->Something() to compile unchanged. However, typically you'll have to make a few other changes when adding CAdapt to your program. operator.() can't be overloaded, so if you're calling CComPtr's member functions like Release(), you'll need to go through CAdapt's public data member m_T . For example, v[i].Release() needs to be transformed into v[i].m_T.Release() . Also, if you're relying on implicit conversions, CAdapt adds an extra layer, which will interfere with them. Therefore, you may need to explicitly convert things when pushing them back into the vector.
Problem 3: error C2662: 'NamedNumber::change_name' : cannot convert 'this' pointer from 'const NamedNumber' to 'NamedNumber &'
C:\Temp>type std_set.cpp
#include <set>
class NamedNumber {
NamedNumber(const string& s, const int n)
: m_name(s), m_num(n) { }
bool operator<(const NamedNumber& other) const {
return m_num < other.m_num;
string name() const {
return m_name;
int num() const {
return m_num;
void change_name(const string& s) {
m_name = s;
private:
string m_name;
int m_num;
void print(const set<NamedNumber>& s) {
for (set<NamedNumber>::const_iterator i = s.begin(); i != s.end(); ++i) {
cout << i->name() << ", " << i->num() << endl;
set<NamedNumber> s;
s.insert(NamedNumber("Hardy", 1729));
s.insert(NamedNumber("Fermat", 65537));
s.insert(NamedNumber("Sophie Germain", 89));
print(s);
cout << "--" << endl;
set<NamedNumber>::iterator i = s.find(NamedNumber("Whatever", 1729));
if (i == s.end()) {
cout << "OH NO" << endl;
} else {
i->change_name("Ramanujan");
C:\Temp>cl /EHsc /nologo /W4 std_set.cpp
std_set.cpp
C:\Temp>std_set
Sophie Germain, 89
Hardy, 1729
Fermat, 65537
--
Ramanujan, 1729
std_set.cpp(55) : error C2662: 'NamedNumber::change_name' : cannot convert 'this' pointer from 'const NamedNumber' to 'NamedNumber &'
Conversion loses qualifiers
Solution: Respect set Immutability
The problem is modifying set/multiset elements.
In C++98/03, you could get away with modifying set/multiset elements as long as you didn't change their ordering. (Actually changing their ordering is definitely crashtrocity, breaking the data structure's invariants.)
C++0x rightly decided that this was really dangerous and wrong. Instead, it flat-out says that "Keys in an associative container are immutable" (N2857 23.2.4/5) and "For [set and multiset], both iterator and const_iterator are constant iterators" (/6).
VC10 Beta 1 enforces the C++0x rules.
There are many alternatives to modifying set/multiset elements.
· You can use map/multimap, separating the immutable key and modifiable value parts.
· You can copy, modify, erase(), and re-insert() elements. (Keep exception safety and iterator invalidation in mind.)
· You can use set/multiset<shared_ptr<T>, comparator>, being careful to preserve the ordering and proving once again that anything can be solved with an extra layer of indirection.
· You can use mutable members (weird) or const_cast (evil), being careful to preserve the ordering. I strongly recommend against this.
I should probably mention, before someone else discovers it, that in VC10 Beta 1 we've got a macro called _HAS_IMMUTABLE_SETS . Defining it to 0 project-wide will prevent this C++0x rule from being enforced. However, I should also mention that _HAS_IMMUTABLE_SETS is going to be removed after Beta 1. You can use it as a temporary workaround, but not as a permanent one.
Problem 4: Specializing stdext::hash_compare
If you've used the non-Standard <hash_set> or <hash_map> and specialized stdext::hash_compare for your own types, this won't work anymore, because we've moved it to namespace std. <hash_set> and <hash_map> are still non-Standard, but putting them in namespace stdext wasn't accomplishing very much.
Solution: Use <unordered_set> or <unordered_map>
TR1/C++0x <unordered_set> and <unordered_map> are powered by the same machinery as <hash_set> and <hash_map>, but the unordered containers have a superior modern interface. In particular, providing hash and equality functors is easier.
If you still want to use <hash_set> and <hash_map>, you can specialize std::hash_compare, which is where it now lives. Or you can provide your own traits class.
By the way, for those specializing TR1/C++0x components, you should be aware that they still live in std::tr1 and are dragged into std with using-declarations. Eventually (after VC10) this will change.
This isn't an exhaustive list, but these are the most common issues that we've encountered. Now that you know about them, your upgrading experience should be more pleasant.
Stephan T. Lavavej
Visual C++ Libraries Developer
Hello, my name is Xiang Fan and I am a developer on the C++ Shanghai team.
Today I’d like to talk about two linker options related to security: /DYNAMICBASE and /NXCOMPAT.
These two options are introduced in VS2005, and target to improve the overall security of native applications.
You can set these two options explicitly in VS IDE:
These two options have three available values in the IDE: On, Off and Default.
They are set to “On” if you create native C++ application using VS2008 wizard.
When VS2008 upgrades projects created by older version of VC which doesn’t support these options, it will set them to “Off” after upgrade.
If you set them to “Default”, linker will treat it as “Off”.
After several years adoption, we plan to change the behavior of “Default” to “On” in VS2010 to reinforce the security. And we’d like to get your feedback.
Here are the detailed information about these two options:
1. DYNAMICBASE
/DYNAMICBASE modifies the header of an executable to indicate whether the application should be randomly rebased at load time by the OS. The random rebase is well known as ASLR (Address space layout randomization).
This option also implies “/FIXED:NO”, which will generate a relocation section in the executable. See /FIXED for more information.
In VS2008, this option is on by default if a component requires Windows Vista (/SUBSYSTEM 6.0 and greater)
/DYNAMICBASE:NO can be used to explicitly disable the random rebase.
This article talks about ASLR:
ASLR is supported only on Windows Vista and later operating systems. It will be ignored on older OS.
ASLR is transparent to the application. With ASLR, the only difference is OS will rebase the executable unconditionally instead of doing it only when an image base conflict exists.
2. NXCOMPAT
/NXCOMPAT is used to specify an executable as compatible with DEP (Data Execution Prevention)
Notice that, this option applies for x86 executable only. Non-x86 architecture versions of desktop Windows (e.g. x64 and IA64) always enforce DEP if the executable is not running in WOW64 mode.
Here is a comprehensive description of DEP:
This option is on by default if a component requires Windows Vista (/SUBSYSTEM 6.0 and greater).
/NXCOMPAT:NO can be used to explicitly specify an executable as not compatible with DEP.
However, the administrator can still enable the DEP even if the executable is not specified as compatible with DEP. So you should always test your application with DEP on.
Windows Vista SP1, Windows XP SP3 and Windows Server 2008 add a new API SetProcessDEPPolicy to allow the developer to set DEP on their process at runtime rather than using linker options. See the following link for more details:
There are several common (and incomplete) patterns which are not compatible with DEP (See for more information.)
a. Dynamic code generated in heap or new, malloc and HeapAlloc functions are non-executable. An application can use the VirtualAlloc function to allocate executable memory with the appropriate memory protection options.
Another option is to pass HEAP_CREATE_ENABLE_EXECUTE when create the heap via HeapCreate. Then the memory allocated by the subsequent HeapAlloc will be executable.
b. Executable code in data section
They should be migrated to a code section
Security vulnerabilities are more exploitable than they would be if DEP were enabled. So you should always make your application DEP compatible and turn DEP on.
The following sample demonstrates the code which is not compatible with DEP.
It also shows two DEP compatible ways to run the code on heap.
Running code in data section or on stack almost always implies security holes. You have to put the code in code section or heap instead.
#include "windows.h"
#include <cstdio>
typedef void (*funType)();
unsigned char gCode[] = {0xC3}; // ”ret” instruction on x86
const size_t gCodeSize = sizeof(gCode);
// these are not DEP compatible
void RunCodeOnHeap()
unsigned char *code = new unsigned char[gCodeSize];
memcpy(code, gCode, gCodeSize);
funType fun = reinterpret_cast<funType>(code);
fun();
delete []code;
// these are DEP compatible
void RunCodeOnHeapCompatible1()
unsigned char *code = (unsigned char *)::VirtualAlloc(NULL, gCodeSize, MEM_COMMIT, PAGE_READWRITE);
DWORD flOldProtect;
::VirtualProtect(code, gCodeSize, PAGE_EXECUTE_READ, &flOldProtect);
::VirtualFree(code, 0, MEM_RELEASE);
void RunCodeOnHeapCompatible2()
HANDLE hheap = ::HeapCreate(HEAP_CREATE_ENABLE_EXECUTE, 0, 0);
unsigned char *code = (unsigned char *)::HeapAlloc(hheap, 0, gCodeSize);
::HeapFree(hheap, 0, code);
::HeapDestroy(hheap);
INT DEPExceptionFilter(LPEXCEPTION_POINTERS lpInfo)
// please check
// for more information
if (lpInfo->ExceptionRecord->ExceptionCode == STATUS_ACCESS_VIOLATION &&
lpInfo->ExceptionRecord->ExceptionInformation[0] == 8) {
return EXCEPTION_EXECUTE_HANDLER;
return EXCEPTION_CONTINUE_SEARCH;
__try
{
RunCodeOnHeap();
printf("RunCodeOnHeap: OK\n");
__except (DEPExceptionFilter(GetExceptionInformation()))
printf("RunCodeOnHeap: Fail due to DEP\n");
RunCodeOnHeapCompatible1();
printf("RunCodeOnHeapCompatible1: OK\n");
printf("RunCodeOnHeapCompatible1: Fail due to DEP\n");
RunCodeOnHeapCompatible2();
printf("RunCodeOnHeapCompatible2: OK\n");
printf("RunCodeOnHeapCompatible2: Fail due to DEP\n");
Output:
cl test.cpp /link /nxcompat:no
RunCodeOnHeap: OK
RunCodeOnHeapCompatible1: OK
RunCodeOnHeapCompatible2: OK
cl test.cpp /link /nxcompat
RunCodeOnHeap: Fail due to DEP
In summary, “cl test.cpp” is equivalent to “cl test.cpp /link /nxcompat:no /dynamicbase:no” before VS2010. We plan to change it to “cl test.cpp /link /nxcompat /dynamicbase” in VS2010.
If you have any concerns about the default behavior change of these two options, don’t hesitate to give your feedback. Thanks!
Xiang:
For each project that you want to target at the RC version of the Windows 7 SDK do the following:Built:
|
http://blogs.msdn.com/vcblog/
|
crawl-002
|
en
|
refinedweb
|
XFireServlet
The core of the HTTP Transport takes place in the XFireServletController. Your own servlets can delegate appropriate requests to this class or you can use one of XFire's internal servlet classes. The XFireServlet is just a thin wrapper for the controller. The XFireServletController provides an xml configuration layer on top of this.
XFire also provides the XFireConfigurableServlet which reads the services.xml format automatically for you and the XFireSpringServlet which provides Spring integration.
HttpServletRequest/HttpServletResponse
The HttpServletRequest/HttpServletResponse can be accessed via the XFireServletController.
HttpServletRequest request = XFireServletController.getRequest(); HttpServletResponse response = XFireServletController.getResponse();
This method will work all the XFire servlets (XFireServlet, XFireConfigurableServlet, XFireSpringServlet).
Client Authentication
The Apache Jakarta HttpClient is used under the covers to provide HTTP client support. There are two ways which you can override the HttpClient settings:
1. You can set the USERNAME/PASSWORD
// Create your client Client client = ....; // Or get it from your proxy Client client = Client.getInstance(myClientProxy); client.setProperty(Channel.USERNAME, "username"); client.setProperty(Channel.PASSWORD, "pass");
2. You can supply your own HttpClientParms
client.setProperty(CommonsHttpMessageSender.HTTP_CLIENT_PARAMS, myParams);
The HTTPClient javadocs provide information on how to configure the HttpClientParams.
Client connecting to SSL Server via HTTPS
If your webservice is on a HTTPS URL then transport-layer (as opposed to message layer) encryption via SSL will be used. (See your web container's documentation, e.g. Tomcat's, on how to enable SSL on it; this section describes how to connect to such a server, not how to set up that server.)
If the SSL certificate of the server is "CArtel" signed (i.e. issued by Verisign, Thawte, etc.) all is well, as Java (JSSE) recognizes such certificates, because their root certs are in your JRE's lib/security/cacerts truststore. If however the server uses a self-signed certificate (or one signed by an in-house CA) you'll run into problems with messages like "ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target". (This is fairly common, particularly in development; and including e.g. a default installation of a commercial application server such as BEA's or IBM's.)
There are several ways around this:
1. The "traditional" solution is to add the server certificate to a truststore using the JDK keytool, and then specifiy the global Java system property javax.net.ssl.trustStore to point to that file.
2. Particularly during development, maybe you are OK to just accept ALL (self-signed) certificates. You can do that by running the following two lines once in some startup code: (EasySSLProtocolSocketFactory is included in XFire starting with v1.2; for earlier versions download the full commons-ssl JAR like below.)
ProtocolSocketFactory easy = new EasySSLProtocolSocketFactory(); Protocol protocol = new Protocol("https", easy, 8443); Protocol.registerProtocol("https", protocol);
3. However, the EasySSLProtocolSocketFactory explicitly says in its JavaDoc that: "This socket factory SHOULD NOT be used for productive systems due to security reasons, unless it is a concious decision and you are perfectly aware of security implications of accepting [all] self-signed certificates.". A better way might be to accept just that one self-signed certificate, or all certificates signed by an in-houce CA. Indeed, the not-yet-commons-ssl library has a programmatic solution for this, using the following few lines: (You'll need to download not-yet-commons-ssl from for this code.)
// Technique similar to HttpSecureProtocol protocolSocketFactory = new HttpSecureProtocol(); // "/thecertificate.cer" can be PEM or DER (raw ASN.1). Can even be several PEM certificates in one file. TrustMaterial trustMaterial = new TrustMaterial(getClass().getResource("/thecertificate.cer")); // We can use setTrustMaterial() instead of addTrustMaterial() if we want to remove // HttpSecureProtocol's default trust of TrustMaterial.CACERTS. protocolSocketFactory.addTrustMaterial(trustMaterial); // Maybe we want to turn off CN validation (not recommended!): protocolSocketFactory.setCheckHostname(false); Protocol protocol = new Protocol("https", (ProtocolSocketFactory) protocolSocketFactory, 8443); Protocol.registerProtocol("https", protocol);
This is of course similar to 1. but avoids dealing with a global Java system property that other code may also depends on and a filename, e.g. you can the certificate in a classpath resource as well. -- If you don't have a .cer (same as a .pem) file of your server, the openssl tool (available from Cygwin under Windows) will fetch it. Not-yet-commons-ssl.jar also contains a command-line tool for fetching the certificate.
openssl s_client -showcerts -prexit -connect myhost.mydomain:port # not-yet-commons-ssl.jar can do the same thing: java -jar not-yet-commons-ssl-0.3.7.jar -t myhost.mydomain:port
This will have openssl print the certificate(s) of that web server; you can copy/paste the relevant (or certificate lines between (and including) the BEGIN/END CERTIFICATE lines into some thecertificate.cer text file. (BTW: Make sure there is a no trailing carriage return after the END line, else Java's keytool will have trouble with the file, in case you are planning to import it to a truststore.)
Security note: downloading the certificate directly from the SSL handshake using "openssl s_client" or "not-yet-commons-ssl.jar" is not safe. In a dev environment it's okay. But in a production environment it leaves you susceptible to the oft-cited man-in-the-middle. It's safer than EasySSLSockeyProtocolFactory because you only download the certificate one time, whereas EasySSLSockeyProtocolFactory is always vulnerable, with every socket created. But nonetheless you should try to acquire the self-signed certificate through a different medium, maybe email (with encryption?), fax, telephone, letter mail, usb-drive, etc. Or if the self-signed cert is hosted on a properly signed "https" site, that's also okay (e.g.).
4.Currently the code above allows for the user to establish an HTTPS connection with the server, but it does not account for sending a client certificate for mutual authentication. To send a client certificate for mutual certificate authentication, you can add the code below. The myKeystore.key file contains your private key and the certificate provided by your CA. For more details on using EasySSLProtocolSocketFactory for your clients, you can visit their examples page: Examples
char[] keytStorePass = new String("changeit").toCharArray(); KeyMaterial key = new KeyMaterial(new File("myKeystore.key"), keyStorePass); protocolSocketFactory.setKeyMaterial(key);
Proxy Support
Proxy support looks very similar to the username/password scenario:
// Create your client Client client = ....; // Or get it from your proxy Client client = Client.getInstance(myClientProxy); client.setProperty(CommonsHttpMessageSender.HTTP_PROXY_HOST, "host"); client.setProperty(CommonsHttpMessageSender.HTTP_PROXY_PORT, "8080");
Proxy Authentication
To use proxy authentication you need to use following code :
// Create your client Client client = ....; // Or get it from your proxy Client client = Client.getInstance(myClientProxy); client.setProperty(CommonsHttpMessageSender.HTTP_PROXY_HOST, "host"); client.setProperty(CommonsHttpMessageSender.HTTP_PROXY_PORT, "8080"); client.setProperty(CommonsHttpMessageSender.HTTP_PROXY_USER, "proxyuser"); client.setProperty(CommonsHttpMessageSender.HTTP_PROXY_PASS, "proxypassword");
HTTP Chunking
You'll need to enable HTTP chunking on the client if you are sending large files which can't be cached in memory:
import org.codehaus.xfire.transport.http.HttpTransport; Client client = ....; client.setProperty(HttpTransport.CHUNKING_ENABLED, "true");
Custom HTTP Headers
You can send your custom HTTP headers along with SOAP message with following code :
Map headers = new HashMap(); headers.put("header1", "value1"); client.setProperty(CommonsHttpMessageSender.HTTP_HEADERS, headers);
|
http://docs.codehaus.org/display/XFIRE/HTTP+Transport
|
crawl-002
|
en
|
refinedweb
|
This page is for brainstorming how REST support should work in XFire (or possibly a new project if xfire ends up being too heavy weight).
Dan's First Take
Creating a Service
public class CustomerService { @RestMethod(methods={HttpMethod.GET}) Customer getCustomer(String id); @RestMethod(methods={HttpMethod.DELETE}) void deleteCustomer(String id); @RestMethod(methods={HttpMethod.POST}) @WebResult(name="customerId") String addCustomer(Customer customer); @RestMethod(method={HttpMethod.PUT}) void updateCustomer(Customer customer); }
public enum HttpMethod { DELETE, GET, PUT, POST }
Mapping data to method parameters
The information to invoke our service could come from a number of places:
- URI Path Info
- @Path(2) - would select the first query parameter - i.e. "123" in "/customer/123"
- @QueryParameter("customerId") - would select the query parameter with the name "customerId".
- @RegexPath("someregexexpression") - would select some stuff from the uri
- HTTP Headers
- @HttpHeader("customerId") - would select the HTTP header with the name "customerId"
- XML in a POST/PUT method
- This can be done with JAXB, XMLBeans, etc.
Mapping the Operations to URIs
This could be done in the interface a couple different ways. At the class level:
@RestService(uri="/customer/") public class CustomerService { ... }
At the method level
public class CustomerService { @RestMethod(methods={HttpMethod.GET}, uri="/customer") Customer getCustomer(String id); ... }
At the service registration level:
CustomerService customerService = ...; ServiceRegistry registry = ...; registry.register("/internal-customers", customerService); registry.register("/external-customers", anotherCustomerService);
Questions
- What is the best way to map operations to URIs?
- Is there a good syntax to map URI fragments to method parameters
- What about MIME?
- Should this framework allow non XML responses? i.e. could it return a JPEG? - the bigger question is can XFire support that....
May 22, 2006
bsnyder says:I've been thinking of this same concept in ServiceMix to allow for direct exposu...
I've been thinking of this same concept in ServiceMix to allow for direct exposure of endpoints via HTTP methods (i.e., mapping endpoints to HTTP methods). I figured that the endpoint name could be used as the endpoint portion of the URI. I'd really like to see these two concepts come together because I think that ServiceMix would then be able to support REST in a manner that is sufficiently simple but still powerful for users.
May 22, 2006
Dan Diephouse says:Yeah, that sounds like a good idea. Hiram echoed similar sentiments about method...
Yeah, that sounds like a good idea. Hiram echoed similar sentiments about method names.
|
http://docs.codehaus.org/display/XFIRE/REST
|
crawl-002
|
en
|
refinedweb
|
Memo to self: I really need to update the scripts used to edit this site so they use a real XML parser instead of regular expressions, which don't even work on my PowerBook anyway. The only excuse I have for this bogosity is that these scripts predate XML by a year or two.
Second memo to self: I have to figure out whether it's BBEdit on Mac OS X, rsync, or something else that keeps corrupting all my UTF-8 files when I move them from the Linux box to the Powerbook.
Day 2 of XML Europe, More stream of conscienceness notes from the show, though probably fewer today since I also have to prepare for and deliver my own talk on SAX Conformance Testing. I'll put the notes, paper, and software for that up here next week when I return to the U.S., and have the time to discuss it on various mailing lists.
Memo to conference organizers: open wireless access at the conference is a must in 2004. If the venue won't allow this, find another venue!
Memo to conference attendees: ask the conference if they provide open wireless access. If the conference doesn't, find another conference!
Having wireless access radically changes the experience at the conference. It enables many things (besides net surfing in the boring talks). Live note taking and Rendezvous enable the audience to communicate with each other and comment on the talks in real time without disturbing others. When you're curious about a speaker's point, it's easy to Google it. Providing wireless access makes the sessions much more interactive.
The morning began with a session entitled, "Topic Maps Are Emerging. Why Should I Care?" Unfortunately the question in the title wasn't reallly answered in the session. I've been hearing about topic maps for years, and have yet to see what they (or RDF, or OWL, or other similar technologies) actually accomplish. What application is easier to write with topic maps than without? What problem does this stuff actually solve? All I really want to hear is one or two clear, specfic examples and use cases. So far I haven't seen one.
Next Alexander Peshkov is talking about a RELAX NG schema for XSL FO.
After some technical glitches, Uche Ogbuji is talking about XML good practices and antipatterns in a talk entitled "XML Design Principles for Form and Function". Subjects include (I love these names)
javaelement
He doesn't like "hump case" (camel case).
Using attributes to qualify other attributes is a big No-No. If you're doing this, you're swimming upstream. You should switch to elements.
Envelope elements (
company contains
employees contains
employee;
library contains
books contains
book) makes processing easier; but not always. Use them only if they really represent something in the problem domain, not just to make life easier for the processing tools.
Don't overuse namespaces. He (unlike Tim Bray) likes URNs for namespaces, mostly to avoid accidental dereferencing. He also suggests RDDL. He suggests "namespace normal form" Declare all namespaces at the top of the document. Do not declare two different prefixes for the same namespace.
A very good talk. I look forward to reading the paper. FYI, he's a wonderful speaker; probably the best I've heard here yet. (Stephen Pemberton and Chris Lilley were pretty good too.) Someone remind me to invite them to SD next year.
Componentize XML. Avoid large (gigabyte) documents.
Be wary of reflex use of data typing. Pre-packaged data types often don't fit your problem.
"Enforce well-formedness checks at every application boundary."
Forget "Binary XML." Use gzip. "The idea of binary XML flies in the face of all the concepts that make XML work."
The
acetominophen paracetamol acid test
for markup vocabularies: Show a smaple document to a typical XML-aware but non-expert user. Does it give them a headache?
Next up is Brandon Jockman of Innodata Isogen on "Test-Driven XML Development". Hmm, the A/V equipment in this room seems to be giving everyone fits today. It worked well yesterday. This does not bode well for my presentation this afternoon.
One thing I'm noting in this and several of the other talks is that in a mere 45-minute session the traditional tripartite outline structure (tell your audience what you're going to tell them, tell them, and then tell them what you told them) doesn't really work. There's not enough time to do it, nor is the talk long enough that it's necessary. At most summarize the talk in one sentence, not even an entire slide. In fact the title of the talk (if it isn't too cute) is often a sufficient summary.
"XSLT gives you a really big hammer to hit yourself with."
He suggests using Eric van der Vlist's
XSLTunit for writing XSLT that tests XSLT.
Also recommends XMLUnit for the .NET folks.
I should look at this to see if they're any good ideas
here I can borrow for XOM's
XOMTestCase class.
Mark Scardina, "owner" of Oracle's XML Developer Kit, is talking about "High Performance XML Data Retrieval"
XPath is the preferred query language, apparently because of its broad support in different standards like DOM, XSLT, and XQuery.
The DOM Working Group is finished and will not be rechartered. DOM Level 3 XPath is limited to XPath 1.0. Multiple XPath queries require multiple tree traversals (at least in a naive, non-caching implementation -ERH).
High performance requirements include managed memory resources, even for very large (gigabyte) documents. This requires streaming, but SAX/StAX aren't good fits. Also need to handle multiple XPaths (i.e. XPath location paths) with minimum node traversals. Knowing the XPath in advance helps. Will not handle situation where everything is dynamic. This must support both DTDs and schemas (and documents with neither).
These requirements led to "Extractor for XPath." This is based on SAX, for streaming and multicasting support. First you need to register the XPaths and handlers. This absolutizes the XPaths. Then Extractor compiles XPaths. This requires determining whether or not the XPath is streamable. Can reject non-streamablee XPaths. It also builds a predicate table and an index tree.
"XPath Tracking" maintains XPath state and matches document XPaths with the indexed XPaths. XPath is implemented as a state machine implemented via a stack. It uses fake nodes to handle /*/ and //. Output sends matching XPaths along with document content. Henry Thompson seems skeptical of the performance of the state machine. He thinks a ?bottom-up parser? might be much faster. I really don't understand this. I'm just copying Scardina's notes.
I ran all the way across the convention hall carrying my sleeping laptop,
something which I hate to do,
(Has anyone noticed that age is directly correlated to the care one takes of computer equipment?
I am amazed at how cavalierly the students at Polytechnic treat their laptops.
I suspect it involves both the cost and fragility of computers when one first learned
to use them. At the rate we're going, children born this year will be laying hacky-sack with their
laptops in the school yard.) to catch
Sebastian Rahtz talking about
"A Unified Model for Text Markup: TEI, DocBook, and Beyond."
The "Beyond" part inlcudes includes other formats like HTML and MathML.
The main purpose of this seems to be to allow
DocBook to be used in TEI and vice versa, for elements that one has that the other has
no real equivalent; e.g. a DocBook
guimenu element in a mainly TEI document.
This is done with RELAX NG schemas. He recommends David Tolpin's RNV parser
and James Clark's emacs mode for XML.
That's it for today. I'm going to wander into the park behind the convention center to see if it looks like a good site for some birding. Come back tomorrow for updates from the final day of the show.
|
http://www.cafeconleche.org/oldnews/news2004April20.html
|
crawl-002
|
en
|
refinedweb
|
Authors: Paul Hammant
Overview.
Types of I. }
Component Configuration:
public class SomeDaemonComponent implements Startable { public void start() { // listen or whatever } public void stop() { } // other methods }
Notes.
IoC Exceptions.
|
http://docs.codehaus.org/display/PICO/Inversion+of+Control
|
crawl-002
|
en
|
refinedweb
|
Lopy with pysense shield + deep sleep issues
I encountered this issue "IndexError: bytes index out of range" when i tried to get the deep sleep aspect to work on lopy with a pysense shield. The error occurred after I tried to execute
wake_s = ds.get_wake_status() as shown below.
from deepsleep import DeepSleep
ds = DeepSleep()
wake_s = ds.get_wake_status()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/flash/lib/deepsleep.py", line 144, in get_wake_status
File "/flash/lib/deepsleep.py", line 77, in peek
IndexError: bytes index out of range
Below are the links which i used to get started with using the pysense and lopy.
The information below shows the lopy os.uname() after the upgrade.
import machine
os.uname()
(sysname='LoPy', nodename='LoPy', release='1.8.0.b1', version='v1.8.6-760-g90b72952 on 2017-09-01', machine='LoPy with ESP32', lorawan='1.0.0')
Also the lopy is not going to deep sleep mode if i just use the following commands:
from deepsleep import DeepSleep
ds = DeepSleep()
ds.go_to_sleep(60)
Your help is much appreciated.
@andy12 said in Lopy with pysense shield + deep sleep issues:
wake_s = ds.get_wake_status()
Hi, I have the same problem, maybe Did you resolve it?
|
https://forum.pycom.io/topic/1787/lopy-with-pysense-shield-deep-sleep-issues
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
WiPy ADC resolution
It seems I'm unable to vary the ADC resolution from 12 bits. The doco says 9, 10, 11 and 12 bits are valid.
import pycom import machine pycom.heartbeat(False) adc = machine.ADC() apin = adc.channel(pin='P13',attn=adc.ATTN_11DB) adc.init(bits=11) print ("Count:",apin()," Voltage:",apin.voltage())
The above code gives:
Traceback (most recent call last): File "<stdin>", line 9, in <module> ValueError: invalid argument(s) value >
The only value that doesn't give the error is "bits=12".
Is my syntax correct?
@robert-hh Ok thanks for that. Glad to perhaps make a small contribution to an improvement to the code :-)
@Jonno I raised an issue about adc.init() which also contains the fixed function code.
I could also make a PR, starting the fight with the git dragon.
@robert-hh thanks for that. machine.ADC(bits=11) is a better way of defining the resolution anyway.
@Jonno: There is indeed an inconsistency in the code. While the adc instance creating call accepts all values for bits between 9 and 12, the adc.init() call only accepts 12. So you can use for instance:
adc = machine.ADC(bits=11)
Edit: Looking at the implementation of adc.init(), it looks like the assignment of the bits value to pyb_adc_obj_t->width is missing. Then, the default value 12 is used. So it may be wrong anyhow.
|
https://forum.pycom.io/topic/5068/wipy-adc-resolution
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Each namespace has an owning user namespace and now there is not wayto discover these relationships.Pid and user namepaces are hierarchical. There is no way to discoverparent-child relationships too.Why we may want to know relationships between namespaces?One use would be visualization, in order to understand the running system.Another would be to answer the question: what capability does process X have toperform operations on a resource governed by namespace Y?One more use-case (which usually called abnormal) is checkpoint/restart.In CRIU we age going to dump and restore nested namespaces.There [1] was a discussion about which interface to choose to determingrelationships
|
https://lkml.org/lkml/2016/7/14/629
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Override Flask configuration via Cookie at runtime.
Project description
flask-config-override
[]()
This extension allows to change the configuration of a Flask application at runtime. This behavior is controlled by cookie and therefore is contained to the session of an unique user; configuration changes are not affecting other users.
A common usage is to quickly change options in staging environment without having to redeploy configuration changes. For example, we use it for an external API location or a feature switch like using minified Javascript files or not.
The configuration options able to be overridden are limited and configurable as well (using CONFIG_OVERRIDE_EXTENDABLE_VARS). This option can NOT be overridden for security reason.
The idea is to replace the configuration object of a Flask application by a proxy object, whom behavior can be controlled/changed upon request while exposing the same interface as a Flask configuration. The extension also provide a blueprint (default base url to /config_override/) to control the cookie via some simple HTTP calls; this is automatically attached to the application.
Installation
Via Pypi:
pip install flask-config-override
Usage
Once installed, first attach the extension to your Flask application:
from flask import Flask from flask.ext.config_override import ConfigOverride
app = Flask(__name__) app.config[‘FOO’] = ‘bar’
# Enable the override for the DEBUG option (default to false) app.config[‘CONFIG_OVERRIDE_EXTENDABLE_VARS’] = [‘FOO’] config_override = ConfigOverride(app)
# configure your routes and what not…
Launch your app, then open your browser and go to this url to setup the FOO option to another value; here “toto”:
Your session will now run with the settings FOO set to the new value. You can access it normally from app.config[‘FOO’] within the context of a request.
To see the current changes, you can visit this url:
And to remove the changes, you just need to clear your cookie or go there:
Tests
- First install nose for test discovery: pip install nose
- Then run the tests within a virtual environment: nosetests
Feel free to post issues, pull requests in github or contact me directly on twitter @el_boby.
Immediate TODOs
- test for cookie_utils
- test for proxy_config (based on flask one)
- documentation API (sphinx)
TODO
- Override by Environment variables.
- Flask Debug Toolbar integration.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/Flask-Config-Override/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Computing without a Computer
. To try to minimize these human errors, shortcuts and aids of one form or another were developed.
A common computational problem is to solve equations of some number of variables. The tool that was developed for this class of problem is the nomograph, or nomogram. A nomograph uses a graphical representation of an equation to make solving the equation as simple as setting down a straightedge and reading off the result. Once a nomograph is constructed, it is one of the fastest ways to solve an equation by hand.
In this article, I explore some common nomographs that many of you likely will have seen, and I take a look at a Python package, PyNomo, that you can use to create your own. I also walk through creating some new nomographs, which hopefully will inspire you to try creating some too.
First, let me explain what a nomograph actually is. Electrical engineers already should have seen and used one example, the Smith chart. This chart provides a very quick way to solve problems involved with transmission lines and matching circuits. Solving these types of problems by hand was a very tedious task that wasted quite a lot of time, so the introduction of the Smith chart increased productivity immensely.
Figure 1. With a Smith chart, you can work on problems around transmission lines and circuit matching.
A Smith chart is scaled in normalized impedance, or normalized admittance, or both. The scaling around the outside is in wavelengths and degrees. The wavelength scale measures the distance along the transmission line between the generator and the load. The degree scale measures the angle of the voltage reflection coefficient at that point. Since impedance and admittance change as frequency changes, you can solve problems only for one frequency at a time. The result calculated at one frequency is a single point on the Smith chart. For wider bandwidth problems, you just need to solve for a number of frequencies to get the behaviour over the full range. But, because this isn't meant to be a lesson in electrical engineering, I will leave it as an exercise for the reader to see just how many other problems can be solved with a Smith chart.
Another example, which should be recognizable to any parent, is the height/weight charts used by doctors. These charts allow a doctor to take the weight and height of a child and see where he or she fits on a nonlinear scale that compares one child to the available statistics of a population very quickly. This is much easier than plugging those values into an equation and trying to calculate it manually.
But, what can you do if you want to use a totally new type of nomograph? Enter the Python module PyNomo. The easiest way to install PyNomo is to use pip. You would type:
pip install PyNomo
You may need to preface this command with
sudo if you want it
installed as a system module. To get started, you need to import everything
from the nomographer section with:
from pynomo.nomographer import *
This section contains the main Nomographer class that actually generates the nomograph you want to create. There are ten types of nomographs that you can create with PyNomo:
Type 1: three parallel lines
Type 2: N or Z
Type 3: N parallel lines
Type 4: proportion
Type 5: contour
Type 6: ladder
Type 7: angle
Type 8: single
Type 9: general determinant
Type 10: one curved line
Each of these also is described by a mathematical relationship between the various elements. For example, a type 1 nomograph is described by the relationship:
F1(u1) + F2(u2) + F3(u3) = 0
Each element of a given nomograph must be of one type or another. But, they can be mixed together as separate elements of a complete nomograph. A simple example, borrowed from the PyNomo examples on the main Web site, is a temperature converter for converting between Celsius and Fahrenheit degrees. It is generated out of two type 8 blocks. Each block is defined by a parameter object, where you can set maximum and minimum values, titles and tick levels, as well as several other options. A block for a scale going from –40 to 90 degrees Fahrenheit would look like this:
F_para={'tag':'A', 'u_min':'-40.0, 'u_max':'90.0, 'function':lambda u:celcius(u), 'title':r'$^\circ$ F', 'tick_levels':4, 'tick_text_levels':3, 'align_func':celcius, 'title_x_shift':0.5 }
You will need a similar parameter list for the Celsius scale. Once you have that, you need to create block definitions for each of the scales, which looks like this:
C_block={'block_type':'type_8', 'f_params':C_para }
The last step is to define a parameter list for the main Nomographer class. For the temperature converter, you can use something like the following:
main_params={'filename':'temp_converter.pdf', 'paper_height':20.0, 'paper_width':2.0, 'block_params':[C_block,F_block], 'transformations':[('scale paper')] }
Now you can create the nomograph you are working on with the Python command:
Nomographer(main_params)
Figure 2. A simple nomograph is a Celsius-Fahrenheit temperature conversion scale.
A more complicated example is a nomograph to help with the calculations involved in celestial navigation. To handle such a complex problem, you need to use a type 9 nomograph. This type is a completely general form. You need to define a determinant form to describe all of the various interactions. If the constituents are functions of one variable, they will create a regular scale. If they are of two variables, they will create a grid section. For example, one of the single scales in this example would look like this:
'g':lambda u:-cos(u*pi/180.0)
Whereas the grid is defined by:
'g_grid':lambda u,v:-sin(u*pi/180.0)*sin(v*pi/180.0)
Figure 3. You even can do something as complicated as celestial navigation with a nomograph.
Once this nomograph is constructed, you can use it to compute the altitude azimuth.
PyNomo goes through several steps in generating the nomograph. The last step is to apply any transformations to the various parts. Transformations to individual components can be applied only to type 9 nomographs. If you do apply transformations to individual components, you need to make sure that relative scalings between the various parts are still correct. For other nomograph types, transformations can be applied only to the entire nomograph. There aren't a large number of transformations available yet, but there are enough to handle most customizations that you may want to make. The transformations available are:
scale paper: scale the nomograph to the size defined by paper_height and paper_width.
rotate: rotates the nomograph through the given number of degrees.
polygon: applies a twisting transformation to the tops and bottoms of the various scales.
optimize: tries to optimize numerically the sum squared lengths of the axes with respect to paper area.
With these transformations, you should be able to get the look you want for your nomograph.
Now that you know about nomographs, and even more important, how to make them, you really have no excuse to avoid your trip to that isolated South Pacific island. Go ahead and play with PyNomo and see what other kinds of nomographs you can make and use.
|
https://www.linuxjournal.com/content/computing-without-computer
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
InkCollector.SetAllTabletsMode Method
Sets the InkCollector object to collect ink from any tablet attached to the Tablet PC.
Definition
Parameters
> > >> > >
Exceptions
COMException
::
ObjectDisposedException
: The InkCollector object is disposed.: The InkCollector object is disposed.
Remarks
This is the default mode for an InkCollector object. To allow the InkCollector object to collect ink from only one tablet, call the SetSingleTabletIntegratedMode method.
Note: The InkCollector object must be disabled before calling this method. To disable the InkCollector object, set the Enabled property to false. After calling the SetAllTabletsMode method, enable the InkCollector object by setting the Enabled property to true.
When an InkCollector object switches from collecting ink by using a single tablet to collecting ink by using all tablets, the Cursors property is set to the empty collection.
Note: If the SetAllTabletsMode method is called with the useMouseForInput parameter set to true (or no parameters), then the mouse is used as an input device. If the SetAllTabletsMode method is then called with the useMouseForInput parameter set to false, the mouse is not removed from the Cursors property.
Examples
[C#]
This C# example calls SetAllTabletsMode on a new InkCollector object, theInkCollector, with the useMouseForInput parameter set to false if more than one tablet is available.
using Microsoft.Ink; // . . . Tablets theTablets = new Tablets(); InkCollector theInkCollector = new InkCollector(); if (theTablets.Count > 1) theInkCollector.SetAllTabletsMode(false); else theInkCollector.SetAllTabletsMode();
[VB.NET]
This Microsoft® Visual Basic® .NET example calls SetAllTabletsMode on a new InkCollector object, theInkCollector, with the useMouseForInput parameter set to false if more than one tablet is available.
Imports Microsoft.Ink ' . . . Dim theTablets As New Tablets() Dim theInkCollector As New InkCollector() If theTablets.Count > 1 Then theInkCollector.SetAllTabletsMode(False) Else theInkCollector.SetAllTabletsMode() End If
See Also
|
https://docs.microsoft.com/en-us/previous-versions/aa515523(v=msdn.10)
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Yes I know, it's a strange title, but bear with me, you'll soon see why.
Every so often, you see things that seem to confuse all developers, regardless of time in the industry, or level of skill. The process of performing Boolean operations, unfortunately, seems to be one of these things.
I'm writing this post with the .NET developer in mind, and any code presented will be in C#, but the concepts I'm about to discuss are applicable to any programming language.
So, What Do We Mean by Bit Twiddling?
Well, to clarify that, we first need to take a trip back to Computer Science 101. For many younger developers who've been schooled using a language-based approach, this might actually be something new to them. To those who've done a dedicated CS or electronics course, this will probably just serve as a bit of a refresher.
The chips in your computer communicate using a series of electronic pulses; these pulses travel along "Busses" in groups. The sizes of these groups are what determine the "Bit Size" of your PC.
A 32-bit machine will group these busses into groups of 32 individual wires, each carrying one set of pulses. Likewise, a 64-bit machine will make groups of 64 wires.
I'm not going to deep dive into this subject because that would take a whole book. Instead, we'res interested only in the behaviour of one wire at a time.
Why?
Well, because the behaviour of one wire is perfectly modelled in computer software, when you start looking at performing Boolean operations on your data.
The wires in your PC can either have an electric current in them, or not. This typically manifests itself as +5 volts or 0 volts. When looking at the same thing from a software point of view, we can easily say that +5 volts is equal to "True" and 0 volts is equal to "False."
Once you start to understand this, you begin to understand how numbers are represented by your PC. Take the following example:
Table 1: 8 Bit binary values
In Table 1, I listed the first 8 wires, or as it's more commonly known, a "Byte." Each wire from the right going towards the left is given a power of 2 (because it can only be in one of two states: 1 or 0), and each power of 2 as you go left is multiplied twice by itself.
In the first row, the "1" indicates that wire 1 has +5 volts in it, while the "0"s in the other wires give 0 volts. This is how the computer represents a value of "1" as electrical signals, to mean a data value of "1" in your program.
In the second number, we have "128" because only the wire with value "128" has +5 volts in it.
The 3rd number is 255, because if you add up all the numbers 1 through to 128, you get 255, and all the wires for that number have +5 volts in them.
If you wanted to represent a value of "24," you'd put +5 volts into wires "16" and "4."
When you look at these values in your program code, if you convert them to binary notation, you'll see exactly the same thing, with 0s and 1s in the appropriate columns.
Bigger numbers are represented by adding more wires. In Table 1, we could add a 9th column equal to 256 (128 * 2) and that would give us a 9-bit number, whose maximum value would be "512."
All This Is Interesting Stuff, but if C# Takes Care of All This for Me, Why Do I Need to Care?
You need to care because this is how your if/then statements work, and how your transparent graphics merge pixels without destroying other graphics.
The very fundamentals of how a computer makes its decisions revolve around seven very basic logic operations:
- And
- Nand
- Or
- Nor
- Xor
- Nxor
- Not (Inverse)
These basic logic rules govern pretty much everything your PC does, and are the absolute fundamentals of how the CPU in your PC decides what to do based on what numerical instructions it's given.
It also happens that knowing this stuff has a load of uses in software, too.
The rules are simple to interpret, and each one has a defined set of inputs that give exactly one output. The following tables are the Truth Tables used to describe these rules:
AND
The rule for an AND states that the output is "1" only when all of its inputs are "1."
NAND
The rule for a NAND states that the output is "1" only when all of its inputs are not equal to "1."
OR
The rule for an OR states that the output is "1" only when one or more of its inputs are "1."
NOR
The rule for a NOR states that the output is "1" only when one or more of its inputs are not "1."
XOR
The rule for an XOR states that the output is "1" only when all of its inputs are different to each other.
NXOR
The rule for a NXOR states that the output is "1" only when all of its inputs are not different to each other.
NOT (Inverter)
The rule for a NOT states that the output is to be the opposite of the input.
Enough Theory. Let's See Some Code.
Create a simple console program in Visual Studio, and make sure program.cs has the following code in it:
using System; namespace bit_twiddling { class Program { static void Main() { int num1 = 1; int num2 = 2; int result = num1 & num2; Console.WriteLine("AND"); Console.WriteLine("Input A = {0} [{1}]", num1, Convert.ToString(num1, 2)); Console.WriteLine("Input B = {0} [{1}]", num2, Convert.ToString(num2, 2)); Console.WriteLine("Result = {0} [{1}]", result, Convert.ToString(result, 2)); } } }
If you press F5 and run this, you should see the following:
Figure 1: Output from our program ANDing two numbers
The Binary representation of "1" is "00000001" and the binary representation of "2" is "00000010." If you AND them as per the previous logic rules, you get the following:
Looking at the two columns with a 0 in and referring to the truth table:
1 AND 0 = 0
0 AND 1 = 0
So, our result gives us a 0.
Let's now change the code slightly and make num2 equal to 3
int num2 = 3;
Then run our program again. What do we see this time?
Figure 2: Output from AND operation with the second number changed to 3
You can see from Figure 2 that our result is now equal to input A, and what we've effectively done is used the input in num1 to turn off any bits in num2 that we don't care about.
This comes in handy, for example, when we only want part of a number.
Let's alter our code once more, and this time try an OR operation.
Change your code in program.cs so it looks like the following:
using System; namespace bit_twiddling { class Program { static void Main() { int num1 = 1; int num2 = 2; int result = num1 | num2; Console.WriteLine("OR"); Console.WriteLine("Input A = {0} [{1}]", num1, Convert.ToString(num1, 2)); Console.WriteLine("Input B = {0} [{1}]", num2, Convert.ToString(num2, 2)); Console.WriteLine("Result = {0} [{1}]", result, Convert.ToString(result, 2)); } } }
Now, try running it and you should see the following:
Figure 3: Result of the OR operation
This time you can clearly see that the opposite of the AND operation has happened, and you've set the 1st bit in your result using the input in num1.
Using an OR operation effectively sets parts of a number without changing parts already present.
An OR operation is often used in computer graphics when merging two images together, and ensures that existing information is preserved.
In .NET, the operators you use for these operations are as follows:
- & = AND
- | = OR
- ^ = XOR
- ! = NOT
To get NAND/NOR and NXOR, simply prefix the output with a NOT; for example:
int result = !(num1 | num2);
This will give you the same result as a NOR; whereas
int result = !(num1 & num2);
will equal the output of a NAND.
I'll leave the XOR and NXOR operations as an exercise for the reader to play with.
Got a tricky .NET problem you can't solve, or simply just want to know if there's an API for that? Hunt me down on Twitter as @shawty_ds or come and visit the Lidnug (Linked .NET) user group on the Linked-In platform that I help run, and let me hear your thoughts. It may even make it into a post in this column.
|
https://mobile.codeguru.com/columns/dotnet/bit-twiddling.html
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
C# Concurrent Collections
Course info
Course info
Description
Learn how to use concurrent collections in multithreaded code! This course is a comprehensive introduction to the concurrent collections. It shows you how to use each of the main collection types: ConcurrentDictionary, ConcurrentQueue, ConcurrentBag, and ConcurrentStack. You'll learn the correct techniques for using these collections to avoid bugs such as race conditions, and also how to use BlockingCollection with the concurrent collections correctly in producer-consumer scenarios. The course rounds off with a look at some concurrent collection best practices.
Section Introduction Transcripts
Introducing the Concurrent Collections
Hello. I'm Simon Robinson welcoming you to this Pluralsight course, C# Concurrent Collections. The concurrent collections are a new set of collection types that Microsoft introduced for. NET 4, which is specifically designed to be used in a multi-threaded environment. The aim of this course is to teach you how to use the concurrent collections safely and correctly and this looks like the course all ready to open, let's look inside. In the box we have the course modules. I've arranged them to show how they depend on each other. This first module is a general overview. I'll mention the purposes of the different types available in the system. collections. concurrent namespace and clarify in what ways concurrent collections can and can't help in a multi-threaded environment. In particular, you will learn the importance of the ConcurrentDictionary as the main general purpose concurrent collection. My teaching style, by the way, is very much that I like to make sure we've covered the concepts adequately and then apply them, which means there's a fair bit of theoretical learning in this first module, and then much more actual coding as we work through the course. The next two modules will go into ConcurrentDictionary in some detail. In module two I'll show you what methods it provides and how it differs from the standard generic dictionary using very simple code. Then in module three we will apply this knowledge using ConcurrentDictionary in a much more realistic, multi-threaded demo. Then we do basically the same thing for a group of collections that I personally like to call the Producer-Consumer collections, concurrent queue, concurrent stack, and concurrent bag. Again, there are two modules, module four introduces these tops, and then module five shows a more realistic demo, which also brings in the BlockingCollection, and we will wind up in the last module with a look at some best practices. Here you will learn a few tips on using concurrent collections effectively and learn how to avoid some common pitfalls, especially related to performance. Now before we begin I just need to mention a couple of prerequisites for the course. Firstly, this course is about concurrent collections, it's not a general threading or thread safety course. I will draw attention to particular points of thread safety and thread synchronization that relate to concurrent collections, but in general I'm assuming you know what a thread is, and you're comfortable, for example, using. NET tasks to do parallel processing. If you find while you're watching this course that you do need a refresher in the threads and tasks I can recommend Joe Hummel's Pluralsight course, Introduction to Async and Parallel Programming in. NET 4. Also, and just as important, I'm assuming you know how to use the standard generic collections, which roughly means the ones in the system. collections. generic namespace. My previous course, C# Collections Fundamentals covers the standard collections, and this concurrent collections course that you're watching now is largely a direct follow on from that course, so if you feel you need a refresher in standard collections do check that out. As far as code goes, I'm using Visual Studio 2013 and. NET 4. 5 for all the demos. Concurrent collections were actually released with. NET 4. 0, so if you're using. net4. 0 and Visual Studio 2010 you should still be able to use the concurrent collections. Bear in mind though that there were originally some performance issues with concurrent back in 4. 0, those were fixed in. NET 4. 5 and are no longer an issue..
|
https://www.pluralsight.com/courses/csharp-concurrent-collections
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
BBC micro:bit
Neopixels
Introduction
If you have used Neopixels with an Arduino, then you know what a joy it is to find out that they can be used with the micro:bit and MicroPython.
Neopixels is the name used by the electronics company Adafruit for a type of RGB LED that they package in different ways. Their guide to Neopixels can be found here. The products they make, sell and distribute are all really good. You can power around 8 LEDs directly from the micro:bit but, for more than that, you need to power them separately. They are meant to work at 5V but they play quite nicely with a few of the 3V3 microcontrollers like the micro:bit.
The Neopixels come in lots of different flavours, strips, individually, on circuit boards, rings. They all have a few things in common. They are chained together and have constant current drivers that means you supply a power and ground connection and one digital pin to control all of the LEDs.
For the programs on this page, I used the Neopixel Jewel and powered it directly from the micro:bit. It looks like this,
The Jewel is a nice choice because it's cheaper than the more impressive items but still pretty bright and well, pretty. Even one of these costs around £7. You need to do a good amount of programming to make sure you make the component earn its keep.
You can see 4 connections in the image. PWR should be connected to 3V on the micro:bit or to an external source 4-5V. GND should be connected to GND, naturally. The pin marked IN needs to be connected to one of the GPIO pins. Here I used pin 0.
Programming
This first program is good place to start with testing your connections and seeing some of the main features of the neopixel module.
from microbit import * import neopixel npix = neopixel.NeoPixel(pin0, 7) while True: for pix in range(0, len(npix)): npix[pix] = (255, 0, 0) npix.show() sleep(500) npix.clear() sleep(500)
The variable npix is set in the third line of code. When we instantiate the object, we select a pin and state the number of Neopixels. The pixels are indexed so that we can refer to them using a number in square brackets after our object variable npix. We change the colour of a pixel by supplying a value from 0-255 for each of the colours red, green and blue. To make the whole display update, we call the show() method.
The point of this program is to test the appearance of different colours and to see how the neopixels are numbered. The order in which they light up is their index, the first pixel being at index 0. This is the order in which they are connected electrically after being connected to the IN pin on the PCB. This information will be useful for using the Neopixels in your own projects.
The next program shows that you can find easier ways to work with setting the colours if you write a function. This one lights all of the LEDs a particular colour. 4 colours are defined at the start of the program. Read a bit more about Python and you can work out some better ways to store this kind of value.
from microbit import * import neopixel npix = neopixel.NeoPixel(pin0, 7) red = (255,0,0) green = (0,255,0) blue = (0,0,255) purple = (255,0,255) def LightAll(col): for pix in range(0, len(npix)): npix[pix] = col npix.show() return while True: LightAll(red) sleep(1000) LightAll(green) sleep(1000) LightAll(blue) sleep(1000) LightAll(purple) sleep(1000)
RGB LEDs, of any variety, are among my favourite components. Once you have made the electrical connections and learned the few statements you need for basic control of the LEDs, the rest is about programming and creativity. Without connecting any more components, you have hours of experimentation ahead to make the coolest light shows. The trick is to make more functions like these that light up the LEDs for you the way that you want.
This final example uses a loop to fade a colour in and out.
from microbit import * import neopixel npix = neopixel.NeoPixel(pin0, 7) def LightAll(col): for pix in range(0, len(npix)): npix[pix] = col npix.show() return while True: for i in range(0,255,5): LightAll((i,0,0)) sleep(20) for i in range(255,-1,-5): LightAll((i,0,0)) sleep(20)
Challenges
- Write a new function to light up all of the LEDs in the strip, one at a time, using the colour of your choice. Have a second argument for the function and make it so that you can specify the delay between each pixel taking on its new colour.
- On the Jewel and the Rings, the lights are in a circle. You can make it look like a pixel is going around the circle if you set a background colour for all of the pixels and then use a loop to quickly turn each pixel to another colour and then back again. Do this with each one in order (except the centre one on a Jewel) and it looks like the pixel is travelling around the circle.
- The Neopixels make great mood lighting. Experiment with different ways of varying the amount of red, green and blue over time and different ways of fading in and out. Try lighting the odd and even numbered pixels with different colours.
- Dancing pixels. It sounds good, whatever it means. You could combine this with some background music played through a buzzer or some hacked headphones.
- Make a function to generate a random colour for you.
- Use a potentiometer, button or accelerometer input to allow the user some control over the colour or pattern shown with the pixels.
- A nice challenging project is to work out how to fade smoothly from one colour to another. Imagine you have the colour (0,128,255) and you want to fade this into (255,255,128). It will need to take the same amount of time for each of the different colour channels to be changed to its new value. Since the amount of change required varies from one channel to the next, you need to work out different sized steps for the changes you make.
- Connect a microphone and you can make a sound-reactive circuit.
- Make some more pretty patterns with the lights.
|
http://www.multiwingspan.co.uk/micro.php?page=neopix
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
My Tokens Support
Q: Is MyTokens supported?
A: Yes, MyTokens is supported in Sharp Scheduler, but please note that since Sharp Scheduler runs on the Host not per Portal it will only have access to tokens that are in namespaces that are available on all Portals. Note that by default token namespaces are not available on all portals and neither can you set a namespace to available on all portals after it has been created.
Workaround:
- create a new namespace available on all portals
- move you token into that namespace (or clone the token and move the clone instead)
Q: How are tokens affected by Context (Portal and User) ?
A: All tokens that use Context(Portal or User) do not work by default because MyTokens will not have access to either the context or the user.
Q: Can I use portal settings or user settings inside a razorscript?
A: Yes, but you must create the portal settings. (using a hardcoded value or a value from the tokens parameters)
Example:
@using System.IO; @{ string someText = "someText"; DotNetNuke.Entities.Portals.PortalSettings myPortal = new DotNetNuke.Entities.Portals.PortalSettings(1); File.WriteAllText("C:\\test\\test.xml",someText); string result = myPortal.PortalId.ToString(); } @result
|
https://docs.dnnsharp.com/sharp-scheduler/my-tokens-support.html
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
How to create custom scss files and import them into a part of your application.
In the previous article, we have set up variables and added custom styling for elements and components. These are lighter customisation tasks, focused on overriding Harmony.
However, some projects and applications will have their own custom tiles and HTML components. This article will show you how to create and import your own custom SCSS stylesheets to your project.
In this demo, we will be added a custom stylesheet to the Codebots Zoo Project. If you want to follow along, you can download the project from the public git repository.
If you have not already, go to
clientside/src/app/tiles/custom/chart/chart.tile.component.html and add the following code to create a custom chart tile.
<div class="chart-tile"> <h2 class="icon-zoo icon-right"> Zoo Chart </h2> <p> Welcome to the chart of the stats of our zoo animals. To quickly navigate you've got the following buttons below. </p> <cb-button-group> <button class="btn-zoo" cb-button>Zoo</button> <button cb-button>Animals</button> <button cb-button>Statistics</button> <button cb-button>View Chart</button> </cb-button-group> </div>
Create a new file inside your target application’s
scss/pages/ folder, and name it
chart.scss. This is where we will add all styles specific to our chart page.
Add the following styles:
.chart-tile { height: 100%; justify-content: center; display: flex; flex-direction: column; h2 { align-items: center; justify-content: center; align-content: center; display: flex; margin-bottom: $space-xl; &:after { font-size: 2rem; } } p { text-align: center; } .btn-group { justify-content: center; .btn { min-width: 40%; } } }
In order to see your changes, we will need to import our newly created stylesheet. This will tell the bots to compile the SCSS in our
charts.scss file.
Custom import
Each sub-folder includes an
import file, where you can add your stylesheet.
Open
scss/pages/import-pages and turn on the protected region. Add the following code to import your custom stylesheet.
@import 'chart.scss'
|
https://codebots.com/library/techies/styling-custom-tiles-and-importing-custom-scss
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Nate Diller wrote:>>> as an example, if a program were to store some things it needs access> to in its executable's attributes, it should have the option of> keeping a hard reference to something, so that it can't be deleted out> from underneath. this enables sane sharing of resources without> ownership tracking problems (see windows DLL hell for details). the> attribute space should be indistinguishable from the rest of the> namespace, and should be able to link (soft or hard) anywhere in the> FS. anything less is too much work for too little reward.>You already have a problem with hardlinks not crossing mount points, butI understand your point. If we can write code for solving the cycleproblem cleanly, it would be best.-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
|
https://lkml.org/lkml/2005/7/6/295
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Motivation
Most of the programming languages are open enough to allow programmers to do things multiple ways for a similar outcome. JavaScript is in no way different. With JavaScript, we often find multiple ways of doing things for a similar outcome, and that's confusing at times.
Some of the usages are better than the other alternatives and thus, these are my favorites. I am going to list them here in this article. I am sure, you will find many of these on your list too.
1. Forget string concatenation, use template string(literal)
Concatenating strings together using the
+ operator to build a meaningful string is old school. Moreover, concatenating strings with dynamic values(or expressions) could lead to frustrations and bugs.
let name = 'Charlse'; let place = 'India'; let isPrime = bit => { return (bit === 'P' ? 'Prime' : 'Nom-Prime'); } // string concatenation using + operator let messageConcat = 'Mr. ' + name + ' is from ' + place + '. He is a' + ' ' + isPrime('P') + ' member.'
Template literals(or Template strings) allow embedding expressions. It has got unique syntax where the string has to be enclosed by the backtick. Template string can contain placeholders for dynamic values. These are marked by the dollar sign and curly braces (${expression}).
Here is an example demonstrating it,
let name = 'Charlse'; let place = 'India'; let isPrime = bit => { return (bit === 'P' ? 'Prime' : 'Nom-Prime'); } // using template string let messageTemplateStr = `Mr. ${name} is from ${place}. He is a ${isPrime('P')} member.` console.log(messageTemplateStr);
2. isInteger
There is a much cleaner way to know if a value is an integer. The
Number API of JavaScript provides a method called,
isInteger() to serve this purpose. It is very useful and better to be aware.
let mynum = 123; let mynumStr = "123"; console.log(`${mynum} is a number?`, Number.isInteger(mynum)); console.log(`${mynumStr} is a number?`, Number.isInteger(mynumStr));
Output:
3. Value as Number
Have you ever noticed,
event.target.value always returns a string type value even when the input box is of type number?
Yes, see the example below. We have a simple text box of type number. It means it accepts only numbers as input. It has an event handler to handle the key-up events.
<input type='number' onkeyup="trackChange(event)" />
In the event handler method, we take out the value using
event.target.value. But it returns a string type value. Now I will have an additional headache to parse it to an integer. What if the input box accepts floating numbers(like, 16.56)? parseFloat() then? Ah, all sorts of confusion and extra work!
function trackChange(event) { let value = event.target.value; console.log(`is ${value} a number?`, Number.isInteger(value)); }
Use
event.target.valueAsNumber instead. It returns the value as the number.
let valueAsNumber = event.target.valueAsNumber; console.log(`is ${value} a number?`, Number.isInteger(valueAsNumber));
4. Shorthand with AND
Let's consider a situation where we have a boolean value and a function.
let isPrime = true; const startWatching = () => { console.log('Started Watching!'); }
This is too much code to check for the boolean condition and invoke the function,
if (isPrime) { startWatching(); }
How about using the short-hand using the AND(&&) operator? Yes, avoid the
if statement altogether. Cool, right?
isPrime && startWatching();
5. The default value with || or ??
If you ever like to set a default value for a variable, you can do it using the OR(||) operator easily.
let person = {name: 'Jack'}; let age = person.age || 35; // sets the value 35 if age is undefined console.log(`Age of ${person.name} is ${age}`);
But wait, it has a problem. What if the person's age is 0(a just born baby, maybe). The age will be computed as 35 (
0 || 35 = 35). This is unexpected behavior.
Enter the
nullish coalescing operator (??). It is a logical operator that returns its right-hand side operand when its left-hand side operand is
null or
undefined, and otherwise returns its left-hand side operand.
To rewrite the above code with the
?? operator,
let person = {name: 'Jack'}; let age = person.age ?? 35; // sets the value 0 if age 0, 35 in case of undefined and null console.log(`Age of ${person.name} is ${age}`);
6. Randoms
Generating a random number or getting a random item from an array is a very useful method to keep handy. I have seen them appearing multiple times in many of my projects.
Get a random item from an array,
let planets = ['Mercury ', 'Mars', 'Venus', 'Earth', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']; let randomPlanet = planets[Math.floor(Math.random() * planets.length)]; console.log('Random Planet', randomPlanet);
Generate a random number from a range by specifying the min and max values,
let getRandom = (min, max) => { return Math.round(Math.random() * (max - min) + min); } console.log('Get random', getRandom(0, 10));
7. Function default params
In JavaScript, function arguments(or params) are like local variables to that function. You may or may not pass values for those while invoking the function. If you do not pass a value for a param, it will be
undefined and may cause some unwanted side effects.
There is a simple way to pass a default value to the function parameters while defining them. Here is an example where we are passing the default value
Hello to the parameter
message of the
greetings function.
let greetings = (name, message='Hello,') => { return `${message} ${name}`; } console.log(greetings('Jack')); console.log(greetings('Jack', 'Hola!'));
8. Required Function Params
Expanding on the default parameter technique, we can mark a parameter as mandatory. First, define a function to throw an error with an error message,
let isRequired = () => { throw new Error('This is a mandatory parameter.'); }
Then assign the function as the default value for the required parameters. Remember, the default values are ignored when a value is passed is as a parameter at the invocation time. But, the default value is considered if the parameter value is
undefined.
let greetings = (name=isRequired(), message='Hello,') => { return `${message} ${name}`; } console.log(greetings());
In the above code,
name will be undefined and that will try to set the default value for it which is the
isRequired() function. It will throw an error as,
9. Comma Operator
I was surprised when I realized, comma(,) is a separate operator and never gone noticed. I have been using it so much in code but, never realized its true existence.
In JavaScript, the comma(,) operator is used for evaluating each of its operands from left to right and returns the value of the last operand.
let count = 1; let ret = (count++, count); console.log(ret);
In the above example, the value of the variable
ret will be, 2. Similar way, the output of the following code will be logging the value 32 into the console.
let val = (12, 32); console.log(val);
Where do we use it? Any guesses? The most common usage of the comma(,) operator is to supply multiple parameters in a for a loop.
for (var i = 0, j = 50; i <= 50; i++, j--)
10. Merging multiple objects
You may have a need to merge two objects together and create a better informative object to work with. You can use the spread operator
...(yes, three dots!).
Consider two objects, emp and job respectively,
let emp = { 'id': 'E_01', 'name': 'Jack', 'age': 32, 'addr': 'India' }; let job = { 'title': 'Software Dev', 'location': 'Paris' };
Merge them using the spread operator as,
// spread operator let merged = {...emp, ...job}; console.log('Spread merged', merged);
There is another way to perform this merge. Using
Object.assign(). You can do it like,
console.log('Object assign', Object.assign({}, emp, job));
Output:
Note, both the spread operator and the Object.assign perform a shallow merge. In a shallow merge, the properties of the first object are overwritten with the same property values as the second object.
For deep merge, please use something like,
_merge of lodash.
11. Destructuring
The technique of breaking down the array elements and object properties as variables called,
destructuring. Let us see it with few examples,
Array
Here we have an array of emojis,
let emojis = ['🔥', '⏲️', '🏆', '🍉'];
To destructure, we would use the syntax as follows,
let [fire, clock, , watermelon] = emojis;
This is the same as doing,
let fire = emojis[0]; but with lots more flexibility.
Have you noticed, I have just ignored the trophy emoji using an empty space in-between? So what will be the output of this?
console.log(fire, clock, watermelon);
Output:
Let me also introduce something called the
rest operator here. If you want to destructure an array such that, you want to assign one or more items to variables and park the rest of it into another array, you can do that using
...rest as shown below.
let [fruit, ...rest] = emojis; console.log(rest);
Output:
Object
Like arrays, we can also destructure objects.
let shape = { name: 'rect', sides: 4, height: 300, width: 500 };
Destructuring such that, we get a name, sides in a couple of variables and rest are in another object.
let {name, sides, ...restObj} = shape; console.log(name, sides); console.log(restObj);
Output:
Read more about this topic from here.
12. Swap variables
This must be super easy now using the concept of
destructuring we learned just now.
let fire = '🔥'; let fruit = '🍉'; [fruit, fire] = [fire, fruit]; console.log(fire, fruit);
13. isArray
Another useful method for determining if the input is an Array or not.
let emojis = ['🔥', '⏲️', '🏆', '🍉']; console.log(Array.isArray(emojis)); let obj = {}; console.log(Array.isArray(obj));
14. undefined vs null
undefined is where a value is not defined for a variable but, the variable has been declared.
null itself is an empty and non-existent value that must be assigned to a variable explicitly.
undefined and
null are not strictly equal,
undefined === null // false
Read more about this topic from here.
15. Get Query Params
window.location object has a bunch of utility methods and properties. We can get information about the protocol, host, port, domain, etc from the browser URLs using these properties and methods.
One of the properties that I found very useful is,
window.location.search
The
search property returns the query string from the location URL. Here is an example URL:. The
location.search will return,
?project=js
We can use another useful interface called,
URLSearchParams along with
location.search to get the value of the query parameters.
let project = new URLSearchParams(location.search).get('project');
Output:
js
Read more about this topic from here.
This is not the end
This is not the end of the list. There are many many more. I have decided to push those to the git repo as mini examples as and when I encounter them.
atapas
/
js-tips-tricks
List of JavaScript tips and tricks I am learning everyday!
js-tips-tricks
List of JavaScript tips and tricks I am learning everyday!
- See it running here:
- Read this blog for more insights:
Many Thanks to all the
Stargazers who has supported this project with stars(
⭐)
What are your favorite JavaScript tips and tricks? How about you let us know about your favorites in the comment below?
If it was useful to you, please Like/Share so that, it reaches others as well. I am passionate about UI/UX and love sharing my knowledge through articles. Please visit my blog to know more.
You may also like,
- 10 lesser-known Web APIs you may want to use
- 10 useful HTML5 features, you may not be using
- 10 useful NPM packages you should be aware of (2020 edition)
Feel free to DM me on Twitter @tapasadhikary or follow.
Discussion
hey, awesome article. just a small correction
because of the null coalescing, if
person.age === 0, the variable
ageis
0.
Big thank for helping me to correct the typo.
Hadn't seen the
isRequiredidea before! Clever list :)
Thanks Elliot!
Number.isIntegermethod was designed to be used with numbers in order to test if it is an integer or not (NaN, Infinity, float...).
developer.mozilla.org/en-US/docs/W...
Using
typeof mynum === "number"is a better solution to test if a value is a number.
Note, that the Number rounding of floats in JavaScript is not accurate
Since this rounding affects integer representation, too, you should also consider
Number.isSafeInteger
developer.mozilla.org/en-US/docs/W...
Also please consider Integer boundaries in JS, that are represented by
Number.MAX_SAFE_INTEGERand
Number.MIN_SAFE_INTEGER
developer.mozilla.org/en-US/docs/W...
developer.mozilla.org/en-US/docs/W...
Edit: sry wanted to post on top level, somehow ended up as comment on yours
Awesome, thanks Jan!
I thought I would know everything in this list, but learned something new!!! Didn't know about the nullish coalescing operator, pretty useful! Thank you Tapas
Thanks, Oscar!
It's quite new, so it won't work everywhere yet.
?? operator not working for me. shows as below
SyntaxError: Unexpected token '?'
Which browser are you using? It should work.
Output, Age of Jack is 35
Also, check if there is a syntax error. You can paste the code as a comment.
Points 3,8,9 blew my mind away. Especially the user of a function to throw errors 🙏👏👏
Great.. Thanks, Venkatesh!
Really nice tricks to know. Nice work keep this good work going...
Good health
Thanks Sakar for the encouragements!
Awesome, thanks for sharing!
Thanks for reading and commenting!
Good tricks shared in this article I learnt some new also
Thanks Fahad!
I have used window.location.hash in pages where tab content need to be loaded initially based on url hash. I find this pretty useful
Cool, thanks for sharing Niroj!
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/atapas/my-favorite-javascript-tips-and-tricks-4jn4
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
by Maribel Duran
How I made my portfolio website blazing fast with Gatsby
If you are thinking of building a static site with React and want it to perform as fast as a cheetah, you should consider using GatsbyJS. I decided to try it out and was amazed with how easy it was to setup, deploy, and how fast the site loads now. Gatsby uses the best parts of other front end tools to make the development experience feel like you’re on vacation.
Performance Issues With Original Website
I had been meaning to optimize the images on my portfolio website, which was one of my first freeCodeCamp Frontend Development projects.
Ouch! A 33/100 Google optimization score was painful to see. Yup I needed some help from the optimization gods. My website contained at least 17 project screenshots. I didn’t want to have to compress each image, generate multiple sizes and resolutions of each image, and lazy-load them.
When I first created this website, the Bootstrap 3
img-responsive class took care of scaling the images to fit different screen sizes, but I didn’t think about the fact that it was still loading some of my screenshots that were around 1400 x 860 pixels on mobile devices!
My score was also low because I had not minified my CSS or setup browser caching for it, and was not async loading external CSS resources.
Gatsby To The Rescue
I really wanted to rebuild this project using React. I could have used create-react-app which provides an out-of-the box build script and development server, but this still didn’t take care of the long task of having to crop different image sizes for all of my images.
Fortunately I was listening to Syntax’s, “Why Static Site Generators are Awesome” episode, and they talked about a few static site generators on the StaticGen.com list. If you haven’t heard what static site generators do, they transform your site into a directory with a single HTML file and static assets. No database or server code needed.
Gatsby won me over due to the similarities it has with create-react-app, which includes hot reloading, easy dev environment setup, and a build script. Gatsby takes it further by offering server-side rendering, smart image loading, and dedication to performance.
Since Gatsby is built on the React, GraphQL, and Webpack stack, we can write our content as React components! Winning! Gatsby takes care of rendering at build time to the DOM as static HTML, CSS, and JavaScript.
Gatsby-image Component is BAE
So now to what I’ve really been wanting to share with you. Gatsby-image! Gatsby-image, is a React component that was designed to work with Gatsby’s GraphQL queries to completely optimize image loading for sites.
The approach is to use GraphQL queries to get images of the optimal size and then display them with the gatsby-image component.
How did I use this component to automatically create 3 thumbnails for each of my 17 project images? Magic! Not really, but it feels like it!
In my src/pages/index.js file, I queried all of the project images and gave it an alias of ProjectImgs. Since the queried data is now accessible through the data object as a prop, I was able to pass the projectImgData data (which is a node list of my project images) to my
<Projects /> component:
//imports
const HomePage = ({ data }) => { const siteTitle = data.site.siteMetadata.title; console.log(data.ProjectImgs); const { edges: projectImgData } = data.ProjectImgs; const { edges: iconImgData } = data.iconImgs; return ( <div> <Helmet title={siteTitle} link={[{ rel: "icon", type: "image/png", href: `${favicon}`}]} /> <Cover coverImg={data.coverImg} /> <div className="container-fluid main"> <Navigation /> <AboutMe profileImg={data.profileImg} iconImgs={iconImgData} /> <Projects projectImgs={projectImgData} /> <Contacts /> <Footer /> </div> </div> );};
export const query = graphql` query allImgsQuery { //additional queries ...
ProjectImgs: allFile( sort: { order: ASC, fields: [absolutePath] } filter: { relativePath: { regex: "/projects/.*.png/" } } ) { edges { node { relativePath name childImageSharp { sizes(maxWidth: 320) { ...GatsbyImageSharpSizes } } } } }
//additional queries... }`;
Note: I had some trouble getting my graphQL queries to work and had to do a little digging around to figure out how to query for multiple images within a folder. What helped me was looking at other portfolio sites made with Gatsby.
Using the console, we can see what
data.ProjectImgs returns to give you a better idea of what I am receiving from the query and what I am passing to my Projects component:
Console.log(data.ProjectImgs) returns an array of edges:
{edges: Array(17)}edges:(17) [{…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}]__proto__:Object{edges: Array(17)}
Extending one of the edges shows a node object that contains a childImageSharp property. This contains a sizes object which holds the image’s thumbnail sources. This sizes object is what we ultimately want to pass to our gatsby-image’s
<Img /> component.
Extending an edge to show the information in a node:
{edges: Array(17)} edges: Array(17) 0: node: childImageSharp: {sizes: {…}} name: "CamperLeaderboard" relativePath:"projects/CamperLeaderboard.png" __proto__:Object __proto__:Object 1:{node: {…}}
//more nodes...
In my
<Projects /> component, I receive the node list of project images as a prop and for each project, I extract the childImageSharp.sizes object (which is renamed to imageSizes), and pass it into the gatsby-im
age’s <Img /> component:
import React, { Component } from "react";import Img from "gatsby-image";//more imports...
class Projects extends Component { constructor(props) { super(props);
this.state = { selectedType: "front-end" };
this.onSelectChange = this.onSelectChange.bind(this); } onSelectChange(e) { this.setState({ selectedType: e.target.value }); }
render() { const projectImgs = this.props.projectImgs; const { selectedType } = this.state; return ( <section id="projects" className="section projects"> <h2 className="text-center">PROJECTS</h2> <div className="section-content"> <div className="subheader"> <FormGroup controlId="formControlsSelect"> ... </FormGroup> </div> <div className="project-list"> {projectList.map(project => { const isSelectedType = selectedType === project.type; const singleCardClass = classNames("single-card", { hide: !isSelectedType }); const image = projectImgs.find(n => { return n.node.relativePath === `projects/${project.img}`; }); const imageSizes = image.node.childImageSharp.sizes; return ( <a href={project.url} key={project.url} className={singleCardClass}; <div className="card-img"> <Img title={project.name} </div> <div className="blue-divider" /> <div className="card-info"> <h4 className="card-name">{project.name}</h4> <p>{project.description}</p> </div> </a> ); })} </div> </div> </section> ); }}
export default Projects;
And this is the end result:
That’s it! The
<Img /> component takes care of using the correct image size, creating the blur up effects, and lazy loading my project images, since they are located further down the screen. The above querying was a bit more complex than querying a single image.
If you’re new to GraphQL, below are a few resources that better explain how to use GraphQL queries and the gatsby-image component:
Hosting To Netlify Was a Breeze
Since Gatsby generates static files, you can pretty much use any hosting provider. I decided to change my hosting provider from Github Pages to Netlify. I had been hearing about how easy it is to deploy a website to Netlify and they were not lying. Their free tier provides awesome features that makes the deployment process and making a website secure a breeze. It provides one click HTTPS, global CDN, continuous deployment, and the list goes on.
The setup process was so simple. I logged into Netlify, clicked the “New site from Git” button on my dashboard, and chose the Git repository for this project. I configured the site to deploy from master and clicked “Deploy Site”. That was it! Netlify takes care of the build process and publishes it to the web.
As I mentioned, Netlify offers continuous deployment, so now whenever I push changes to my master branch on GitHub, this automatically triggers a new build on Netlify. Once the build is complete, my changes will be live on the web.
The Future Looks Bright
By rebuilding my website with Gatsby, not only did I learn about the different image optimization techniques for future projects, I also learned a bit about GraphQL, practiced my React skills, and took the opportunity to try out a new hosting provider.
I am really excited for the future of Gatsby and similar front end tools that remove the complexities of configuring environments and build tools. Instead, we can focus our energy and time on our code to build awesome stuff for our users.
If you liked this article, click the? below so other people will see it here on Medium.
Let’s be friends on Twitter. Happy Coding :)
|
https://laptrinhx.com/how-i-made-my-portfolio-website-blazing-fast-with-gatsby-1037597/
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Hi Friend,Today we will learn "How to implement form based authentication concepts in ASP.NET MVC Application". This concepts is basically used for authenticating the user credentials using Forms.In Previous post , i have already implemented two authentication concepts in asp.net application .which are as given below:-Step 2 :- Now select Empty template -->Choose ASPX view engine as shown in below image.
MVC form based authentication almost same as asp.net form based authentication.But there are little differences which i will explain it in this post.
There are some steps to implement this concepts in asp.net mvc application as given below:-
Step 1 :- First open your visual studio --> File -->Project --> Select ASP.NET MVC 3 OR MVC 4 Application --> write your project name --> press OK as shown below:-
- How to implement form based authentication in asp.net application
- How to implement windows based authentication in asp.net application
There are some steps to implement this concepts in asp.net mvc application as given below:-
Step 1 :- First open your visual studio --> File -->Project --> Select ASP.NET MVC 3 OR MVC 4 Application --> write your project name --> press OK as shown below:-
Step 3 :- Now open Solution Explorer Window (if not open)Right click on controller --> Add Controller (Authentication_controller) -->Add as shown below:-
Step 4 :- Now Right click on ActionResult login_form method in coding part --> Add a View (login_form.aspx) --> Choose view engine ASPX as shown below:-
Note:- This web form will be displayed on client browser ,when any client call the ActionResult Method. For more details follow below link.
How to pass data from controller to view in asp.net mvc application.
Step 5 :-
- Now open login_form.asp page (view engine) --> Design text boxes and button controls using coding (you can add Html controls from toolbox) as given below:-
<%@ Page <meta name="viewport" content="width=device-width" /> <title>login_form</title> </head> <body> <div> <form action="Authenticate_User" method ="post"> <span class="auto-style1"><strong> Simple Form Based Authentication in ASP.NET MVC Application</strong></span><br /> <br /> Enter User Name <input id="txt_id" name ="txt_id" type="text" /> <br /> Enter Password <input id="txt_Pass" name="txt_pass" type="password" /><br /> <input id="Submit1" type="submit" value="Login" /> </form> </div> </body> </html>
- Now go Design mode --> You will see following login layout as shown below:-
1.) <form action="Authenticate_User" method ="post"> ,this Action (Authenticate_User) will be invoked when any user fill the login credentials and press login button.
2.) This action = Authenticate_User will call the controller class ActionResult method = Authenticate_User .
3.) This action (Authenticate_User ) will execute and check the user credentials from controller class .if entered user name and password are correct then it returns home_page otherwise returns same page.
4.) Method = "post" is used to retrieve the data from the page.you can use Get Method also.But it less secure and slower than post. you can see, Facebook is also used Post Method .
Step 6 :- Now Write the codes in Authentication_Controller.cs file as given below:-
using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using System.Web.Security; namespace SIMPLE_ASP.NET_MVC__AUTHENTICATION.Controllers { public class Authentication_Controller : Controller { // // GET: /Authentication_/ public ActionResult login_form() { return View("login_form"); } // Add [Authorize] Attribute to Authenticate the below codes [Authorize] public ActionResult home_page() { return View("home_page"); } public ActionResult Authenticate_User() { //here we will check the users for forms authetication //first check whether username and passwordis correct or not if ((Request.Form["txt_id"] == "ram") && (Request.Form["txt_pass"] == "ram123")) { FormsAuthentication.SetAuthCookie(Request.Form["txt_id"], true); return View("home_page"); } else if ((Request.Form["txt_id"] == "neha") && (Request.Form["txt_pass"] == "neha123")) { FormsAuthentication.SetAuthCookie(Request.Form["txt_id"], true); return View("home_page"); } else if ((Request.Form["txt_id"] == "sujeet") && (Request.Form["txt_pass"] == "sujeet123")) { FormsAuthentication.SetAuthCookie(Request.Form["txt_id"], true); return View("home_page"); } else { //if entered data(username and passowrd) is incorrect,then return back to login_form return View("login_form"); } } } }Descriptions:-
1.) First include namespace using System.Web.Security; This is used in form authentication.
2.) If any user want to login then it first call the ActionResult = Authenticate_User.
3.) Here I have used a [Authorize] attribute.Which is used form validation purpose.Means nobody (anonymous user) can access the data by passing the Urls in Browser directly.
4.) Here, i have validated the user name and password using RequestForm methods in controller class as shown in above codes.
5.) Here i have set cookie also.
6.) In above codes, our application returns a view "home_page". I will create it in Step 7
Step 7 :- Now Right click on Authenticate_User action --> Add a view (home_page) as shown below:-
Step 8:- Now design your login_page as shown below:-
Step 9:- Now Add ActionResult home_page under [Authorize] attribute as given below.You can see this codes in Step 6.
[Authorize] public ActionResult home_page() { return View("home_page"); }Step 10:- Now open Web.Config file from Solution Explorer window --> Write the following codes as shown below:-
Note:- here
<forms loginUrl="~/Authentication_/login_form" timeout="2880" />
Means
<forms loginUrl="~/Controller-name/view-name" timeout="2880" />
Step 11:- Now Run the application (press F5) --> You will see following error as shown below:-
Step 12:- Now write the below syntax in your browser URLs--> press Enter as given below:-
localhost:1509/Controller-name/View-name
OR
localhost:1509/Authentication_/login_form
Step 13:- Suppose any users want to access the home_page directly like login_form through browser . they can't access the home_page through browser because this page comes under [Authorize] attribute tag. if user verify the user name and password then he can access the home_page information.This authentication mechanism is called Form -based authentication
Step 14:- Now Enter user name and password Correctly --> press Login button then you can access the home_page information as shown below:-
Step 15:- If you want to open login page without typing URLs in browser.Then you have change Routing table path . For more Information read below link
Routing table Real concepts in asp.net mvc application
For More...
Download
|
https://www.msdotnet.co.in/2015/05/simple-form-based-authentication-in.html
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Alamofire 5 Tutorial for iOS: Getting Started
In this Alamofire tutorial, you’ll build an iOS companion app to perform networking tasks, send request parameters, decode/encode responses and more.
Version
- Swift 5, iOS 13, Xcode 11
If you’ve been developing iOS apps for some time, you’ve probably needed to access data over the network. And for that you may have used Foundation’s
URLSession. This is fine and all, but sometimes it becomes cumbersome to use. And that’s where this Alamofire tutorial comes in!
Alamofire is a Swift-based, HTTP networking library. It provides an elegant interface on top of Apple’s Foundation networking stack that simplifies common networking tasks. Its features include chainable request/response methods, JSON and Codable decoding, authentication and more.
In this Alamofire tutorial, you’ll perform basic networking tasks including:
- Requesting data from a third-party RESTful API.
- Sending request parameters.
- Converting the response into JSON.
- Converting the response into a Swift data model via the Codable protocol.
Getting Started
To kick things off, use the Download Materials button at the top or bottom of this article to download the begin project.
The app for this tutorial is StarWarsOpedia, which provides quick access to data about Star Wars films as well as the starships used in those films.
Start by opening StarWarsOpedia.xcworkspace inside the begin project.
Build and run. You’ll see this:
It’s a blank slate now, but you’ll populate it with data soon!
Using the SW API
SW API is a free and open API that provides Star Wars data. It’s only updated periodically, but it’s a fun way to get to know Alamofire. Access the API at swapi.dev.
There are multiple endpoints to access specific data, but you’ll concentrate on and.
For more information, explore the Swapi documentation.
Understanding HTTP, REST and JSON
If you’re new to accessing third-party services over the internet, this quick explanation will help.
HTTP is an application protocol used to transfer data from a server to a client, such as a web browser or an iOS app. HTTP defines several request methods that the client uses to indicate the desired action. For example:
- GET: Retrieves data, such as a web page, but doesn’t alter any data on the server.
- HEAD: Identical to GET, but only sends back the headers and not the actual data.
- POST: Sends data to the server. Use this, for example, when filling a form and clicking submit.
- PUT: Sends data to the specific location provided. Use this, for example, when updating a user’s profile.
- DELETE: Deletes data from the specific location provided.
JSON stands for JavaScript Object Notation. It provides a straightforward, human-readable and portable mechanism for transporting data between systems. JSON has a limited number of data types to choose from: string, boolean, array, object/dictionary, number and null.
Back in the dark days of Swift, pre-Swift 4, you needed to use the
JSONSerialization class to convert JSON to data objects and vice-versa.
It worked well and you can still use it today, but there’s a better way now:
Codable. By conforming your data models to
Codable, you get nearly automatic conversion from JSON to your data models and back.
REST, or REpresentational State Transfer, is a set of rules for designing consistent web APIs. REST has several architecture rules that enforce standards like not persisting states across requests, making requests cacheable and providing uniform interfaces. This makes it easy for app developers to integrate the API into their apps without having to track the state of data across requests.
HTTP, JSON and REST comprise a good portion of the web services available to you as a developer. Trying to understand how every piece works can be overwhelming. That’s where Alamofire comes in.
Why Use Alamofire?
You may be wondering why you should use Alamofire. Apple already provides
URLSession and other classes for accessing content via HTTP, so why add another dependency to your code base?
The short answer is that while Alamofire is based on URLSession, it obscures many of the difficulties of making networking calls, freeing you to concentrate on your business logic. You can access data on the internet with little effort, and your code will be cleaner and easier to read.
There are several major functions available with Alamofire:
- AF.upload: Upload files with multi-part, stream, file or data methods.
- AF.download: Download files or resume a download already in progress.
- AF.request: Other HTTP requests not associated with file transfers.
These Alamofire methods are global, so you don’t have to instantiate a class to use them. Underlying Alamofire elements include classes and structs like
SessionManager,
DataRequest and
DataResponse. However, you don’t need to fully understand the entire structure of Alamofire to start using it.
Enough theory. It’s time to start writing code!
Requesting Data
Before you can start making your awesome app, you need to do some setup.
Start by opening MainTableViewController.swift. Under
import UIKit, add the following:
import Alamofire
This allows you to use Alamofire in this view controller. At the bottom of the file, add:
extension MainTableViewController { func fetchFilms() { // 1 let request = AF.request("") // 2 request.responseJSON { (data) in print(data) } } }
Here’s what’s happening with this code:
- Alamofire uses namespacing, so you need to prefix all calls that you use with
AF.
request(_:method:parameters:encoding:headers:interceptor:)accepts the endpoint for your data. It can accept more parameters, but for now, you’ll just send the URL as a string and use the default parameter values.
- Take the response given from the request as JSON. For now, you simply print the JSON data for debugging purposes.
Finally, at the end of
viewDidLoad(), add:
fetchFilms()
This triggers the Alamofire request you just implemented.
Build and run. At the top of the console, you’ll see something like this:
success({ count = 7; next = "<null>"; previous = "<null>"; results = ({...}) })
In a few very simple lines, you’ve fetched JSON data from a server. Good job!
Using a Codable Data Model
But, how do you work with the JSON data returned? Working with JSON directly can be messy due to its nested structure, so to help with that, you’ll create models to store your data.
In the Project navigator, find the Networking group and create a new Swift file in that group named Film.swift.
Then, add the following code to it:
struct Film: Decodable { let id: Int let title: String let openingCrawl: String let director: String let producer: String let releaseDate: String let starships: [String] enum CodingKeys: String, CodingKey { case id = "episode_id" case title case openingCrawl = "opening_crawl" case director case producer case releaseDate = "release_date" case starships } }
With this code, you’ve created the data properties and coding keys you need to pull data from the API’s film endpoint. Note how the struct is
Decodable, which makes it possible to turn JSON into the data model.
The project defines a protocol —
Displayable — to simplify showing detailed information later in the tutorial. You must make
Film conform to it. Add the following at the end of Film.swift:
extension Film: Displayable { var titleLabelText: String { title } var subtitleLabelText: String { "Episode \(String(id))" } var item1: (label: String, value: String) { ("DIRECTOR", director) } var item2: (label: String, value: String) { ("PRODUCER", producer) } var item3: (label: String, value: String) { ("RELEASE DATE", releaseDate) } var listTitle: String { "STARSHIPS" } var listItems: [String] { starships } }
This extension allows the detailed information display’s view controller to get the correct labels and values for a film from the model itself.
In the Networking group, create a new Swift file named Films.swift.
Add the following code to the file:
struct Films: Decodable { let count: Int let all: [Film] enum CodingKeys: String, CodingKey { case count case all = "results" } }
This struct denotes a collection of films. As you previously saw in the console, the endpoint swapi.dev/api/films returns four main values:
count,
previous and
results. For your app, you only need
count and
results, which is why your struct doesn’t have all properties.
The coding keys transform
results from the server into
all. This is because
Films.results doesn’t read as nicely as
Films.all. Again, by conforming the data model to
Decodable, Alamofire will be able to convert the JSON data into your data model.
Codable, see our tutorial on Encoding and Decoding in Swift.
Back in MainTableViewController.swift, in
fetchFilms(), replace:
request.responseJSON { (data) in print(data) }
With the following:
request.responseDecodable(of: Films.self) { (response) in guard let films = response.value else { return } print(films.all[0].title) }
Now, rather than converting the response into JSON, you’ll convert it into your internal data model,
Films. For debugging purposes, you print the title of the first film retrieved.
Build and run. In the Xcode console, you’ll see the name of the first film in the array. Your next task is to display the full list of movies.
Method Chaining
Alamofire uses method chaining, which works by connecting the response of one method as the input of another. This not only keeps the code compact, but it also makes your code clearer.
Give it a try now by replacing all of the code in
fetchFilms() with:
AF.request("") .validate() .responseDecodable(of: Films.self) { (response) in guard let films = response.value else { return } print(films.all[0].title) }
This single line not only does exactly what took multiple lines to do before, but you also added validation.
From top to bottom, you request the endpoint, validate the response by ensuring the response returned an HTTP status code in the range 200–299 and decode the response into your data model. Nice! :]
Setting up Your Table View
Now, at the top of
MainTableViewController, add the following:
var items: [Displayable] = []
You’ll use this property to store the array of information you get back from the server. For now, it’s an array of films but there’s more coolness coming soon! In
fetchFilms(), replace:
print(films.all[0].title)
With:
self.items = films.all self.tableView.reloadData()
This assigns all retrieved films to
items and reloads the table view.
To get the table view to show the content, you must make some further changes. Replace the code in
tableView(_:numberOfRowsInSection:) with:
return items.count
This ensures that you show as many cells as there are films.
Next, in
tableView(_:cellForRowAt:) right below the declaration of
cell, add the following lines:
let item = items[indexPath.row] cell.textLabel?.text = item.titleLabelText cell.detailTextLabel?.text = item.subtitleLabelText
Here, you set up the cell with the film name and episode ID, using the properties provided via
Displayable.
Build and run. You’ll see a list of films:
Now you’re getting somewhere! You’re pulling data from a server, decoding it into an internal data model, assigning that model to a property in the view controller and using that property to populate a table view.
But, as wonderful as that is, there’s a small problem: When you tap one of the cells, you go to a detail view controller which isn’t updating properly. You’ll fix that next.
Updating the Detail View Controller
First, you’ll register the selected item. Under
var items: [Displayable] = [], add:
var selectedItem: Displayable?
You’ll store the currently-selected film to this property.
Now, replace the code in
tableView(_:willSelectRowAt:) with:
selectedItem = items[indexPath.row] return indexPath
Here, you’re taking the film from the selected row and saving it to
selectedItem.
Now, in
prepare(for:sender:), replace:
destinationVC.data = nil
With:
destinationVC.data = selectedItem
This sets the user’s selection as the data to display.
Build and run. Tap any of the films. You should see a detail view that is mostly complete.
Fetching Multiple Asynchronous Endpoints
Up to this point, you’ve only requested films endpoint data, which returns an array of film data in a single request.
If you look at
Film, you’ll see
starships, which is of type
[String]. This property does not contain all of the starship data, but rather an array of endpoints to the starship data. This is a common pattern programmers use to provide access to data without providing more data than necessary.
For example, imagine that you never tap “The Phantom Menace” because, you know, Jar Jar. It’s a waste of resources and bandwidth for the server to send all of the starship data for “The Phantom Menace” because you may not use it. Instead, the server sends you a list of endpoints for each starship so that if you want the starship data, you can fetch it.
Creating a Data Model for Starships
Before fetching any starships, you first need a new data model to handle the starship data. Your next step is to create one.
In the Networking group, add a new Swift file. Name it Starship.swift and add the following code:
struct Starship: Decodable { var name: String var model: String var manufacturer: String var cost: String var length: String var maximumSpeed: String var crewTotal: String var passengerTotal: String var cargoCapacity: String var consumables: String var hyperdriveRating: String var starshipClass: String var films: [String] enum CodingKeys: String, CodingKey { case name case model case manufacturer case cost = "cost_in_credits" case length case maximumSpeed = "max_atmosphering_speed" case crewTotal = "crew" case passengerTotal = "passengers" case cargoCapacity = "cargo_capacity" case consumables case hyperdriveRating = "hyperdrive_rating" case starshipClass = "starship_class" case films } }
As with the other data models, you simply list all the response data you want to use, along with any relevant coding keys.
You also want to be able to display information about individual ships, so
Starship must conform to
Displayable. Add the following at the end of the file:
extension Starship: Displayable { var titleLabelText: String { name } var subtitleLabelText: String { model } var item1: (label: String, value: String) { ("MANUFACTURER", manufacturer) } var item2: (label: String, value: String) { ("CLASS", starshipClass) } var item3: (label: String, value: String) { ("HYPERDRIVE RATING", hyperdriveRating) } var listTitle: String { "FILMS" } var listItems: [String] { films } }
Just like you did with
Film before, this extension allows
DetailViewController to get the correct labels and values from the model itself.
Fetching the Starship Data
To fetch the starship data, you’ll need a new networking call. Open DetailViewController.swift and add the following import statement to the top:
import Alamofire
Then at the bottom of the file, add:
extension DetailViewController { // 1 private func fetch<T: Decodable & Displayable>(_ list: [String], of: T.Type) { var items: [T] = [] // 2 let fetchGroup = DispatchGroup() // 3 list.forEach { (url) in // 4 fetchGroup.enter() // 5 AF.request(url).validate().responseDecodable(of: T.self) { (response) in if let value = response.value { items.append(value) } // 6 fetchGroup.leave() } } fetchGroup.notify(queue: .main) { self.listData = items self.listTableView.reloadData() } } }
Here is what’s happening in this code:
- You may have noticed that
Starshipcontains a list of films, which you’ll want to display. Since both
Filmand
Starshipare
Displayable, you can write a generic helper to perform the network request. It needs only to know the type of item its fetching so it can properly decode the result.
- You need to make multiple calls, one per list item, and these calls will be asynchronous and may return out of order. To handle them, you use a dispatch group so you’re notified when all the calls have completed.
- Loop through each item in the list.
- Inform the dispatch group that you are entering.
- Make an Alamofire request to the starship endpoint, validate the response, and decode the response into an item of the appropriate type.
- In the request’s completion handler, inform the dispatch group that you’re leaving.
- Once the dispatch group has received a
leave()for each
enter(), you ensure you’re running on the main queue, save the list to
listDataand reload the list table view.
Now that you have your helper built, you need to actually fetch the list of starships from a film. Add the following inside your extension:
func fetchList() { // 1 guard let data = data else { return } // 2 switch data { case is Film: fetch(data.listItems, of: Starship.self) default: print("Unknown type: ", String(describing: type(of: data))) } }
Here’s what this does:
- Since
datais optional, ensure it’s not
nilbefore doing anything else.
- Use the type of
datato decide how to invoke your helper method.
- If the data is a
Film, the associated list is of starships.
Now that you’re able to fetch the starships, you need to be able to display it in your app. That’s what you’ll do in your next step.
Updating Your Table View
In
tableView(_:cellForRowAt:), add the following before
return cell:
cell.textLabel?.text = listData[indexPath.row].titleLabelText
This code sets the cell’s
textLabel with the appropriate title from your list data.
Finally, add the following at the end of
viewDidLoad():
fetchList()
Build and run, then tap any film. You’ll see a detail view that’s fully populated with film data and starship data. Neat, right?
The app is starting to look pretty solid. However, look at the main view controller and notice that there’s a search bar that isn’t working. You want to be able to search for starships by name or model, and you’ll tackle that next.
Sending Parameters With a Request
For the search to work, you need a list of the starships that match the search criteria. To accomplish this, you need to send the search criteria to the endpoint for getting starships.
Earlier, you used the films’ endpoint,, to get the list of films. You can also get a list of all starships with the endpoint.
Take a look at the endpoint, and you’ll see a response similar to the film’s response:
success({ count = 37; next = "<null>"; previous = "<null>"; results = ({...}) })
The only difference is that this time, the results data is a list of all starships.
Alamofire’s
request can accept more than just the URL string that you’ve sent so far. It can also accept an array of key/value pairs as parameters.
The swapi.dev API allows you to send parameters to the starships endpoint to perform a search. To do this, you use a key of
search with the search criteria as the value.
But before you dive into that, you need to set up a new model called
Starships so that you can decode the response just like you do with the other responses.
Decoding Starships
Create a new Swift file in the Networking group. Name it Starships.swift and enter the following code:
struct Starships: Decodable { var count: Int var all: [Starship] enum CodingKeys: String, CodingKey { case count case all = "results" } }
Like with
Films you only care about
count and
results.
Next, open MainTableViewController.swift and, after
fetchFilms(), add the following method for searching for starships:
func searchStarships(for name: String) { // 1 let url = "" // 2 let parameters: [String: String] = ["search": name] // 3 AF.request(url, parameters: parameters) .validate() .responseDecodable(of: Starships.self) { response in // 4 guard let starships = response.value else { return } self.items = starships.all self.tableView.reloadData() } }
This method does the following:
- Sets the URL that you’ll use to access the starship data.
- Sets the key-value parameters that you’ll send to the endpoint.
- Here, you’re making a request like before, but this time you’ve added parameters. You’re also performing a
validateand decoding the response into
Starships.
- Finally, once the request completes, you assign the list of starships as the table view’s data and reload the table view.
Executing this request results in a URL{name} where
{name} is the search query passed in.
Searching for Ships
Start by adding the following code to
searchBarSearchButtonClicked(_:):
guard let shipName = searchBar.text else { return } searchStarships(for: shipName)
This code gets the text typed into the search bar and calls the new
searchStarships(for:) method you just implemented.
When the user cancels a search, you want to redisplay the list of films. You could fetch it again from the API, but that’s a poor design practice. Instead, you’re going to cache the list of films to make displaying it again quick and efficient. Add the following property at the top of the class to cache the list of films:
var films: [Film] = []
Next, add the following code after the
guard statement in
fetchFilms():
self.films = films.all
This saves away the list for films for easy access later.
Now, add the following code to
searchBarCancelButtonClicked(_:):
searchBar.text = nil searchBar.resignFirstResponder() items = films tableView.reloadData()
Here, you remove any search text entered, hide the keyboard using
resignFirstResponder() and reload the table view, which causes it to show films again.
Build and run. Search for wing. You’ll see all the ships with the word “wing” in their name or model.
That’s great! But, it’s not quite complete. If you tap one of the ships, the list of films that ship appears in is empty. This is easy to fix thanks to all the work you did before. There’s even a huge hint in the debug console!
Display a Ship’s List of Films
Open DetailViewController.swift and find
fetchList(). Right now, it only knows how to fetch the list associated with a film. You need to fetch the list for a starship. Add the following just before the
default: label in the
switch statement:
case is Starship: fetch(data.listItems, of: Film.self)
This tells your generic helper to fetch a list of films for a given starship.
Build and run. Search for a starship. Select it. You’ll see the starship details and the list of films it appeared in.
You now have a fully functioning app! Congratulations.
Where to Go From Here?
You can download the completed project using the Download Materials button at the top or bottom of this article.
While building your app, you’ve learned a lot about Alamofire’s basics. You learned that Alamofire can make networking calls with very little setup and how to make basic calls using the request function by sending just the URL string.
Also, you learned to make more complex calls to do things like searching by sending parameters.
You learned how to use request chaining and request validation, how to convert the response into JSON and how to convert the response data into a custom data model.
This article covered the very basics. You can take a deeper dive by looking at the documentation on the Alamofire site at.
I highly suggest learning more about Apple’s URLSession, which Alamofire uses under the hood:
I hope you enjoyed this tutorial. Please share any comments or questions about this article in the forum discussion below!
|
https://www.raywenderlich.com/6587213-alamofire-5-tutorial-for-ios-getting-started
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
One more time, I've a problem with accents
One more time, I've a problem with accents
I use Pythonista 3, in Python 3
I compare a file name (got with ftplib nlst) to a file name built with alert.dialog text field.
The file name is Xxxé.
When I print on comsole or display in a ui.label, each variable shows Xxxé, but when I compare both variables, they are different.
When I loop on each character to print it, I get
Xxxe ́ for the ftp file name
Xxxé for the other
I really need help to understand and to solve my problem
Thanks in advance
You could try normalizing both strings before comparing them, using
unicodedata.normalize, e.g.
import unicodedata # ... filename = unicodedata.normalize('NFC', filename) dlg_text = unicodedata.normalize('NFC', dlg_text) if filename == dlg_text: #...
Thanks a lot, that solves my problem, but I don't understand the kind /encoding a of a string which contains/prints/displays é but prints e' when I loop on each character!
@cvp Unicode has multiple ways of representing accented characters. Most accented characters have their own code point, for example é is U+00E9 (LATIN SMALL LETTER E WITH ACUTE). But almost all accents also exist as separate "combining" characters, which you can place after another character to add an accent to it. This means that you can also write é as U+0065 (LATIN SMALL LETTER E) followed by U+0301 (COMBINING ACUTE ACCENT).
Both variants of é look the same when you display them, and most systems even treat "split up" characters as one character in text fields and such, so if you delete a "split up" character it removes the entire character and not just the accent. But if you look at the string character by character, you'll notice that they are actually different.
That's why Unicode defines four forms of "normalization" for strings. Form "NFC" combines all letters and their accents into a single character if possible ("composition"), and form "NFD" splits them into separate letter and combining accents if possible ("decomposition"). There are also the "compatibility" forms "NFKC" and "NFKD", which do a few additional conversions. (Look up "Unicode equivalence" on Wikipedia if you want more details.)
In most cases NFC is all you need, sometimes NFKC can be useful, and NFD and NFKD are almost never useful. But Apple's HFS+ file system (also called Mac OS Extended) uses the NFD form for file names, which means that if your FTP server is a Mac, it will give you decomposed characters, instead of normal composed characters like most other programs and services.
Thanks for your clear explanation.
Coming from IBM world, I had always used the EBCDIC code, where all machines "speak" the same language.
Thus, I'm still afraid that I could use a code to send a file to my Mac or NAS and that the file name or folder name would be unreadable by another system.
Thanks, I'll have a look
You're right. Just checked and seems strange. Thanks
|
https://forum.omz-software.com/topic/3490/one-more-time-i-ve-a-problem-with-accents/5
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
TL;DR: If you find yourself in a hurry, and you just want to quickly check how to setup ASP.NET Identity in a MVC project using StructureMap you can just check the example project in github
Although ASP.NET project templates are a very useful way to start a new project, some people don’t feel comfortable using them. There are many reasons for this, some people don’t like the project folder structure, others simply like to create the whole project themselves instead of starting from a sample application.
This was at some point the most requested feature in the ASP.NET MVC3 Uservoice site.
If you do decide to start from an empty project template you’ll have to setup everything yourself. That’s not necessarily a bad thing. It provides you the opportunity to setup the project in whichever way you prefer and you can for example use an IoC container to setup ASP.NET Identity which the focus of this blog post.
The main reason why I decided to write this down was because I’ve heard the way ASP.NET Identity is setup being described as “poor man’s dependency injection”, namely the use of
.CreatePerOwinContext<T> that you’ll find on the one of the partial class files for
Startup.cs in the default template for MVC with Individual User Accounts.
The video where I heard this was ASP.NET Identity Security. The reason why it’s described as “poor man’s DI” is because, instead of actual dependency injection, the dependencies are created in the
Startup class, stored as
Owin properties (that’s what
CreatePerOwinContext does) and then retrieved in the controller (e.g.
HttpContext.GetOwinContext().GetUserManager<ApplicationUserManager>()).
Anyway, there’s no reason to settle for a “poor man’s DI”, so lets use StructureMap instead.
Starting from scratch
If you start with the Empty project template File->New->Project->Web->Asp.Net Web Application and then:
Because ASP.NET Identity has dependencies on Owin, and because you probably want to use the owin cookie middleware to handle signing in users, start by installing the
Microsoft.Owin.Host.SystemWeb NuGet package. You also need to install the ASP.NET Identity NuGet packages. Here’s the list that you can just copy&paste to the package manager console.
Here we’ll be using the EntityFramework version of the user store for ASP.NET Identity (user store italicized because the interface that exposes the functionality to actually save and retrieve users in ASP.NET Identity is named
IUserStore).
install-package Microsoft.Owin.Host.SystemWeb install-package Microsoft.AspNet.Identity.Core install-package Microsoft.AspNet.Identity.Owin install-package Microsoft.AspNet.Identity.EntityFramework
And of course, install the StructureMap package for MVC:
install-package StructureMap.MVC5
Create your
Owin Startup class. Easiest way is to just right click on your project->Add New Item->Owin Startup Class.
You should add your own connection string in web.config for the database that ASP.NET Identity will be using. For example, if you want to use LocalDb and have the database called IdentitySetupWithStructureMap your connection string could look like this:
<connectionStrings> <add name="IdentitySetupWithStructureMap" providerName="System.Data.SqlClient" connectionString="Data Source=(localdb)\ProjectsV12;Integrated Security=SSPI;MultipleActiveResultSets=True;Initial Catalog=IdentitySetupWithStructureMap;AttachDbFileName=|DataDirectory|\IdentitySetupWithStructureMap.mdf" /> </connectionStrings>
This might not be the right connection string for you, for example your LocalDb might have a different name (you can check yours in Visual Studio’s SQL Server Object Explorer). If you are unsure check this great resource to help you setup your connection string.
StructureMap configuration
Before we dive in the StructureMap configuration for ASP.NET identity it helps to know how you would do it manually, i.e., actually new up everything you need.
The main class in ASP.NET Identity, and probably the only one you need to use is
UserManager<T>, where T will be a subclass of
IdentityUser (if you derive from
IdentityUser you can add your own properties to the user that gets saved, if you did not know about this you should check The good, the bad and the ugly of ASP.NET Identity.
UserManager<T> has a dependency on
IUserStore<T>. The implementation we are going to use of
IUserStore<T>,
UserStore<T>, has a dependency on
DbContext (this is the EntityFramework implementation we’ve added through NuGet:
Microsoft.AspNet.Identity.EntityFramework).
So “manually” you could do this:
var dbContext = new IdentityDbContext("IdentitySetupWithStructureMap"); var userStore = new UserStore<IdentityUser>(dbContext); var userManager = new UserManager<IdentityUser>(userStore);
And then you could setup your userManager, for example configure the password validator to only allow usernames with more than 6 characters:
userManager.PasswordValidator = new PasswordValidator { RequiredLength = 6 };
Now that you know how to do it completely manually, if you can pause for a brief moment, go compare this with what gets generated with the default project template when you choose individual user accounts, let me know what you think in the comments.
There’s only four simple things that you need to know to understand the StructureMap configuration for this, they are:
- Given a constructor that expects a certain class as a parameter, specify which subclass to use
- How do you set properties using StructureMap (for the
UserValidator,
PasswordValidator, etc in
UserManager)
- How do select a constructor that is not the default constructor that StructureMap would normally use (one that is not the most specific, i.e., not one with the most dependencies).
- How to set the value of
value typesin the constructor, for example, the value of a string parameter (e.g.
connectionString).
Lets start with the setup for the
UserManager<IdentityUser>‘s dependency,
IUserStory<IdentityUser> (this code belongs inside the StructureMap Registry you are using to setup MVC, usually it’s called
DefaultRegistry):
For<IUserStore<IdentityUser>>() .Use<UserStore<IdentityUser>>() .Ctor<DbContext>() .Is<IdentityDbContext>(cfg => cfg.SelectConstructor(() => new IdentityDbContext("value here does not matter, it's only for selecting the right ctor")) .Ctor<string>() .Is("IdentitySetupWithStructureMap"));
The
For
Use part is just how you map an interface to an implementation in StructureMap. Then
.Ctor<DbContext>().Is<IdentityDbContext>(...) is basically saying, for the
DbContext dependency use its subclass
IdentityDbContext. However
IdentityDbContext has many constructors, and the one we want to use is not the one with most dependencies (which would be the one StructureMap would use).
We want to use the
IdentityDbContext constructor that expects a single string parameter, the connection string. We use
.SelectConstructor(() => new IdentityDbContext("value does not matter, this is just to let the compiler select the ctor") to do that, and then we use
.Ctor<string> to specify the value we want for the string parameter, in this case the value of the connection string:
.Is("IdentitySetupWithStructureMap") (if the constructor had several string parameters you could pass the name of the parameter to
.Ctor, e.g.
.Ctor<string>("nameOrConnectionString") to disambiguate).
That takes care of
UserManager<IdentityUser>‘s dependencies. We still need, however, to set the
PasswordValidator and
UserValidator.
PasswordValidator is the class that defines the minimum password requirements, for example that the minimum password length is 6 characters.
UserValidator is the class that checks if the username is valid, for example if you require that the username is an unique email address. There are more properties that you can setup on
UserManager, however the way they are setup is very similar to these two.
Because
UserManager does not implement an interface we have to use the class directly in our controllers. The way we configure a concrete class in StructureMap is by using the
ForConcreteType<T> method, in our case:));
Here’s the full listing for the Registry:
public DefaultRegistry() { Scan( scan => { scan.TheCallingAssembly(); scan.WithDefaultConventions(); scan.With(new ControllerConvention()); }); For<IUserStore<IdentityUser>>() .Use<UserStore<IdentityUser>>() .Ctor<DbContext>() .Is<IdentityDbContext>(cfg => cfg.SelectConstructor(() => new IdentityDbContext("connection string")).Ctor<string>().Is("IdentitySetupWithStructureMap"));)); }
And there you go, with this configuration you can now add
UserManager to your controllers as a dependency, for example:
public class HomeController : Controller { private readonly UserManager<IdentityUser> _userManager; public HomeController(UserManager<IdentityUser> userManager) { _userManager = userManager; } ...
I’ve added an example project in Github that uses this setup and a few extra bits. It allows you to create, list and delete users, check it out by cloning the repository
git clone
Drop me a line in the comments if you are interested in a summary (cheat sheet) of how to do the most common operations with ASP.NET Identity, for example, create users, delete users, reset a user’s password without requiring email messages, setting up email messages for email confirmation and password reset, etc.
|
https://www.blinkingcaret.com/2016/03/09/setup-asp-net-identity-using-structuremap/
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
C++ Constructor and Destructor Example Program
Hello Everyone!
In this tutorial, we will learn how to demonstrate the concept of Constructor and Destructor in the C++ programming language.
To understand the concept of Constructor and Destructor in CPP, we will recommend you to visit here: C++ Constructor and Destructor, where we have explained it from scratch.
Code:
#include <iostream> using namespace std; //Rectangle class to demonstrate the working of Constructor and Destructor in CPP class Rectangle { public: float length, breadth; //Declaration of the default Constructor of the Rectangle Class public: Rectangle() { cout << "\n\n****** Inside the Constructor ******* \n\n"; length = 2; breadth = 4; } //Declaration of the Destructor of the Rectangle Class public: ~Rectangle() { cout << "\n\n****** Inside the Destructor ******* \n\n"; } }; //Defining the main method to access the members of the class int main() { cout << "\n\nWelcome to Studytonight :-)\n\n\n"; cout << " ===== Program to demonstrate the concept of Constructor and Destructor in CPP ===== \n\n"; cout << "\nCalling the default Constructor of the Rectangle class to initialize the object.\n\n"; //Declaring the Class object to access the class members Rectangle rect; cout << "\nThe Length of the Rectangle set by the Constructor is = " << rect.length << "\n\n"; cout << "\nThe Breadth of the Rectangle set by the Constructor is = " << rect.breadth << "\n\n"; return 0; }
Output:
We hope that this post helped you develop better understanding of the concept of Contructor and Destructor in C++. For any query, feel free to reach out to us via the comments section down below.
Keep Learning : )
|
https://studytonight.com/cpp-programs/cpp-constructor-and-destructor-example-program
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Masonite 1.4 brings several new features and a few new files. This is a very simple upgrade and most of the changes were done in the pip package of Masonite. The upgrade from 1.3 to 1.4 should take less than 10 minutes
This requirement file has the
masonite>=1.3,<=1.3.99 requirement. This should be changed to
masonite>=1.4,<=1.4.99. You should also run
pip install --upgrade -r requirements.txt to upgrade the Masonite pip package.
There is now a new cache folder under
bootstrap/cache which will be used to store any cached files you use with the caching feature. Simply create a new
bootstrap/cache folder and optionally put a
.gitignore file in it so your source control will pick it up.
Masonite 1.4 brings a new
config/cache.py and
config/broadcast.py files. These files can be found on the GitHub page and can be copied and pasted into your project. Take a look at the new config/cache.py file and the config/broadcast.py file. Just copy and paste those configuration files into your project.
Masonite comes with a lot of out of the box functionality and nearly all of it is optional but Masonite 1.4 ships with three new providers. Most Service Providers are not ran on every request and therefore does not add significant overhead to each request. To add these 3 new Service Providers simple add these to the bottom of the list of framework providers:
PROVIDERS = [# Framework Providers...'masonite.providers.HelpersProvider.HelpersProvider','masonite.providers.QueueProvider.QueueProvider',# 3 New Providers in Masonite 1.4'masonite.providers.BroadcastProvider.BroadcastProvider','masonite.providers.CacheProvider.CacheProvider','masonite.providers.CsrfProvider.CsrfProvider',# Third Party Providers# Application Providers'app.providers.UserModelProvider.UserModelProvider','app.providers.MiddlewareProvider.MiddlewareProvider',]
Note however that if you add the
CsrfProvider then you will also need the CSRF middleware which is new in Masonite 1.4. Read the section below to add the middleware
Masonite 1.4 adds CSRF protection. So anywhere there is any POST form request, you will need to add the
{{ csrf_field }} to it. For example:
<form action="/dashboard" method="POST">{{ csrf_field }}<input type="text" name="first_name"></form>
This type of protection prevents cross site forgery. In order to activate this feature, we also need to add the CSRF middleware. Copy and paste the middleware into your project under the
app/http/middleware/CsrfMiddleware.py file.
Lastly, put that middleware into the
HTTP_MIDDLEWARE list inside
config/middleware.py like so:
HTTP_MIDDLEWARE = ['app.http.middleware.LoadUserMiddleware.LoadUserMiddleware','app.http.middleware.CsrfMiddleware.CsrfMiddleware',]
There has been a slight change in the constants used in the config/database.py file. Mainly just for consistency and coding standards. Your file may have some slight changes but this change is optional. If you do make this change, be sure to change any places in your code where you have used the Orator Query Builder. For example any place you may have:
from config import databasedatabase.db.table(...)
should now be:
from config import databasedatabase.DB.table(...)
with this change
|
https://docs.masoniteproject.com/upgrade-guide/masonite-1.3-to-1.4
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
Utility functions for use by authentication GUI widgets or standalone apps. More...
#include <qgsauthguiutils.h>
Utility functions for use by authentication GUI widgets or standalone apps.
Definition at line 29 of file qgsauthguiutils.h.
Clear all cached authentication configs for session.
Definition at line 158 of file qgsauthguiutils.cpp.
Clear the currently cached master password (not its hash in database)
Definition at line 92 of file qgsauthguiutils.cpp.
Completely clear out the authentication database (configs and master password)
Definition at line 195 of file qgsauthguiutils.cpp.
Color a widget via a stylesheet if a file path is found or not.
Definition at line 238 of file qgsauthguiutils.cpp.
Open file dialog for auth associated widgets.
Definition at line 252 of file qgsauthguiutils.cpp.
Green color representing valid, trusted, etc.
certificate
Definition at line 30 of file qgsauthguiutils.cpp.
Green text stylesheet representing valid, trusted, etc.
certificate
Definition at line 50 of file qgsauthguiutils.cpp.
Verify the authentication system is active, else notify user.
Definition at line 65 of file qgsauthguiutils.cpp.
Orange color representing loaded component, but not stored in database.
Definition at line 35 of file qgsauthguiutils.cpp.
Orange text stylesheet representing loaded component, but not stored in database.
Definition at line 55 of file qgsauthguiutils.cpp.
Red color representing invalid, untrusted, etc.
certificate
Definition at line 40 of file qgsauthguiutils.cpp.
Red text stylesheet representing invalid, untrusted, etc.
certificate
Definition at line 60 of file qgsauthguiutils.cpp.
Remove all authentication configs.
Definition at line 168 of file qgsauthguiutils.cpp.
Reset the cached master password, updating its hash in authentication database and reseting all existing configs to use it.
Definition at line 114 of file qgsauthguiutils.cpp.
Sets the cached master password (and verifies it if its hash is in authentication database)
Definition at line 77 of file qgsauthguiutils.cpp.
Yellow color representing caution regarding action.
Definition at line 45 of file qgsauthguiutils.cpp.
|
https://api.qgis.org/2.12/classQgsAuthGuiUtils.html
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
Package com.mongodb.operation
Class CreateViewOperation
- java.lang.Object
- com.mongodb.operation.CreateViewOperation
- All Implemented Interfaces:
AsyncWriteOperation<Void>,
WriteOperation<Void>
@Deprecated public class CreateViewOperation extends Object implements AsyncWriteOperation<Void>, WriteOperation<Void>Deprecated.An operation to create a view.
Constructor Detail
CreateViewOperation
public CreateViewOperation(String databaseName, String viewName, String viewOn, List<BsonDocument> pipeline, WriteConcern writeConcern)Deprecated.Construct a new instance.
- Parameters:
databaseName- the name of the database for the operation, which may not be null
viewName- the name of the collection to be created, which may not be null
viewOn- the name of the collection or view that backs this view, which may not be null
pipeline- the aggregation pipeline that defines the view, which may not be null
writeConcern- the write concern, which may not be null
Method Detail
getDatabaseName
public String getDatabaseName()Deprecated.Gets the database name
- Returns:
- the database name
getViewName
public String getViewName()Deprecated.Gets the name of the view to create.
- Returns:
- the view name
getViewOn
public String getViewOn()Deprecated.Gets the name of the collection or view that backs this view.
- Returns:
- the name of the collection or view that backs this view
getPipeline
public List<BsonDocument> getPipeline()Deprecated.Gets the pipeline that defines the view.
- Returns:
- the pipeline that defines the view
getWriteConcern
public WriteConcern getWriteConcern()Deprecated.Gets the write concern.
- Returns:
- the write concern
getCollation
public Collation getCollation()Deprecated.Gets the default collation for the view
- Returns:
- the collation, which may be null
collation
public CreateViewOperation collation(Collation collation)Deprecated.Sets the default collation for the view.
- Parameters:
collation- the collation, which may be null
- Returns:
- this
|
http://mongodb.github.io/mongo-java-driver/3.12/javadoc/com/mongodb/operation/CreateViewOperation.html
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
Bug #1555
Inaccessible 'io' parameter in Test::Unit::UI::Console::TestRunner.initialize(x,y,io)
Description
=begin
Console::TestRunner supports an io parameter to send output to an IO object, but Autorunner has no way to control it!
test/unit/ui/console/testrunner.rb contains the Test::Unit::UI::Console::TestRunnner class with a 3-parm initalizer
But Test::Unit::AutoRunner has no option to specify an alternative destination for io,
and even if it did, run is only called with 2 parameters!
I need to redirect test output to a file, and the console testrunner supports this, but the autorunner machinery doesn't use it. The fix is simple, but requires 2 files to change.
Console::TestRunner defines a 3rd, optional 'io' parameter, obviously meant to support redirecting output to an arbitrary io object.
The autorunner machinery is fine for me (I don't need a whole custom autorunner), but needs an option to direct output elsewhere (e.g. a file). I added a new attribute, output_io, set via the new --output option, and the critical change
def run
...
result.run(@suite, @output_level).passed?
end
to
result.run(@suite, @output_level, @output_io).passed?
But run calls the TestRunner thru Test::Unit::UI::run() - we need to alter test\unit\ui\testrunnerutilities.rb to accept this 3rd parm and pass it along, i.e. change
def run(suite, output_level=NORMAL)
return new(suite, output_level).start
end
to
def run(suite, output_level=NORMAL, io=STDOUT)
return new(suite, output_level, io).start
end
I've attached modified files, diff against 1.8.6 to see the exact changes.
I don't see how Console TestRunner's io parameter is accessible - w/o writing a custom AutoRunner, just to control this one option. A heavyweight solution with ugly maintenance and sync implications. But if I've missed something, by all means, please point it out.
=end
Files
Updated by ujihisa (Tatsuhiro Ujihisa) over 10 years ago
- Status changed from Open to Assigned
- Assignee set to zenspider (Ryan Davis)
=begin
=end
Updated by zenspider (Ryan Davis) about 9 years ago
- Status changed from Assigned to Open
- Assignee deleted (
zenspider (Ryan Davis))
I don't maintain test/unit.
You might prefer to try minitest, which already supports an io object.
Updated by naruse (Yui NARUSE) about 9 years ago
- Status changed from Open to Rejected
- Priority changed from 5 to Normal
1.8 is dying; use 1.9.
Also available in: Atom PDF
|
https://bugs.ruby-lang.org/issues/1555
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
DEBSOURCES
Skip Quicknav
sources / libforms / 1.0.93sp1643>
* doc: Paul Nicholson put a lot of work into correcting
all kinds of issue with the documentation!
* lib/input.c: in some situations in a multi-line
input object parts of the scrollbar were drawn even
though no scrollbars were suposed to be shown.
* lib/xyplot.c: Active xyplot was broken.
* lib/box.c: If label is inside of box it's now clipped
to the inside of the box and never draws outside of it.
2010-05-21 Jens Thoms Toerring <jt@toerring.de>
* doc: Many spelling errors etc. removed that Paul
Nicholson had pointed out.
* fdesign: deprecated values from alignment label menu
in the form for editing object attributes removed.
* lib/forms.c: Bug with resizing scrollbars on resize of
form that Paul Nicholson pointed out fixed.
* fdesign/fd_attribs.c: Another bug found by Paul Nicholson:
when in changing the type of an object with childs and then
undoing the change immediately ("Attributes" form still open
and clicking "Cancel") fdesign crashed. Hopefully fixed now.
2010-05-19 Jens Thoms Toerring <jt@toerring.de>
* fdesign: small changes (mostly to fd/ui_theforms.fd)
to get rid of annoying flicker in the control window
when adding a new object in the other window.
2010-05-18 Jens Thoms Toerring <jt@toerring.de>
lib/objects.c: Another bug found by Serge Bromow fixed:
shortcuts with ALT key had stopped to work.
2010-05-17 Jens Thoms Toerring <jt@toerring.de>
* lib/tbox.c: As Serge Bromow pointed out in the functions
fl_set_browser_topline(), fl_set_browser_bottomline() and
fl_set_browser_centerline() there was a missing check for
the browser being empty, resulting in dereferencing a NULL
pointer.
2010-05-15 Jens Thoms Toerring <jt@toerring.de>
* lib/handling.c, lib/include/Basic.h: After intensive
discussions with Serge Bromow added new funtion that allows
to switch back to the pre-1.0.91 behavior concerning when
an interaction with an input object is considered to have
ended.
2010-05-07 Jens Thoms Toerring <jt@toerring.de>
* lib/tbox.c: As Marcus D. Leech pointed out setting colors
for a browser via fl_set_object_color() didn't work for
the font color (always black), hopefully fixed
* doc/part6_images.texi: Some more typos etc. found
by LukenShiro removed
2010-05-05 Jens Thoms Toerring <jt@toerring.de>
* doc/part6_images.texi: A number of typos etc.
found by LukenShiro removed.
2010-05-04 Jens Thoms Toerring <jt@toerring.de>
* image/image.c: LukenShiro pointed out a deviation between
the diocumented return type of the flimage_free() function.
It now returns void as already documented.
* lib/font.c: Fix for (rather hypothetical buffer) overrun
in get_fname().
2010-03-14 Jens Thoms Toerring <jt@toerring.de>
* clipboard.c: Converted error message into warning printed
out when a selectio request is made to the XForms program for
a type of Atom that XForms doesn't support. Thanks to Mark
Adler for pointing out the problem.
2010-03-09 Jens Thoms Toerring <jt@toerring.de>
* Several changes to the way things redrawn - there were
some problems with redrawing labels that needed several
changes to get it right (again).
* Some unused stuff removed from include files
* Corrections in the documentation
2010-01-09 Jens Thoms Toerring <jt@toerring.de>
* Lots of clean-up in header files to address inconsistencies
(and in some cases also function prototypes had to be changed)
as pointed out by LukenShiro.
* lib/input.c: Bug with return behaviour of FL_MULTI_INPUT
objects fixed.
* lib/popup.c, lib/nmenu.c: Functions added for adding
and modifying popup entries using a FL_POPUP_ITEM added.
* lib/objects.c: Several getter functions for object
properties added
* gl/glcanvas.c: Bug about missing requested event and
pointed out by Dave Strang fixed.
2009-12-21 Jens Thoms Toerring <jt@toerring.de>
Some problems with new forms.h pointed out by Lukenshiro
and Luis Balona cleaned up.
2009-12-14 Jens Thoms Toerring <jt@toerring.de>
Some more clean-up in header files (and documentation)
2009-12-13 Jens Thoms Toerring <jt@toerring.de>
demos/thumbwheel.c: Bug fixed as pointed out by LukeShiro
images/flimage.h: Removed useless declaration of fl_basename()
as proposed by LukenShiro
several include files: Removed useless members from a number
of structures and enums, adjusted return types of a few
functions to fit the documentation, all as part of the clean
up for the new SO version since this just th right moment to
get rid of garbage.
2009-11-30 Jens Thoms Toerring <jt@toerring.de>
* configure.ac: Updated SO_VERSION since the library isn't
compatible anymore with the 1.0.90 release and this lead to
trouble for Debian (at least).
* lib/spinner.c, lib/handling.c: Updates to eliminate a bug
detected by Werner Heisch that kept spinner objects from
working correctly if they are the only input object in a
form.
2009-11-23 Jens Thoms Toerring <jt@toerring.de>
* lib/fselect.c: Improved algorithm for finding file
to be shown selected on changes of input field
2009-11-20 Jens Thoms Toerring <jt@toerring.de>
* lib/positioner lib/include/positioner.h: Added a new
type of positioner (FL_INVISIBLE_POSITIONER) that's
completely invisible to put on top of other objects.
The idea for that came from Werner Heisch.
* fdesign/fd_superspec.c: Werner Heisch found that changing
copied menu and choice object entries also change the ones
of the object copied from. Bug hopefully fixed.
* lib/fselect.c: When entering text into the input object
of a file selector now a fitting file/directory (if one
exists) will now be selected automatically in the browser.
2009-11-03 Jens Thoms Toerring <jt@toerring.de>
lib/xyplot.c: As Jussi Elorante noticed posthandlers
didn't work with XYPLOT objects. This now should be
fixed.
Peter S. Galbraith pointed out that building the documen-
tation in info format didn't work properly.
2009-09-21 Jens Thoms Toerring <jt@toerring.de>
* fdesign/fd_forms.c, fdesign/fd_groups.c,
fdesign/fd_super.c: Two bugs in fdesign, found
by Werner Heisch, removed.
2009-09-20 Jens Thoms Toerring <jt@toerring.de>
Minor corrections in the documentation.
2009-09-16 Jens Thoms Toerring <jt@toerring.de>
* lib/include/Basic.h: Removed a nonexistent color
that had made it into the list of colors as Werner
Heisch pointed out.
2009-09-15 Jens Thoms Toerring <jt@toerring.de>
* lib/events.c: general callbacks for events for user
generated windows weren't called anymore, repaired.
* lib/include/xpopup.h: Broken define Rouben Rostamian
found for the fl_setpup_default_checkcolor() repaired.
2009-09-14 Jens Thoms Toerring <jt@toerring.de>
* lib/thumwheel.c, lib/validator.c: Fixed return
behaviour of thumbwheel.
2009-09-13 Jens Thoms Toerring <jt@toerring.de>
* lib/input.c: Further problems with beep and
input objects removed.
2009-09-12 Jens Thoms Toerring <jt@toerring.de>
* fdesign/sp_spinner.c: Added forgotten output to
C file for setting colors and text size and style.
* lib/input.c: Removed beep on valid input into
FL_INT_INPUT and FL_FLOAT_INPUT objects.
2009-09-11 Jens Thoms Toerring <jt@toerring.de>
* lib/flcolor.c, lib/include/Basic.h: New pre-defined
colors added as proposed and assembled by Rob Carpenter.
* lib/spinner.c: Corrections to return behaviour of
spinner objects as pointed out by Werner Heisch.
2009-09-08 Jens Thoms Toerring <jt@toerring.de>
* lib/input.c: Bug in copy-and-paste and found by Werner
Heisch repaired.
* lib/include/zzz.h: defines of 'TRUE' and 'FALSE'
relaced by 'FL_TRUE' and 'FL_FALSE' to avoid problems
for other programs that may define them on their own
(thanks to Serge Bromow for pointing out that this
can be a real problem).
2009-09-06 Jens Thoms Toerring <jt@toerring.de>
Some more bugs in fdesign, found by Werner Heisch,
removed. Most important: bitmaps weren't drawn
correctly.
* lib/bitmap.c: fl_set_bitmapbutton_file() removed,
is now an alias for fl_set_bitmap_file()
* lib/sysdep.c: fl_now() doesn't add a trailing '\n'
anymore
2009-09-05 Jens Thoms Toerring <jt@toerring.de>
Several bugs reported by Werner Heisch in fdesign fixed.
Input of form and group names is now checked for being
a valid C identifier.
2009-09-03 Jens Thoms Toerring <jt@toerring.de>
* lib/util.c: Removed function fli_get_string()
which was just a duplication of fli_print_to_string()
2009-09-01 Jens Thoms Toerring <jt@toerring.de>
Tabs replaced by spaces.
Repairs to fd2ps that had stopped working (output
doesn't look too nice yet, changing that will probably
take quite a bit of work...)
2009-08-30 Jens Thoms Toerring <jt@toerring.de>
Support for spinner objects built into fdesign.
2009-08-28 Jens Thoms Toerring <jt@toerring.de>
Dependence of form size on snap grid setting in fdesign
removed since it led to unpleasant effects under KDE, form
size can now be set directly via a popup window.
Some more bugs with new way of reading .fd files removed.
* lib/browser.c: Missing redraw of scrollbar added in
fl_show_browser_line() (thanks to Werner Heisch for
noticing and telling me about it),
2009-08-26 Jens Thoms Toerring <jt@toerring.de>
* README: updated to reflect new mailing list location
and homepage
2009-08-25 Jens Thoms Toerring <jt@toerring.de>
A number of bugs in the new code for reading in .fd-Files
pointed out by Werder Heisch have been removed.
2009-08-22 Jens Thoms Toerring <jt@toerring.de>
Thanks to lots of input (patches and discussions) by Werner
Heisch the way .fd files get read in and analyzed has been
changed to be a lot more liberal of what is accepted as well
as spitting out reasonable error messages and warnings if
things go awry. New files added are fdesign/fd_file_fun.c
and fdesign/sp_util.c and lots of others have been changed.
2009-08-13 Jens Thoms Toerring <jt@toerring.de>
* fdesign/fd_printC.c: Some corrections/bug fixes
* Bit of clean-up all over the place;-)
2009-08-06 Jens Thoms Toerring <jt@toerring.de>
* lib/Makefile.am, gl/Makefile.am, image/Makefile.am:
Applied patch send by Rex Dieter that changes the way the
dynamic libraries get created so that linking explicitely
against libX11.so and ibXpm.so (and possibly others) isn't
necessary anymore when linking against libforms.so.
2009-07-12 Jens Thoms Toerring <jt@toerring.de>
* lib/objects.c: Bug in convertion of string to
shortcut characters removed
2009-07-11 Jens Thoms Toerring <jt@toerring.de>
* Bit of cleanup of error handling
2009-07-10 Jens Thoms Toerring <jt@toerring.de>
* lib/forms.c: Forms.c split into two, forms.c and
handling.c
2009-07-09 Jens Thoms Toerring <jt@toerring.de>
* lib/events.c: Bug found by Werner Heisch when using
fdesign under KDE/Gnome removed
2009-07-05 Jens Thoms Toerring <jt@toerring.de>
* lib/forms.c: Hack added to correct drawing of formbrowser
objects
* lib/input.c: Cursor was sometimes not drawn at the correct
position
2009-07-05 Jens Thoms Toerring <jt@toerring.de>
* lib/tbox.c: Some adjustments to redraw of textbox
2009-07-04 Jens Thoms Toerring <jt@toerring.de>
* Bugs found by Werner Heisch in fdesign fixed.
2009-07-03 Jens Thoms Toerring <jt@toerring.de>
* Some bugs in code for drawing of folder and
formbrowser objects repaired.
* Mistakes in documentation removed.
2009-07-01 Jens Thoms Toerring <jt@toerring.de>
* Several bugs in fdesign removed
* lib/tbox.c: Realculation of horizontal offset after
removal of longest line fixed
2009-06-29 Jens Thoms Toerring <jt@toerring.de>
* Some bugs found by Werner Heisch in the new browser
implementation corrected.
* Some issues with fdesign and browsers removed.
* lib/scrolbar.c: Cleanup due to compiler warning
2009-06-12 Jens Thoms Toerring <jt@toerring.de>
* lib/tbox.c: Some corner cases for browsers
corrected.
2009-06-10 Jens Thoms Toerring <jt@toerring.de>
* lib/tbox.c: Bug in handling of new lines and
appending to existing lines fixed to make it
work like earlier versions.
2009-06-09 Jens Thoms Toerring <jt@toerring.de>
Several bug-fixes and changes all over the place to
get everything working again.
Flag '--enable-bwc-bs-hack' added to 'configure' to
allow compliation for programs that rely on the tradi-
tional behaviour of browsers and scrollbars, i.e. that
they don't report changes via e.g. fl_do_forms() but
do invoke a callback if installed.
2009-06-04 Jens Thoms Toerring <jt@toerring.de>
lib/tbox.c: Replacement for lib/textbox.c used in all
browsers.
2009-05-21 Jens Thoms Toerring <jt@toerring.de>
Lots of changes to the event handling system. The handler
routines for objects now are supposed to return information
about what happend (changes, end of interaction) instead
of just 1 or 0 (which indicated if the user application
was to be notified or not. Using the new system makes it
easier to use objects that consist of child objects e.g.
when dealing with callbacks for these kinds of objects.
2009-05-17 Jens Thoms Toerring <jt@toerring.de>
* lib/events.c: Bug fixed that resulted in crashes when
in the callback for an object the object itself got deleted.
* lib/input.c: fl_validate_input() function added.
* configure.ac, config/common.am, config/texinfo.tex,
doc/Makefile.am: Documentation added to built system
2009-05-16 Jens Thoms Toerring <jt@toerring.de>
* lib/events.c: Objects consisting just of child objects
weren't handled correctly in that they never got returned
by fl_do_forms().
* lib/browser.c, lib/formbrowser.c, lib/scrollbar.c,
lib/tabfolder.c: these objects now get created with a
default callback that does nothing to keep them reported
by fl_do_forms() (for backward compatibility reasons).
Also quite a bit of cleanup in lib/browser.c
* lib/spinner.c, lib/include/spinner.h, lib/private/pspinner.h:
new widget added, very similar to counter object (but realized
just using already existing objects).
* lib/child.c, lib/include/Basic.h: fl_add_child() is
am exported function now (again) since it might be
rather useful for creating new, composite widgets.
2009-05-13 Jens Thoms Toerring <jt@toerring.de>
* fdesign/Makefile: Added a few include directories
in order to allow fdesign's fd files when newly
converted with fdesign to be compiled without
manual changes.
2009-05-08 Jens Thoms Toerring <jt@toerring.de>
* lib/forms.c, lib/objects.c, lib/flinternals.h:
Using the return key to activate a FL_RETURN_BUTTON
object in a form with a single input object works
again.
2009-05-08 Jens Thoms Toerring <jt@toerring.de>
* configure.ac: check for nanosleep too
* lib/sysdep.c (fl_msleep): use HAVE_NONOSLEEP
2009-05-06 Jens Thoms Toerring <jt@toerring.de>
* lib/form.c: Changed return type of the functions
fl_show_form(), fl_prepare_form_window() and
fl_show_form_window() to 'Window' to reflect what
was (mostly) said in the documentation. That also
required including X11 header files already in
lib/include/Basic.h instead of lib/include/XBasic.h.
fl_prepare_form_window() now returns 'None' on
failure instead of -1. Also the type of the 'window'
member of the FL_FORM structure is now 'Window'
instead of 'unsigned long' and that of 'icon_pixmap'
and 'icon_mask' is 'Pixmap'.
FL_TRIANGLE_* macros renamed to FLI_TRIANGLE_* and
moved to lib/flinternal.h.
2009-05-06 Jens Thoms Toerring <jt@toerring.de>
* Just a bit of code cleanup in fdesign and
minor changes of the documentation.
2009-05-04 Jens Thoms Toerring <jt@toerring.de>
* lib/signal.c: in handle_signal() a caught signal
could lead to an infinite loop when the handling
function did something that put it back into the
main loop.
* Some improvements of teh documentation
2009-05-03 Jens Thoms Toerring <jt@toerring.de>
* fdesign/fd_attribs.c: Bug that kept composite
objects from being selected after type change in
fdesign removed. Length of labels is now unlimited.
2009-05-02 Jens Thoms Toerring <jt@toerring.de>
* Some missing figures added to documentation.
2009-04-16 Jens Thoms Toerring <jt@toerring.de>
Git repository added.
2009-03-27 Jens Thoms Toerring <jt@toerring.de>
* fdesign/fd_main.c, fdesign/fd_printC.c: As Rob
Carpenter noticed <glcanvas.h> doesn't get inclued
in the files generated by fdesign when a glcanvas
object exists. Changed that so that both <forms.h>
and <glcanvas.h> (but only if required) get included
in the header file created by fdesign.
2009-01-26 Jens Thoms Toerring <jt@toerring.de>
* lib/include/AAA.h.in: Contact address etc. corrected.
2009-01-25 Jens Thoms Toerring <jt@toerring.de>
* doc/images/: Some new figures added.
2009-01-21 Jens Thoms Toerring <jt@toerring.de>
* fdesign/fd_spec.c, fdesign/fd_super.c: Removed lots of
potential buffer overruns and restriction on number of
lines/entries that could be used for browser, menu and
choice objects.
* lib/utils: Added function for reading in lines of
arbitrary length from a file and a function with similar
functionality as GNUs asprintf().
2009-01-16 Jens Thoms Toerring <jt@toerring.de>
* image/image_disp.c: Tried to correct display of images
on machines where the COMPOSITE extension is supported.
As Luis Balona noticed on these systems images dispayed
with the itest demo program appear half-transparent.
Probably not solved but looks a bit better now...
* image/image_jpeg.c: Bug in identification of JPEG images
corrected.
2009-01-11 Jens Thoms Toerring <jt@toerring.de>
* lib/nmenu.c, lib/include/nmenu.h, lib/private/pnmenu.h:
New type of menus based on the new popup code.
* lib/private/pselect.h: Small correction to get the knob
of browser sliders drawn correctly.
2009-01-03 Jens Thoms Toerring <jt@toerring.de>
* lib/select.c: Corrections and additions of new functions
for select objects.
* demos: Changes to a number of demo programs to use select
instead of choice objects.
* doc: Updates of some of the files of the documentation.
2009-01-02 Jens Thoms Toerring <jt@toerring.de>
* lib/select.c, lib/include/select.h, lib/private/pselect.h:
Files for the new select object added that is supposed to
replace the old choice object and is based on the new popup
code recently added.
* doc/part3_choice_objects.texi, doc/part3_deprecated_objects.texi:
Documentation of choice objects moved from that for choice
objects to that for deprecated objects and documenttation for
the new select object was added to that for choice objects.
2008-12-28 Jens Thoms Toerring <jt@toerring.de>
* fdesign/fd_printC.c: Applied a patch Werner Heisch send
in for a bug that resulted in the label alignment getting
set incorrectly (always ended up as FL_ALIGN_CENTER).
2008-12-27 Jens Thoms Toerring <jt@toerring.de>
* doc: Reassembly of documentation in texinfo format more
or less complete. Still missing are most of the figures.
* lib/include/popup.h: File has been renamed xpopup.h.
* lib/popup.c, lib/include/popup.h, demo/new_popup.c: New
implementation of popups, supposed to replace old Xpopups.
Still missing: reimplementation of menu and choice objects
based on the new Popup class.
* lib/forms.c: fl_end_group() now returns void instead of
a pseudo-object that never should be used by the user.
2008-12-10 Jens Thoms Toerring <jt@toerring.de>
* lib/xpopup.c: Found that FL_PUP_GREY and FL_PUP_INACTIVE
are actually the same, so removed all uses of FL_PUP_INACTIVE.
2008-12-01 Jens Thoms Toerring <jt@toerring.de>
* doc: New directory with first parts of rewrite of docu-
mentation in texi format.
* lib/counter.c: Rob Carpenter noticed that it sometimes
can be difficult to use a counter to just change it by a
single step. Thus, according to his suggstions, the first
step now takes longer and the time between following
steps gets smaller and smaller until a final minimum
timeout is reached (initial timeout is 600 ms and final
is 50 ms per default). The fl_get_counter_repeat() and
fl_set_counter_repeat() are now for the initial timeout
and the final timeout can be controlled vianew functions
fl_set_counter_min_repeat()/fl_get_counter_min_repeat().
To switch back to the old behaviour use the functions
fl_set_counter_speedup()/fl_get_counter_speedup() and
set the initial and final rate to the same value. If
speed-up is switched off but initial and final timeouts
differ the initial timeout is used for the first step and
the final timeout for all following steps.
* lib/choice.c: Choices didn't react immediately to a click
with the middle or left mouse button. Now the selected entry
will change immediately and continue to change slowly when
the mouse button is kept pressed down.
* fdesign/fd_forms.c: Rob Carpenter and Werner Heisch found
that while loading .fd file a spurious "Failure to read file"
warning gets emitted.
2008-11-22 Jens Thoms Toerring <jt@toerring.de>
* lib/appwin.c, lib/events.c: Small changes to clean
up a few things that did look a bit confusing.
2008-11-11 Jens Thoms Toerring <jt@toerring.de>
Cosmetic changes to a number of files to pacify the
newest gcc/libc combination about issues with disre-
garded return values of standard input/output func-
tions (fgets(), fread(), fwrite(), sscanf() etc.)
2008-11-10 Jens Thoms Toerring <jt@toerring.de>
* lib/textbox.c: Another bug Rob Carpenter found: when
trying to scroll in an empty browser the program crashed
with a segmentation fault due to miscalculation of the
number of the topmost line of text.
2008-11-04 Jens Thoms Toerring <jt@toerring.de>
* lib/objects.c: Rob Carpenter pointed out another bug
that resulted in extremely slow redraws of objects and
was due to a off-by-one error in the calculation of the
bounding box of objects (which in turn made non-overlap-
ping objects appear to overlap.
2008-10-27 Jens Thoms Toerring <jt@toerring.de>
* lib/button.c: Bug in function for selecting which
mouse buttons a button reacts to fixed.
2008-10-20 Jens Thoms Toerring <jt@toerring.de>
* lib/forms.c: Added function fl_form_is_iconified()
that returns if a forms window is in iconfied state.
Thanks to Serge Bromow for propose a function like
that.
2008-10-18 Jens Thoms Toerring <jt@toerring.de>
* lib/forms.c, lib/tooltip.c: Bug removed that
led to multiple deletes of tooltip form in the
fl_finish() function.
2008-09-24 Jens Thoms Toerring <jt@toerring.de>
* lib/clock.c: FL_POINT array in draw_hand() was
one element too short.
2008-09-22 Jens Thoms Toerring <jt@toerring.de>
* Further code cleanup
* Update of man page
2008-09-21 Jens Thoms Toerring <jt@toerring.de>
* Bits of code clean-up in several places.
2008-09-17 Jens Thoms Toerring <jt@toerring.de>
* lib/objects.c: Added removal of tooltip when
object gets deleted.
2008-09-16 Jens Thoms Toerring <jt@toerring.de>
* lib/win.c, lib/forms.c: Code for showing a form
was changed. The previous code made the assumption
that all window managers would reparent the form
window withing a window with the decorations, but
this is not necessary the case (e.g. metacity, the
default window manager of Gnome). This led to
inconsistencies in the positioning of forms with
different window managers. Also positioning forms
with negative values for x and y (to position a
window with its right or bottom border relative
to the right or bottom of the screen didn't work
correctly.
2008-08-04 Jens Thoms Toerring <jt@toerring.de>
* lib/goodie_choice.c: Bug in setting the buttons
texts removed.
2008-08-03 Jens Thoms Toerring <jt@toerring.de>
* lib/forms.c, lib/objects.c: Removed bug pointed out
by J. P. Mellor that allowed selecting and editing
input objects even when the were deactivated.
* fdesign/fd_attribs.c: Removed a bug pointed out
by Werner Heisch that crashed fdesign if the type
of an object was changed.
* fdesign/fd_attribs.c: Bug in fdesign fixed that
led to crash when the type of a composite object
was changed and then Restore or Cancel was clicked.
2008-07-05 Jens Thoms Toerring <jt@toerring.de>
* lib/menu.c: Thanks to a lot of input from Jason
Cipriani several changes were made concerning the
ability to set menu item IDs and callback functions
for menu items. This includes slight changes to the
prototype of the three functions fl_set_menu(),
fl_addto_menu() and fl_replace_menu_item(). All of
them now accept in addition to their traditional
arguments an unspecified number of extra arguments.
Also tow new functions were added:
fl_set_menu_item_callback( )
fl_set_menu_item_id( )
Please see the file 'New_Features.txt' for a more
complete description.
* fdesign: Support for setting menu item IDs and
menu item callbacks has been added.
2008-07-03 Jens Thoms Toerring <jt@toerring.de>
* lib/forms.c: for radio buttons an associated
callback function wasn't called on a click on an
already pressed radio button as Luis Balona found
out. Since this isn't the bahaviour of older
XForms version this could lead to problems for
applictaions that expect the old behaviour, so
the behaviour was switched back to the old one.
* config/: on "make maintainer-mode the scripts
install-sh, missing and mkinstalldirs got deleted.
While the first two were generated automatically
during the autoconf process the last wasn't which
led to a warning when running configure. Thus the
'mkinstalldirs' (from automake 1.10) was added.
2008-07-02 Jens Thoms Toerring <jt@toerring.de>
* lib/xpopup.c, lib/menu.c: Tried to fix a bug
resulting in artefacts with menus on some machines
as Luis Balona pointed out.
2008-06-30 Jens Thoms Toerring <jt@toerring.de>
* lib/objects.c: Removed a bug in the calculation
of size of the bounding of an object. Thanks to
Rob Carpenter for sending me example code that
did show the problem nicely.
* lib/forms.c, lib/objects.c: Added some code to
speed up freeing of forms (overlap of objects does
not get recalculated anymore, which could take a
considerable time for forms with many objects).
2008-06-29 Jens Thoms Toerring <jt@toerring.de>
* lib/object.c, lib/button.c, lib/xpopup.c: Fixed
two bugs found by Luis Balona that under certain
circumstances led to a segmentation fault.
* config/ltmain.sh, config/libtool.m4,
config/config.guess, config/config.sub:
Updated libtool files from version 1.4.3 to 1.5.26
since Raphael Straub, the maintainer of the MacPorts
port of XForms, pointed out that compilation of
fdesign on Mac OSX failed due to a problem with the
old libtool version.
2008-06-22 Jens Thoms Toerring <jt@toerring.de>
* lib/xsupport.c: Code cleanup.
* lib/pixmap.c: Changed code for drawing a pixmap
to take the current clipping setting unto account.
Many thanks To Werner Heisch for explaining the
problem with a lot of screentshots and several
example programs that did show what went wrong!
* lib/bitmap.c: Made bitmap buttons behave like
normal buttons, just with a bitmap drawn on top
of it. The foreground color of the bitmap is
the same as the label color (and never changes).
Also changed the code for drawing a bitmap to
take account fo the current clipping setting.
2008-06-17 Jens Thoms Toerring <jt@toerring.de>
* lib/pixmap.c: Made pixmap buttons behave like
normal buttons, just with a pixmap drawn on top
of it (which may get changed when the button
receives or loses the focus).
* lib/objects.c: Made some changes to the redraw
of objects when a "lower" object gets redrawn
and thus an object on top of it also needs to be
redrawn.
2008-05-31 Jens Thoms Toerring <jt@toerring.de>
* lib/pixmap.c: As Werner Heinsch pointed out
the display of partially transparent pixmaps
was broken due to a bug I had introduced when
cleaning up the code for redraw. Moreover,
already in 1.0.90 the pixmap of a pixmap button
was exchanged for the focus pixmap when the
button was pressed, which wasn't what the
documentation said. Code changed to avoid that.
* lib/objects.c: The code for determining if
two objects intersect was broken and reported
all objects to intersect, which then resulted
in a lot of useless redraws. Hopefully fixed.
2008-05-24 Jens Thoms Toerring <jt@toerring.de>
* Got rid of some compiler warnings removed.
* lib/fldraw.c: As Andrea Scopece pointed out
colors of box borders weren't correct and the
shadow wasn't drawn for for shadow boxes with
a border width of 1 or -1. Added his proposed
patches.
2008-05-17 Jens Thoms Toerring <jt@toerring.de>
* lib/goodies.c, lib/goodie_*.c: Some code cleanup
and made sure that memory allocated gets released.
2008-05-16 Jens Thoms Toerring <jt@toerring.de>
* lib/objects.c: Removed a bug that has been
pointed out by Werner Heisch with a small demo
program: if an object is partially or even fully
hidden by another object and gets redrawn it got
drawn above the object it was supposed to be
(more or less) hidden by, thus obscuring the
"upper" object.
* lib/pixmap.c: It could happen that parts of a
pixmap got drawn outside of the object that it
belongs to. That in turn could mess up redrawing
(e.g. if the pixmap object got hidden). Thus now
only that part of a pixmap that fits inside the
object gets drawn.
2008-05-15 Jens Thoms Toerring <jt@toerring.de>
* lib/textbox.c: The functions fl_addto_browser()
and fl_addto_browser_chars() didn't worlk correctly
anymore. When lines where appended the browser
wasn't shifted to display the new line. Thanks to
Werner Heisch for pointing out the problem.
2008-05-12 Jens Thoms Toerring <jt@toerring.de>
* lib/goodie_alert.c: Removed restriction on the
maximum length of the alert message.
Added new function
void
fl_show_alert2( int c,
const char * fmt,
... )
The first argument is the same as the last of
fl_show_alert(), indicating if the alert box is to be
centered on the screen. The second one is a printf()-
like format string, followed by as many further
arguments as there are format specifiers in the 'fmt'
argument. The title and the alert message are taken
from the resulting string, where the first form-feed
character ('\f') embedded in the string is used as the
separator between the title and the message.
2008-05-10 Jens Thoms Toerring <jt@toerring.de>
* lib/menu.c: Changed the default font style of
menus from FL_BOLD_STYLE to FL_NORMAL_STYLE and
menu entries from FL_BOLDITALIC_STYLE also to
FL_NORMAL_STYLE.
* lib/xpopup.c: Changed the default font style
of both popup entries as well as the title from
FL_BOLDITALIC_STYLE to FL_NORMAL_STYLE.
* lib/flcolor.c: Made the default background
color a bit lighter.
2008-05-09 Jens Thoms Toerring <jt@toerring.de>
* lib/objects.c: Removed bug that kept canvases
from being hidden and tabfolders from being
re-shown correctly. This was especially annoying
with fdesign as Rob Carpenter pointed out.
* lib/forms.c: Added a new function
int
fl_get_decoration_sizes( FL_FORM * form,
int * top,
int * right,
int * bottom,
int * left );
which returns the widths of the additional
decorations the window manager puts around
a forms window. This function can be useful
if e.g. one wants to store the position of a
window in a file and use the position the
next time the program is started. If one
stores the forms position and uses that to
place the window it will appear to be shifted
by the size of the top and left decoration.
So instead storing the forms position one has
to correct it for the decoration sizes.
* everywhere: further clean up (getting internal
stuff separated from stuff that belongs to API)
2008-05-08 Jens Thoms Toerring <jt@toerring.de>
* lib.objects.c, lib/childs.c: Rewrite of the
functions for hidding an object. Some adjustments
to the code for freeing objects to set the focus
correctly.
* lib/flresource.c: Changed the name of the option
to set the debug level from 'debug' to 'fldebug'
since it's too likely that a program using XForms
also has a 'debug' option which would get over-
written by XForms option.
Also added a 'flhelp' option that outputs the options
that XForms accepts and then calls exit(1). Thanks to
Andrea Scopece for contributing this.
2008-05-07 Jens Thoms Toerring <jt@toerring.de>
* lib/objects.c, lib/child.c: Handling of child
objects corrected - valgrind was reporting an
error with the old code (access to already re-
leased memory) and the code was rather buggy
and inconsistent anyway.
* lib/xpopup.c: Changed a XFhlush() to XSync()
after a popup was opened - without it an Map-
Notify was sometimes passed back to the user
program (happened due to a fix to a different
bug in lib/events.c).
* fdesign: Tried to make the fdesign GUI look a
bit nicer (thinner borders etc.). Some changes to
generated output files (format, call of fl_free()
on the different fdui's at the end of the main()
function etc.).
2008-05-05 Jens Thoms Toerring <jt@toerring.de>
* further clean-up of header files and renaming
of functions and macros usied only internally.
2008-05-04 Jens Thoms Toerring <jt@toerring.de>
* fdesign/fd_forms.c: Removed limit on number of
forms that can be created or read from a file. Few
changes to error handling.
* fdesign/fd_control.c: Removed limit on number of
objects in a form that can be dealt with.
* fdesign/fd_groups.c: Removed limit on number of
groups that can be dealt with.
* fdesign/fd/ui_theforms.fd: Changed browser for
groups to be a multi instead of hold browser.
* lib/events.c: Bug I had ontroduced in function
fl_handle_event_callbacks() repaired. Thanks to
Andrea Scopece for pointing out the problem.
* everywhere: started attempt to distinguish clearly
between functions, variables, and macros belonging to
the API and those only used internally to the library
by having API names start with 'fl_' (or 'FL_') while
internal names start with 'fli_' (or FLI_'). Also
removed doubly declared or non-existent functions
in lib/flinternal.h.
2008-05-01 Jens Thoms Toerring <jt@toerring.de>
* lib/goodies_msg.c: New function fl_show_msg()
now works.
2008-04-30 Jens Thoms Toerring <jt@toerring.de>
* lib/goodie_msg.c: Added function
void fl_show_msg( const char * fmt, ... )
The first argument is a printf-like format string,
followed by as many arguments as required by the
format specifiers in the format string. This simplfies
outputting freely formatted messages.
Changed fl_show_message() to avoid an upper limit
of 2048 characters on the total length of the three
strings passed to it.
Added #defines for fl_hide_messages and fl_hide_msg -
they are just alternative names for fl_hide_message().
2008-04-29 Jens Thoms Toerring <jt@toerring.de>
* lib/fselect.c: If a callback for a file selector is
installed the prompt line and the input field aren't
shown anymore. As Andrea Scopece pointed out the input
field can't be used at all for file selectors with a
callback (only a double click in the browser works)
so it doesn't make sense to show it.
* lib/n2a.c: This file isn't needed anymore - the only
of its functions used at all, fl_itoa(), was used in only
a single place (lib/errmsg.c) and got replaced by sprintf().
* image/image_proc.c: Bug fixed in flimage_tint() that led
to writes past the end of an array.
2008-04-28 Jens Thoms Toerring <jt@toerring.de>
* lib/forms.c: Jumping backwards with Shift-<TAB>
through a set of input objects now works even if
there are non-input objects in between.
* demos/browserop.c: Bug removed that only surfaced
now since clicking onto a button makes an input
object lose the focus.
* lib/canvas.c, lib/forms.c: On hiding a form it was
forgotten to unmap the windows of canvases belonging
to that form and to reset the ID of these windows.
Resulted in an XError on unhiding the form. Thanks
to Andrea Scopece for finding this bug.
2008-04-27 Jens Thoms Toerring <jt@toerring.de>
* lib/input.c: Correct leap year handling in date input
validator. Multi-line input field don't receive a <TAB>
anymore (which never did work anyway).
* lib/forms.c: <TAB> can now also be used to move the
focus out of a multi-line input field and into the next
input field.
* lib/version.c: The version output now contains the
full copyright information, not just the first three
lines.
2008-04-26 Jens Thoms Toerring <jt@toerring.de>
* lib/flresource.c: Library version information wasn't
output when the '-flversion' option was given. Repaired
by a patch Andrea Scopece send.
* lib/forms.c: Scrollbars had been extempt from resizing
due to the wrong assumption that they always would be
childs of a composite object, which isn't the case.
Thanks to Andrea Scopece for finding this problem.
* lib/scrollbar.c: Horizontal scrollbars now only get
resized in x-direction per default, verticall ones in
y-direction only.
2008-04-22 Jens Thoms Toerring <jt@toerring.de>
* lib/async_io.c: Removed a bug pointed out by Andrea
Scopece that resulted in a segmentation fault in the
'demo' program. This also needed some changes in the
files lib/flresource.c and lib/flinternal.h (where
also all remains from be.c were removed).
2008-04-20 Jens Thoms Toerring <jt@toerring.de>
* lib/flresource.c: Removed setting of the machines
locale setting as default for the program. This
change had already been discussed by Jean-Marc and
Angus back in 2004 but never actually done.
* lib/fselect.c: Programs doesn't crash anymore when
fl_set_directory() gets passed a NULL pointer.
* lib/buttons.c: Bug repaired that kept buttons from
becoming highlighted when the mouse was moved nto them
(and vice versa).
* lib/formbrowser.c: Changed the handling of the scroll-
bars to hopefully make it work correctly even when the
form gets resized.
2008-04-13 Jens Thoms Toerring <jt@toerring.de>
* fdesign/fd/*.c, fdesign/spec/*.c: Replaced
'#include "forms.h"' with '#include "include/forms.h"
to avoid compilation problems Peter Galbraith pointed
out.
* fdesign/fd_printC.c: In created C files we now have
'#include <forms.h>' instead of '#include "forms.h"'.
2008-04-10 Jens Thoms Toerring <jt@toerring.de>
* lib/buttons.c: Removed code that enforced on of a
set of radio buttons to be set - thus led to problems
for some older applications.
Also emoved restriction that buttons only react to
a click with the left mouse button per default.
Instead added two new public functions
fl_set_button_mouse_buttons()
fl_get_button_mouse_buttons()
that allow to set and query the mouse buttons a
button will react to. dfault is to react to all
mouse buttons.
* fdesign/sp_buttons.c, fdesign/fd_spec.c: Added
support for setting the mouse buttons a button
reacts to via fdesign (click the "Spec" tab rider
in the attributes window).
* fdesign/fd_main.c: Added option '-xforms-version'
to print out the version of the library being used.
* lib/tabfolder.c: All memory now gets released on
call of fl_finish().
* lib/symbols.c: Unlimited number of symbols can be
created without restrictions on the name length.
Memory allocated for symbols gets deallocated in
fl_finish().
* lib/flresource.c: Array allocated for copy of
command line arguments was one element too short
which led to crashes when using lots of command
line arguments. Added function to free this memory
in fl_finish().
* fdesign/fd_printC.c: Output wasn't correct ANSI-C89
when the pre_form_output() function was called.
* fdesign/fd/*.[ch], fdesign/spec/*.[ch]: Newly
generated using the newest fdesign version.
2008-03-27 Jens Thoms Toerring <jt@toerring.de>
* lib/button.c: Most buttons now again react only
to the release of the left mouse button, I had
introduced a bug that broke this behaviour.
* fdesign/sp_*.c: Some cosmetic correction to the
output format of the files generated by fdesign.
2008-03-26 Jens Thoms Toerring <jt@toerring.de>
* lib/forms.c: Clicking on a "pushable" object now
leads to an input object currently having the focus
lose the focus, thus enforcing it to report changes.
Until now this only happened if the object that was
clicked on was another input object.
A FocusOut event now takes away keyboard input from
input objects on the form in the window that lost
the focus, a FocusIn event restores it.
2008-03-25 Jens Thoms Toerring <jt@toerring.de>
* lib/forms.c: Further restructuring of event handling.
All memory for objects of a form now (hopefully) gets
deallocated on a call of fl_free_form() and a call of
fl_finish() deallocates all memory used for forms and
their objects, removes signal callbacks and deletes
timers. The only exception is memory for tabfolder,
I haven't yet understood the code for that...
* lib/objects.c: Object pre- and posthandlers aren't
called anymore on FL_FREEMEM events (the object or
some of its childs probably doesn't exist anymore
in that kind of situation).
* lib/timeout.c: Changed the code a bit and, in combi-
nation with changes in lib/forms.c, got the precision
of timeouts to be a bit higher (haven't seen it being
off more than 5 ms on my machine under light load) and
made sure they never expire too early (as promised in
the manual). Added a function to remove all timeouts,
to be be called from fl_finish().
* lib/be.c: File isn't used anymore, the list of memory
to be allocated never was used anyway if no idle handler
was installed and it also didn't do the right thing. No
calls to fl_addto_freelist() and fl_free_freelist() are
left in XForms.
* lib/include/Basic.h: FL_MOTION has come back, what
I should have thrown out was FL_MOUSE. FL_MOUSE is
still available for backward compatibility but isn't
used in the code anymore - FL_UPDATE is the new name
(in the object structure the 'want_update' member must
be set to request this type of event - can be switched
on and off at any time).
* lib/slider.c: Changed the code for sliders (and
thereby scrollbars) quite a bit - it was much too
complicated (unfortunately still is:-( and didn't
always work correctly. Scrollbars now react to
scroll wheel mouse the same way a textbrowser does.
* lib/signal: On system that support it sigaction()
instead of signal() is used now. Added a function
to remove all signal handlers, to be used from
fl_finish().
* fdesign/ps_printC.c: Replaced the use of fl_calloc()
by fl_malloc() when writing out C files - there's no
good reason to spend time on zeroing out the memory.
2008-03-20 Jens Thoms Toerring <jt@toerring.de>
* textbox.c: Textboxes didn't get regular update
events that are needed for scrolling with the
mouse pressed down just below or above the box.
They also did only react to left mouse buttons
(and scroll wheel) and now again also to the
middle and right button.
* lib/counter.c, fdesign/fd_object.c: Removed some
debugging output accidentally left in.
* lib/dial.c: Corrected return behaviour on mouse
button release.
2008-03-19 Jens Thoms Toerring <jt@toerring.de>
* lib/forms.c: Further cleanup and removal of cruft that
was hard to understand but actually was unnecessary or
counter-productive. Added check that makes sure that
one of the radio buttons of a form is always set.
* lib/include/Basic.h: FL_MOTION got removed, instead
FL_UPDATE was introduced for events of the artificial
timer (the one that kicks in when there are no events).
The FL_OBJECT structure got two new elements, 'want_motion'
and 'want_update'. If the first is set an object which is
not an object that can be "pushed" will receive mouse
movement events (e.g. in case the object has some inner
structure that depends on the mouse position like counter
objects) and the second is to be set by objects that want
to receive FL_UPDATE events (but they still need to be
objects that can be "pushed") - at the moment these are
touch buttons, counters and choice objects.
* lib/choice.c: FL_DROPLIST_CHOICE didn't work correctly
anymore, fixed. Scroll wheel can now also be used to
walk through the entry up or down in the popup. Added a
new function fl_get_choice_item_mode() to the public
interface.
* lib/menu.c: Menus become highlighted when the mouse
is moved onto it. Code cleaned up a bit.
Added a function fl_set_menu_notitle() (analogous to
the fl_set_choice_notitle() function) to allow removal
of the sometimes ugly menu popup titles. This leads to
an important change in the behaviour of FL_PUSH_MENU
objects: if the title is switched off they only get
opened on button release and stay directly below the
menu button (like FL_PULLDOWN_MENU objects).
There's a lot of code identical to that in choice.c,
it might be reasonable to remove the duplication (what
actually is the big difference between the menu and
choice objects, anyway?)
* lib/button: Changes to fit the new event handling
code. Buttons now only react to clicks with the left
mouse. button. Handling of radio buttons corrected.
* lib/choice, lib/counter: Changes to fit the new event
handling code.
* lib/slider.c, lib/thumbwheel.c, lib/textbox.c,
lib/positioner.c, lib/dial.c: Now react to left mouse
button only (and mouse wheel as far as reasonable).
* lib/fldraw.c: Issues with memory handling checked
and corrected.
2008-03-12 Jens Thoms Toerring <jt@toerring.de>
* lib/forms.c: Removed code injecting fake FL_RELEASE
events that led to problems with double click selections
e.g. in the file selector (this in turn required changes
to lib/xpopup.c).
* lib/xpopup.c: Extensive code cleanup, bug fixes and
rewrite of event handling. Popup's, menus etc. now work
more like one is it used from other toolkits. Shadows
around popups got removed since they don't (and never
did) work correctly.
* Further code cleanup all over the place, removing
bugs that may lead to segmentation faults or memory
or X resources leaks.
2008-02-04 Jens Thoms Toerring <jt@toerring.de>
* Resizing code again changed since I hadn't understood all
the interdepencies between gravity and resize settings.
Hopefully works correctly now.
The special treatment of the case for objects that have no
gravity set and all resizing switched off (in which case
the center of gravity is moved when the enclosing form is
resized) seems to be a bit strange. Why is not the same
behaviour used for e.g. the x-direction if an object
isn't fixed in x-direction by its gravity setting and it
isn't to be resized in horizontal direction (same for y)?
* lib/events.c: Changed the Expose event compression code that
did lead to missed redraws under e.g. KDE or Gnome if they are
set up to update a window also during resizing and the mouse is
moved around a lot in the process.
* lib/textbox.c: Hopefully fixed a bug (perhaps it's the one
that Michal Szymanski reported on 2005/3/11 in the XForms
mailing list) that resulted under certain circumstances in e.g.
fl_do_forms() returning the object for a normal textbrowser
unexpectedly when the mouse wheel was used, which in turn could
make programs exit that did not expect such a return value (the
fbrowse.c demo program did show the problem).
* lib/textbox.c: Hopefully fixed another bug that kept the
text area of a browser from being redrawn correctly following
the resizing of its window when sliders were on and not in
the left- or top-most position.
* lib/objects.c: Added three functions fl_get_object_bw(),
fl_get_object_gravity() and fl_get_object_resize() (to be
added to the public interface).
* lib/flinternal.h: Added several macros that test if the
upper left hand and the lower right hand corner of an
object are locked due to gravity settings and macros that
test if the width or height is "fixed", i.e. determined
by the gravity settings (so they are not influenced by
the corresponding resizing settings).
* demos/grav.c: Created a small demo program that shows the
effects of the different gravity and resizing settings. The
results can sometimes be a bit surprising at a first glance
but I hope to have gotten it right;-)
2008-01-28 Jens Thoms Toerring <jt@toerring.de>
* Resizing behaviour got rewritten to get it to work correctly
even if a window gets resized to a very small size and then
back to a large one (see e.g. the xyplotall demo program for
the behaviour). This required to add elements to the FL_FORM
and FL_OBJECT structures, but since they shouldn't be used
directly from user programs and also user defined object should
always be created via a call of fl_make_object(), where the
geometry of the object gets set, this shouldn't lead to any
trouble. One aspect of the changes is that an objects gravity
setting now always takes precedence over the 'resize' setting
and the 'resize' setting gets automatically corrected whenever
necessary.
* lib/events.c: changed queueing system so that queue overflows
and thus loss of calls of callback functions or Xevents shouldn't
be possible anymore. The queues are now implemented using linked
lists that get extendend if necessary, deallocation is done from
fl_finish().
* Got rid of a redraw bug that led to a form not being redrawn
correctly after its window was made smaller (e.g. under fvwm2)
* Several bugs where fixed that sometimes crashed the program
with XErrors after resizing a window, especially when the
window was made very small (exhibited by e.g. the formbrowser
demo program).
* Number of forms that can be created is now unlimited (or
only limited by the available memory) instead of having an
arbitrary maximum of 64
* Changes to autogen.sh to allow built with newer versions of
autoconf and small changes on config/xformsinclude.m4 to avoid
warnings. Added '-W' compiler flag (which in turn required to
mark unused arguments of a lot of functions as such to avoid
compiler warnings, see the new macro FL_UNUSED_ARG in
lib/include/Basic.h that exists for just that purpose).
* Handling of the number of colors was corrected for displays
with more colors than can be stored in an unsigned long (e.g.
32-bit depth display with 32-bit wide unsigned longs).
* Correction of the sizes of the scrollbars of FL_NORMAL_FORMBROWSER
type of objects.
* lib/dial.c: mouse-wheel handling for dials added
* lib/tabfolder.c: bugs in memory handling corrected
* Replaced float by double in many places (not yet finished!).
* Code cleanup (concerns several dozens of files)
2004-12-28 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* image/image_jpeg.c: fix compilation with IBM's xlc.
* Makefile.am: remove useless GNUish construct.
* lib/listdir.c: add a better definition of S_ISSOCK, that works
with SCO OpenServer (thanks to Paul McNary).
2004-10-05 Angus Leeming <angus.leeming@btopenworld.com>
* xforms.spec.in: Updating SO_VERSION revealed a flaw in the logic
that tries to use this variable to define some missing
symbolic links. The 'post' and 'postun' scripts have been rewritten
to work once more.
* lib/flinternal.h: move FL_NoColor...
* lib/include/Basic.h: here.
* lib/forms.c (do_interaction_step): prevent potential crash
caused by invoking fl_get_winsize with a width as the first
argument rather than a window ID.
* NEWS: add some highlights post 1.0.90.
2004-10-05 Angus Leeming <angus.leeming@btopenworld.com>
* configure.ac (SO_VERSION): updated to "2:0:1" in preparation
for the xforms 1.1 release.
2004-10-06 Angus Leeming <angus.leeming@btopenworld.com>
* lib/textbox.c (fl_set_textbox_xoffset): don't ignore a
request to reset the offset if the manipulated value is less
than zero. Instead, reset it to zero and proceed.
* lib/browser.c (get_geometry): reset the horizontal offset to
zero if the horizontal scrollbar is turned off. (Bug #3205.)
2004-07-28 Angus Leeming <angus.leeming@btopenworld.com>
* lib/forms.c (fl_prepare_form_window): correct typo in
error message.
2004-06-04 Angus Leeming <angus.leeming@btopenworld.com>
* lib/fonts.c (fl_try_get_font_struct): change an error message to
an informational one as the function is often used to test
whether a font is loadable or not.
2004-06-03 Angus Leeming <angus.leeming@btopenworld.com>
* lib/Makefile.am (EXTRA_DIST): distribute dirent_vms.h and
vms_readdir.c.
2004-06-01 Duncan Simpson <dps@simpson.demon.co.uk>
* fdesign/fd_printC.c (build_fname): re-write using fl_snprintf
as a simpler and safer replacement for strncat and strncpy.
2004-05-27 Angus Leeming <angus.leeming@btopenworld.com>
* fdesign/fd_printC.c (build_fname): if no output_dir is specified,
then output files in the current directory.
2004-05-27 Angus Leeming <angus.leeming@btopenworld.com>
* fdesign/fd_main.c: improve diagnostics when failing to convert
the .fd file to a .[ch] pair.
Also remove some redundant cruft.
2004-05-18 Angus Leeming <angus.leeming@btopenworld.com>
* lib/include/Basic.h (fl_set_err_logfp): function was known as
'fl_set_error_logfp' in XForms 0.89. Define a typedef to map
from the old to the new.
2004-05-18 Angus Leeming <angus.leeming@btopenworld.com>
* demos/demo27.c:
* demos/iconify.c:
* demos/pup.c:
* fdesign/fd_attribs.c:
* fdesign/fd_main.c:
* fdesign/fd_main.h:
* fdesign/fd_rubber.c:
* gl/glcanvas.c:
* image/flimage.h:
* image/flimage_int.h:
* image/image.c:
* image/image_disp.c:
* image/image_fits.c:
* image/image_gif.c:
* image/image_jquant.c:
* image/image_marker.c:
* image/image_proc.c:
* image/image_xwd.c:
* image/matrix.c:
* lib/asyn_io.c:
* lib/canvas.c:
* lib/child.c:
* lib/choice.c:
* lib/flcolor.c:
* lib/flinternal.h:
* lib/flresource.c:
* lib/forms.c:
* lib/fselect.c:
* lib/input.c:
* lib/listdir.c:
* lib/menu.c:
* lib/objects.c:
* lib/pixmap.c:
* lib/win.c:
* lib/xdraw.c:
* lib/xpopup.c:
* lib/xsupport.c:
* lib/include/Basic.h:
* lib/include/XBasic.h:
* lib/include/bitmap.h:
* lib/include/button.h:
* lib/include/canvas.h:
* lib/include/choice.h:
* lib/include/menu.h:
* lib/include/popup.h:
* lib/private/pcanvas.h:
* lib/private/ptextbox.h: s/unsigned/unsigned int/
2004-05-17 Angus Leeming <angus.leeming@btopenworld.com>
Revert some functions to the same API as was used in XForms
version 0.89, patch level 5. In all cases, this is just a case of using
the typedef rather than the raw type.
* lib/browser.c (fl_create_browser, fl_add_browser):
* lib/include/browser.h (fl_create_browser, fl_add_browser): use FL_Coord.
* lib/flcolor.c (fl_bk_color, fl_bk_textcolor):
* lib/include/Basic.h (fl_bk_color, fl_bk_textcolor): use FL_COLOR.
* lib/flresource.c (fl_initialize):
* lib/include/XBasic.h (fl_initialize): use FL_CMD_OPT *.
* lib/formbrowser.c (fl_add_formbrowser):
* lib/include/formbrowser.h (fl_add_formbrowser): use FL_Coord.
* lib/oneliner.c (fl_show_oneliner):
* lib/include/goodies.h (fl_show_oneliner): use FL_Coord.
* lib/scrollbar.c (fl_create_scrollbar, fl_add_scrollbar():
* lib/include/scrollbar.h (fl_create_scrollbar, fl_add_scrollbar():
use FL_Coord.
* lib/signal.c (fl_add_signal_callback):
* lib/include/Basic.h (fl_add_signal_callback): use FL_SIGNAL_HANDLER.
* lib/tabfolder.c (fl_add_tabfolder, fl_get_folder_area):
* lib/include/tabfolder.h (fl_add_tabfolder, fl_get_folder_area):
use FL_Coord.
* lib/win.c (fl_winmove, fl_winreshape):
* lib/include/XBasic.h (fl_winmove, fl_winreshape): use FL_Coord.
* lib/xdraw.c (fl_polygon):
* lib/include/XBasic.h (fl_polygon): use FL_COLOR.
* lib/xtext.c (fl_drw_text_beside):
* lib/include/Basic.h (fl_drw_text_beside): use FL_COLOR.
* lib/include/goodies.h (fl_exe_command, fl_end_command, fl_check_command):
use FL_PID_T.
2004-05-17 Angus Leeming <angus.leeming@btopenworld.com>
* lib/include/canvas.h: change the change to AUTOINCLUDE_GLCANVAS_H.
* gl/glcanvas.h: #include <GL/glx.h>. Add C++ guards.
2004-05-14 Angus Leeming <angus.leeming@btopenworld.com>
* lib/include/canvas.h: add a preprocessor-qualified #include
of glcanvas.h. The user must inititalise the GLCANVAS_H_LOCATION
appropriately.
This is a means to maintain some sort of backwards compatibility
without the old, hacky code.
2004-05-13 Angus Leeming <angus.leeming@btopenworld.com>
* image/Makefile.am (libflimage_la_LDFLAGS):
* gl/Makefile.am (libformsGL_la_LDFLAGS): change the -version-info
data to '@SO_VERSION@' so that all get updated automatically.
2004-05-13 Reed Riddle <drriddle@mac.com>
* lib/xyplot.c:
* lib/include/xyplot.h (fl_replace_xyplot_point_in_overlay):
new function, generalizing the existing fl_replace_xyplot_point
which acts only on the first dataset.
2004-05-12 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* gl/Makefile.am (INCLUDES):
* demos/Makefile.am (INCLUDES):
* fd2ps/Makefile.am (INCLUDES):
* fdesign/Makefile.am (INCLUDES): add X_CFLAGS
2004-05-07 Angus Leeming <angus.leeming@btopenworld.com>
* (xforms.spec.in): add code to the 'post' script to modify
libforms.la et al. to prevent libtool from complaining that
the files have been moved.
2004-05-07 Angus Leeming <angus.leeming@btopenworld.com>
* lib/private/pvaluator.h (repeat_ms, timeout_id, mouse_pos):
new variables.
* lib/include/slider.[ch] (fl_[sg]et_slider_repeat):
* lib/include/counter.[ch] (fl_[sg]et_counter_repeat):
new accessor functions, enabling the user to query and modify the
timeout used to control the behaviour of these widgets when the
mouse is kept pressed down.
* lib/include/slider.[ch] (handle_mouse):
* lib/include/counter.[ch] (handle_mouse): use a timeout to
control the rate at which the slider/counter is incremented.
Replaces the current strategy which used a simple counter loop and
which has become unusable with today's fast processors.
2004-05-06 Angus Leeming <angus.leeming@btopenworld.com>
* configure.ac (SO_VERSION): new variable defining the libtool
version info. Substituted in lib/Makefile.am and xforms.spec.in.
* lib/Makefile.am (libforms_la_LDFLAGS): use the configure-time
variable @SO_VERSION@ rather than the hard-coded 1:0:0.
* xforms.spec.in: fix 'Release' and 'Source0' info.
add 'post' and 'postun' scripts to create and remove symbolic links,
respectively.
2004-05-06 Angus Leeming <angus.leeming@btopenworld.com>
* fdesign/fd_spec.c: revert the change made earlier today.
Turned out to be used in the demos code...
2004-05-06 Angus Leeming <angus.leeming@btopenworld.com>
* xforms.spec.in: modify so that devfiles and binfiles are not
placed in ${RPM_BUILD_ROOT}. Prevents rpm from bombing out with a
"Checking for unpackaged files" error.
2004-05-05 Angus Leeming <angus.leeming@btopenworld.com>
* lib/xtext.c (fl_drw_string): enable the drawing of characters
in a font larger than the input widget.
2004-05-06 Angus Leeming <angus.leeming@btopenworld.com>
* fdesign/fd_spec.c: initialization of the FL_CHOICE component of
the objspec struct used 'emit_menu_header' and 'emit_menu_global',
a cut-n-paste typo from the following FL_MENU component.
They have both been reset to '0'.
2004-05-04 Angus Leeming <angus.leeming@btopenworld.com>
* NT/libxforms.dsp, NT/xformsAll.dsw: removed these Visual C++
project files. They're way out of date and can be re-added
if needed.
2004-05-05 Mike Heffner <mheffner@vt.edu>
* lib/fselect.c (select_cb): clean-up and simplify this callback
function by use of the existing fl_set_browser_dblclick_callback.
2004-05-04 Angus Leeming <angus.leeming@btopenworld.com>
The original patch, posted to the xforms list on June 21, 2002,
appears to have got lost. Archived here:
Pass the associated (XEvent * xev) to fl_handle_object on an FL_DRAW
event. This XEvent * is not used at all by any of xforms' "native"
widgets, but an FL_FREE object is able to make use of this info to
redraw only the part of the window that has changed.
* forms.c (fl_handle_form): pass the XEvent on an FL_DRAW event.
* objects.c (redraw_marked): pass the XEvent to fl_handle_object.
(mark_for_redraw): new, static function containing all but the
'redraw_marked' call of the original fl_redraw_form.
(fl_redraw_form): refactored code. Functionality unchanged.
(fl_redraw_form_using_xevent): identical to fl_redraw_form, except
that it passes the XEvent on to redraw_marked.
2004-05-02 Angus Leeming <angus.leeming@btopenworld.com>
* lib/flresource.c (get_command_name): squash valgrind warning
about a possible memory leak.
2004-04-30 Angus Leeming <angus.leeming@btopenworld.com>
* lib/Makefile.am, fdesign/Makefile.am: silence automake
warning about trailing backslash on last line of file.
2004-04-20 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* lib/xpopup.c (fl_freepup): do not free unallocated entries
(fl_setpup_maxpup): do not forget to reset parent and window in
newly created menu_rec entries
2004-04-19 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* demos/Makefile.am (glwin_LDADD, gl_LDADD): fix ordering of
libraries
(LDFLAGS): rename from AM_LDFLAGS (automake 1.5 did not like that)
* config/xformsinclude.m4 (XFORMS_PROG_CC): fix description of
--enable-debug
2004-04-05 Angus Leeming <angus.leeming@btopenworld.com>
* Dummy commit to check all is well with my account.
2004-04-01 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* lib/flresource.c (fl_get_resource): when a resource is a
FL_STRING, avoid doing a strncpy of a string over itself (triggers
a valgrind report)
2004-03-30 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* README: mention the --enable-demos and --disable-gl flags, which
got forgotten
2004-03-30 Hans J. Johnson <hjohnson@mail.psychiatry.uiowa.edu>
* lib/pixmap.c (cleanup_xpma_struct): use a better check for
libXpm version
2004-03-30 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* XForms 1.0.90 released
* NEWS:
* README: update for 1.0.90
* configure.ac: set version to 1.0.90. Use XFORMS_CHECK_VERSION
* config/xformsinclude.m4 (XFORMS_CHECK_VERSION): merge
XFORMS_SET_VERSION and XFORMS_CHECK_VERSION. Set PACKAGE here and
read version from PACKAGE_VERSION (set by AC_INIT). Remove
detection of prereleases. Development versions are now versions
with minor version number >= 50.
* README: small update
* configure.ac: add new define RETSIGTYPE_IS_VOID
* lib/signal.c: fix handling of RETSIGTYPE
2003-12-02 Angus Leeming <angus.leeming@btopenworld.com>
* demos/Makefile.am: enable 'make -j2' to work on a
multi-processor machine.
* demos/Makefile.am: handle the .fd -> .c conversion in
automake-standard fashion.
* lib/include/Makefile.am: pass sed the names of the files to
be manipulated as '${srcdir}/`basename $$i`' rather than as
'${srcdir}/$$i' or things go awol on the Dec. (Running ksh, fwiw.)
2003-11-28 Angus Leeming <angus.leeming@btopenworld.com>
* Makefile.am: re-add xforms.spec to EXTRA_DIST. It is needed as well
as xforms.spec.in or else 'make rpmdist' will fail.
2003-11-28 Angus Leeming <angus.leeming@btopenworld.com>
* fdesign/fd_attribs.c:
* image/image_jpeg.c:
* image/image_xwd.c:
* lib/flcolor.c: warning free compilation of the entire xforms source.
2003-11-28 Angus Leeming <angus.leeming@btopenworld.com>
* demos/demotest.c:
* demos/folder.c:
* demos/free1.c:
* demos/group.c:
* demos/popup.c:
* demos/wwwl.c:
* demos/xyplotall.c:
* demos/fd/scrollbar_gui.fd: squash all remaining warnings when
compiling the demos directory '-W -Wall -Wno-unused-parameter'.
2003-11-28 Angus Leeming <angus.leeming@btopenworld.com>
* Makefile.am:
* configure.ac: compile fd2ps after fdesign. Will allow me to get rid
of the files generated from the .fd files.
2003-11-27 Angus Leeming <angus.leeming@btopenworld.com>
* demos/fd/Makefile.am: remove all the .[ch] files generated from their
.fd parents.
* demos/Makefile.am: generate the fd/*.[ch] files on-the-fly.
* demos/buttonall.c: no longer #include fd/buttons_gui.c.
* demos/butttypes.c:
* demos/demotest.c:
* demos/dirlist.c:
* demos/folder.c:
* demos/formbrowser.c:
* demos/inputall.c:
* demos/pmbrowse.c:
* demos/scrollbar.c:
* demos/thumbwheel.c: ditto for their own fd-generated files.
* demos/pmbrowse.h: removed: cruft.
* demos/fd/buttons_gui.[ch]:
* demos/fd/butttypes_gui.[ch]:
* demos/fd/fbtest_gui.[ch]:
* demos/fd/folder_gui.[ch]:
* demos/fd/formbrowser_gui.[ch]:
* demos/fd/ibrowser_gui.[ch]:
* demos/fd/inputall_gui.[ch]:
* demos/fd/is_gui.[ch]:
* demos/fd/is_gui_main.c:
* demos/fd/pmbrowse_gui.[ch]:
* demos/fd/scrollbar_gui.[ch]:
* demos/fd/twheel_gui.[ch]: removed.
2003-11-27 Angus Leeming <angus.leeming@btopenworld.com>
* fdesign/fd_printC.c (filename_only): use strrchr.
* fdesign/fdesign.man: document the -dir <destdir> option.
2003-11-27 Angus Leeming <angus.leeming@btopenworld.com>
* NEWS: updated to reflect what has been going on in the 1.1 cycle.
2003-11-26 Angus Leeming <angus.leeming@btopenworld.com>
* fdesign/fd_main.h: add a 'char * output_dir' var to the FD_Opt struct.
* fdesign/fd_main.c: add code to initialize FD_Opt::output_dir.
* fdesign/fd_forms.c (save_forms): pass fdopt.output_dir var to the
external converter if non-zero.
* fdesign/fd_printC.c (filename_only, build_fname): new helper functions
that use FD_Opt::output_dir if it is set.
(C_output): invoke build_fname rather than building the file name
itself.
2003-11-27 Angus Leeming <angus.leeming@btopenworld.com>
* demos/demotest_fd.[ch]:
* demos/demotest_fd.fd: removed. The routines were not invoked by
demotest (witness that it still links fine).
* demos/pmbrowse.c: split out the fdesign generated code.
Ensuing changes to use the fdesign generated code unchanged.
* demos/pmbrowse.fd: moved...
* demos/fd/pmbrowse_gui.[ch]:
* demos/fd/pmbrowse_gui.fd: to here.
* demos/Makefile.am:
* demos/fd/Makefile.am: ensuing changes.
2003-11-27 Angus Leeming <angus.leeming@btopenworld.com>
* image/image_gif.c (flush_buffer): do not pass 'incode'. Instead use
a local variable.
2003-11-26 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* fdesign/fd_forms.c (save_forms): do not try to remove twice ".fd"
from file name (avoids problem with path names containing a '.').
2003-11-25 Clive A Stubbings <xforms2@vjet.demon.co.uk>
* image/image_gif.c (flush_buffer): new static function, containing
code factored out of process_lzw_code.
(process_lzw_code): invoke flush_buffer where old code was in
process_lzw_code itself. In addition, also invoke flush_buffer
when cleaning up after an old-style gif image.
* image/image_jpeg.c (JPEG_identify): handle 'raw' JPEG images
without the JFIF header.
2003-11-26 Angus Leeming <angus.leeming@btopenworld.com>
* demos/boxtype.c: squash warning about uninitialized data.
2003-11-24 Angus Leeming <angus.leeming@btopenworld.com>
* fdesign/sp_menu.c (emit_menu_header): output properly initialized
C-code.
2003-11-20 Angus Leeming <angus.leeming@btopenworld.com>
* demos/Makefile.am: enable the conditional building of the demo
GL codes.
* demos/gl.c:
* demos/glwin.c: #include gl/glcanvas.h and so prevent warnings
about implicit function declarations.
2003-11-20 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* lib/local.h: do not define HAVE_KP_DEFINE
* lib/flinternal.h: test directly for X11 version here
* lib/forms.c (fl_keyboard):
* lib/flcolor.c (fl_mapcolor, fl_dump_state_info):
* lib/xpopup.c (fl_addtopup):
* lib/clock.c (draw_clock): use proper ML_xxx macros instead of
bogus names
2003-11-20 Angus Leeming <angus.leeming@btopenworld.com>
* lib/events.c:
* lib/fldraw.c:
* lib/forms.c:
* lib/xpopup.c:
* lib/xsupport.c:
* image/image_fits.c:
* image/image_gif.c:
* image/image_jpeg.c:
* image/image_replace.c:
* image/image_tiff.c:
* image/ps_core.c:
* image/ps_draw.c:
* image/image_fits.c:
* image/image_gif.c:
* image/image_jpeg.c:
* image/image_replace.c:
* image/image_tiff.c:
* image/image_xwd.c:
* image/ps_core.c:
* image/ps_draw.c:
* fdesign/fd_main.c: squash warnings about comparison of
signed and unsigned variables. Only 'safe' warnings have been squashed.
2003-11-20 Angus Leeming <angus.leeming@btopenworld.com>
* lib/flsnprintf.c: remove unused variable 'credits'.
* lib/flresource.c: remove line 'fl_context->xim;' as it is a
statement with no effect.
* lib/version.c: remove unused variable 'c'.
2003-11-20 Angus Leeming <angus.leeming@btopenworld.com>
* lib/flinternal.h: add declaration of fl_handle_form.
Squash warnings about implicit declaration of the function when
compiling lib/tabfolder.c.
* gl/glcanvas.h: remove #ifdef HAVE_GL_GLX_H guard.
Cruft from pre-autoconf days.
Squash warnings about implicit declaration of the function when
compiling gl/glcanvas.c
2003-11-20 Angus Leeming <angus.leeming@btopenworld.com>
* fd2ps/papers.c:
* fd2ps/pscol.c:
* fd2ps/psdraw.c:
* fdesign/fd_control.c:
* fdesign/fd_main.c:
* fdesign/fd_printC.c:
* fdesign/fd_spec.c:
* fdesign/sp_dial.c:
* image/image_marker.c:
* image/image_tiff.c:
* image/ps_core.c:
* lib/cursor.c:
* lib/flcolor.c: squash warnings about 'var may be uninitialized' when
compiling with gcc -W -Wall by explicitly initializing all parts of the
arrays in the above files.
2003-11-19 Angus Leeming <angus.leeming@btopenworld.com>
* autogen.sh: enable the use of autoconf 2.58.
2003-11-19 Angus Leeming <angus.leeming@btopenworld.com>
* lib/OS2 and all files therein: removed.
* lib/Makefile.am: remove mention of OS2.
* lib/Readme: removed.
* os2move.cmd: removed.
* gl/canvas.h: removed.
* gl/Makefile.am: remove canvas.h.
2003-11-19 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* lib/flinternal.h: remove obsolete comment
* config/xformsinclude.m4 (XFORMS_PATH_XPM): honor X_CFLAGS to
find xpm.h (should fix problem reported by Reed Riddle)
* README: update. In particular, the acknowledgement of copyright
has been removed since the code is not here anymore (and the
advertising clause is not needed anymore). Try to point to the new
nongnu.org site.
* Makefile.am (dist-hook): remove old leftover from LyX
(EXTRA_DIST): do not distribute xforms.spec, which
is generated at configure time
* lib/signal.c (default_signal_handler): fix typo
2003-11-14 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* config/config.guess:
* config/config.sub:
* config/libtool.m4:
* config/ltmain.sh: updated from libtool 1.4.3 (as distributed
with rh9)
* config/depcomp: updated from automake 1.4 (as distributed
with rh9)
2003-11-18 Angus Leeming <angus.leeming@btopenworld.com>
* xforms.spec.in: update the %doc list to reflect actuality.
2003-11-18 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* INSTALL: generic instructions from autoconf.
2003-10-03 Angus Leeming <angus.leeming@btopenworld.com>
Patch from Matthew Yaconis by way of
David Dembrow <ddembrow@nlxcorp.com>.
* lib/fselect.c: remove the arbitrary restriction on the display of
borderless forms.
* lib/tabfolder.c: display the tab forms correctly when using
bottom tab folders.
2003-11-13 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* config/common.am: do not set LIBS to an empty value
* image/Makefile.am (INCLUDES):
* lib/Makefile.am (INCLUDES): honor X_CFLAGS
* demos/Makefile.am:
* fdesign/Makefile.am:
* fd2ps/Makefile.am: use $(foo) form instead of @foo@ for
variables references. Honor X_LIBS, X_PRE_LIBS and X_EXTRA_LIBS.
2003-09-10 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* Makefile.am: only build the gl/ directory if required
* configure.ac: simplify handling of --enable-demos. Add support
for --disable-gl option; gl support is only compiled in if
GL/glx.h is found
2003-09-09 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* config/xformsinclude.m4 (XFORMS_CHECK_LIB_JPEG): no need to link
against the X11 libs...
* configure.ac: remove lots of checks for headers and functions.
We only keep the ones that were already tested for in the old
source (although we do not know whether they are still useful).
* lib/asyn_io.c: use HAVE_SYS_SELECT_H
2003-09-09 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* Makefile.am: only build demos/ directory if required
* configure.ac: add --enable-demos option
2003-09-09 Angus Leeming <angus.leeming@btopenworld.com>
* lib/forms.c (fl_keyboard): pass it the event to allow it to
distinguish between KeyPress and KeyRelease events.
(dispatch_key): new function, factored out of do_keyboard.
(do_keyboard): Handles KeyRelease events correctly. The KeyPress keysym
is stored and then dispatched on KeyRelease also, since
XmbLookupString is undefined on a KeyRelease event.
2003-09-05 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* lib/version.c (fl_print_version): remove workaround for XENIX
* lib/local.h: remove NO_SOCK (who wants to support old SCO anyway?)
2003-07-31 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* lib/local.h (FL_SIGRET):
* lib/signal.c (default_signal_handler): use RETSIGTYPE instead of
FL_SIG_RET
* lib/errmsg.c (fl_get_syserror_msg): use HAVE_STRERROR
* lib/sysdep.c (fl_msleep): use HAVE_USLEEP
* lib/local.h: remove variables DONT_HAVE_USLEEP,
DONT_HAVE_STRERROR, NO_CONST (handled by AC_C_CONST),
FL_SIGRET_IS_VOID, FL_SIGRET
* configure.ac: check for usleep too
2003-05-23 Angus Leeming <angus.leeming@btopenworld.com>
* image/rgb_db.c: follow Rouben Rostamian's advice and remove all the
helper functions that were used to ascertain the name of the RGB color
before he rewrote fl_lookup_RGBcolor.
* flimage.h: add a comment to the declaration of fl_init_RGBdatabase
that is does nothing and is retained for compatibility only.
2003-05-23 Angus Leeming <angus.leeming@btopenworld.com>
* lib/include/Basic.h: remove declarations of functions
fl_init_RGBdatabase and fl_lookup_RGBcolor as they are part of
libflimage, not libforms.
2003-05-30 Angus Leeming <angus.leeming@btopenworld.com>
* Changes: renamed as NEWS.
* COPYING: renamed as COPYING.LIB.
* 00README: renamed as README.
2003-05-22 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* lib/include/Makefile.am: make sure that forms.h is not distributed
2003-05-21 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* configure.ac: do not set VERSION explicitely, this is done in
XFORMS_SET_VERSION.
* config/xformsinclude.m4 (XFORMS_SET_VERSION): simplify a tiny bit
2003-05-22 Rouben Rostamian <rostamian@umbc.edu>
* image/rgb_db.c (fl_lookup_RGBcolor): this function fell off the
dist at 1.0pre3. Now it is back again with a shiny new, more efficient
implementation.
2003-05-05 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* lib/include/Makefile.am (forms.h): create forms.h using the
target stamp-forms, so that it remains untouched when AAA.h is
regenerated but did not change.
* lib/include/AAA.h.in: new file. This is the template from which
AAA.h is generated
* lib/include/.cvsignore: add AAA.h
* configure.ac: call XFORMS_SET_VERSION; generate AAA.h from AAA.h.in
* config/xformsinclude.m4 (XFORMS_SET_VERSION): new macro, which
sets the VERSION string for xforms
(XFORMS_CHECK_VERSION): simplify a bit
2003-04-24 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* image/image_fits.c (Bad_bpp): use abs() and not fabs(), since
bpp is an int
2003-04-24 Angus Leeming <angus.leeming@btopenworld.com>
Migrate from imake to autoconf/automake.
* Imakefile:
* Imakefile.os2:
* demos/Imakefile:
* demos/Imakefile.os2:
* fd2ps/Imakefile:
* fd2ps/Imakefile.os2:
* fdesign/Imakefile:
* fdesign/Imakefile.os2:
* fdesign/Imakefile.xxx:
* gl/Imakefile:
* image/Imakefile:
* lib/Imakefile:
* lib/Imakefile.os2:
* lib/OS2/Imakefile.os2:
* lib/include/Imakefile: removed.
* autogen.sh:
* configure.ac:
* config/.cvsignore:
* config/common.am:
* config/config.guess:
* config/config.sub:
* config/cygwin.m4:
* config/depcomp:
* config/libtool.m4:
* config/ltmain.sh:
* config/xformsinclude.m4: Here be magic ;-)
* Makefile.am:
* config/Makefile.am:
* demos/Makefile.am:
* demos/fd/Makefile.am:
* fd2ps/Makefile.am:
* fd2ps/test/Makefile.am:
* fdesign/Makefile.am:
* fdesign/fd/Makefile.am:
* fdesign/fd4test/Makefile.am:
* fdesign/notes/Makefile.am:
* fdesign/spec/Makefile.am:
* fdesign/xpm/Makefile.am:
* gl/Makefile.am:
* image/Makefile.am:
* lib/Makefile.am:
* lib/OS2/Makefile.am:
* lib/bitmaps/Makefile.am:
* lib/fd/Makefile.am:
* lib/include/Makefile.am:
* lib/private/Makefile.am: added.
* xforms.spec.in: the RPM spec file.
* lib/local.h: make use of the HAVE_STRCASECMP preprocessor variable.
* lib/pixmap.c: use XPM_H_LOCATION instead of pre-processor stuff.
* demos/demotest.c: define the callback.
* fd2ps/sys.c: use preprocessor variable HAVE_STRCASECMP rather than
NO_STRCASECMP.
* fd2ps/sys.h: now redundant, so remove it.
* fd2ps/fd2ps.h:
* fd2ps/sys.c: remove #include "sys.h"
* gl/canvas.h:
* gl/glcanvas.h: make use of HAVE_GL_GLX_H preprocessor variable.
2003-04-24 Angus Leeming <angus.leeming@btopenworld.com>
* lib/tabfolder.c (handle): ensure that we have an active folder
before trying to manipulate its contents.
2003-04-24 Angus Leeming <angus.leeming@btopenworld.com>
* lib/include/Imakefile: do not copy the generated forms.h to ../.
* lib/Imakefile: remove the targets to install forms.h.
* pretty well all .c files: change #include "forms.h" to
#include "include/forms.h".
2003-04-22 Angus Leeming <angus.leeming@btopenworld.com>
* fd2ps/sys.h: remove #define NO_STRDUP and FL_SIGRET as they aren't
used.
2003-04-22 Angus Leeming <angus.leeming@btopenworld.com>
* */*.c: ensure that config.h is #included if the HAVE_CONFIG_H
preprocessor variable is set.
2003-04-22 Angus Leeming <angus.leeming@btopenworld.com>
* lib/include/zzz.h: remove the #include "flinternal.h" line whose
inclusion depends on the MAKING_FORMS preprocessor variable.
* lib/forms.h:
* lib/include/forms.h: regenerated.
2003-04-20 Angus Leeming <angus.leeming@btopenworld.com>
* demos/wwwl.c: #include "private/flsnprintf.h".
2003-04-20 Angus Leeming <angus.leeming@btopenworld.com>
* lib/private/flsnprintf.h: use #defines to prevent needless
fl_snprintf bloat.
* lib/flsnprintf.c: prepend portable_v?snprintf with "fl_" to prevent
name clashes with other software. Make these functions globally
accessible.
Importantly, #if 0...#endif a block that prevents the code from
linking correctly on the DEC.
2003-04-20 Angus Leeming <angus.leeming@btopenworld.com>
* image/image.c:
* lib/errmsg.c: no need to check for fl_vsnprintf anymore.
2003-04-20 Angus Leeming <angus.leeming@btopenworld.com>
* demos/Imakefile:
* fd2ps/Imakefile:
* fdesign/Imakefile:
* gl/Imakefile:
* image/Imakefile:
* lib/Imakefile: pass the expected -DHAVE_SNPRINTF options to the
compiler.
2003-04-17 Angus Leeming <angus.leeming@btopenworld.com>
Make fl_snprintf private.
* lib/include/flsnprintf.h: moved to lib/private/flsnprintf.h.
* lib/include/Imakefile: remove flsnprintf.h.
* lib/forms.h:
* lib/include/forms.h: regenerated.
* fdesign/fd_attribs.c:
* image/image.c:
* image/image_io_filter.c:
* image/image_postscript.c:
* lib/choice.c:
* lib/cmd_br.c:
* lib/events.c:
* lib/flresource.c:
* lib/fselect.c:
* lib/goodie_alert.c:
* lib/goodie_choice.c:
* lib/goodie_msg.c:
* lib/goodie_salert.c:
* lib/version.c:
* lib/xpopup.c: add #include "private/flsnprintf.h".
2003-04-17 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* lib/Imakefile (EXTRA_INCLUDES): add $(XPMINC)
2003-04-17 Angus Leeming <angus.leeming@btopenworld.com>
* demos/Imakefile:
* fd2ps/Imakefile:
* fdesign/Imakefile:
* gl/Imakefile:
* image/Imakefile:
* lib/Imakefile: don't pass -Iprivate to the complier.
* fdesign/fd_super.c:
* fdesign/sp_browser.c:
* fdesign/sp_choice.c:
* fdesign/sp_counter.c:
* fdesign/sp_dial.c:
* fdesign/sp_menu.c:
* fdesign/sp_positioner.c:
* fdesign/sp_xyplot.c:
* image/image_postscript.c:
* image/postscript.c:
* image/ps_core.c:
* image/ps_draw.c:
* image/ps_text.c:
* lib/browser.c:
* lib/canvas.c:
* lib/choice.c:
* lib/counter.c:
* lib/dial.c:
* lib/flinternal.h:
* lib/formbrowser.c:
* lib/menu.c:
* lib/objects.c:
* lib/positioner.c:
* lib/scrollbar.c:
* lib/sldraw.c:
* lib/slider.c:
* lib/textbox.c:
* lib/thumbwheel.c:
* lib/valuator.c:
* lib/xyplot.c: associated changes to the #include directives.
2003-04-17 Angus Leeming <angus.leeming@btopenworld.com>
* lib/xforms.5: renamed as xforms.man. This probably breaks the
installation, but that is all slated for change anyway.
2003-04-17 Angus Leeming <angus.leeming@btopenworld.com>
* demos/Imakefile: do not -Ifd when compiling.
* demos/Imakefile:
* demos/buttonall.c:
* demos/demotest.c:
* demos/dirlist.c:
* demos/folder.c:
* demos/formbrowser.c:
* demos/ibrowser.c:
* demos/inputall.c:
* demos/itest.c:
* demos/scrollbar.c:
* demos/thumbwheel.c: associated changes.
* demos/.cvsignore: add all the generated executables.
2003-04-17 Angus Leeming <angus.leeming@btopenworld.com>
* lib/include/canvas.h: cruft removal. Don't mention glcanvas.h
here in case the user does not want GL support.
* lib/include/forms.h
* lib/forms.h: regenerated.
* gl/glcanvas.c: include glcanvas.h as this is no longer in forms.h
2003-04-16 Angus Leeming <angus.leeming@btopenworld.com>
Remove the SNP directory and replace it with a single file,
flsnprintf.c. Invoke snprintf through a wrapper fl_snprintf.
* Imakefile: remove SUBDIR snp.
* lib/flsnprintf.c, lib/include/flsnprintf.h: new files.
* lib/include/Imakefile: add flsnprintf.h to the files used to
generated forms.h.
* lib/forms.h
* lib/include/forms.h: regenerated.
* lib/Imakefile: add flsnprintf.c.
Pass -DHAVE_SNPRINTF as a compiler option.
* lib/local.h: remove HAVE_SNPRINTF stuff.
* demos/Imakefile:
* fd2ps/Imakefile:
* fdesign/Imakefile:
* gl/Imakefile:
* image/Imakefile:
pass -DHAVE_SNPRINTF as a compiler option. Remove other SNP stuff.
* demos/wwwl.c:
* fdesign/fd_attribs.c:
* image/image.c:
* image/image_io_filter.c:
* image/image_postscript.c:
* lib/choice.c:
* lib/cmd_br.c:
* lib/errmsg.c:
* lib/events.c:
* lib/flresource.c:
* lib/fselect.c:
* lib/goodie_alert.c:
* lib/goodie_choice.c:
* lib/goodie_msg.c:
* lib/goodie_salert.c:
* lib/version.c:
* lib/xpopup.c:
s/\(v*snprintf\)/fl_\1/
* snp/*: all files removed.
2003-04-15 Angus Leeming <angus.leeming@btopenworld.com>
* lots of files: reduce the amount of magic includes of header files
and therefore include flinternal.h explicitly much more.
2003-04-15 Angus Leeming <angus.leeming@btopenworld.com>
* .cvsignore:
* demos/.cvsignore:
* fd2ps/.cvsignore:
* fdesign/.cvsignore:
* gl/.cvsignore:
* image/.cvsignore:
* libs/.cvsignore:
* libs/include/.cvsignore: prepare the way for autoconf/automake.
2003-04-10 Angus Leeming <angus.leeming@btopenworld.com>
* lib/include/Basic.h: add FL_RESIZED to the FL_EVENTS enum.
* lib/include/AAA.h: up FL_FIXLEVEL to 2 to reflect this.
* lib/forms.h:
* lib/include/forms.h: regenerated.
* lib/forms.c (scale_form): pass event FL_RESIZED to the object handler
if the object size is changed.
* lib/tabfolder.c (handle): handle the FL_RESIZED event to ensure
that the currently active folder is resized.
2003-04-10 Angus Leeming <angus.leeming@btopenworld.com>
* lib/version.c (fl_print_version, fl_library_version): use
FL_VERSION, FL_REVISION rather than RCS stuff.
2003-04-10 Angus Leeming <angus.leeming@btopenworld.com>
* most files: Remove all the RCS strings from the header files
and about half of 'em from the .c files.
2003-04-10 John Levon <moz@compsoc.man.ac.uk>
* lib/pixmap.c (init_xpm_attributes): "fix" XPixmaps containing
colour "opaque".
2003-04-09 Angus Leeming <angus.leeming@btopenworld.com>
* demos/.cvsignore:
* snp/.cvsignore: Ignore Makefile*
2003-04-09 Angus Leeming <angus.leeming@btopenworld.com>
Move tabfolder-specific code out of forms.c and allow individual
FL_OBJECTs to respond to such events. Means that the library
becomes extensible to new, user-defined widgets once again.
* lib/include/Basic.h: add FL_MOVEORIGIN to the FL_EVENTS enum.
* lib/forms.h:
* lib/include/forms.h: regenerated automatically.
* lib/forms.c (fl_handle_form): no longer a static function.
Dispatch FL_MOVEORIGIN events to the form's constituent objects.
(fl_get_tabfolder_origin): removed. Functionality moved into
tabfolder.c.
(do_interaction_step): no longer call fl_get_tabfolder_origin. Instead,
dispatch a call to fl_handle_form(form, FL_MOVEORIGIN, ...).
* lib/tabfolder.c (handle): add FL_MOVEORIGIN to the event switch.
Update the x,y absolute coords of the active_folder and dispatch
a call to fl_handle_form(active_folder, FL_MOVEORIGIN, ...) to
ensure that the x,y absolute coords of nested tabfolders are also
updated.
2003-04-09 Jean-Marc Lasgouttes <lasgouttes@lyx.org>
* image/Imakefile (EXTRA_INCLUDES): change the order of includes,
to avoid that an older installed forms.h is used instead of the
fresh one
2003-04-09 Angus Leeming <angus.leeming@btopenworld.com>
* lib/objects.c (hide_tooltip): renamed as checked_hide_tooltip.
(unconditional_hide_tooltip): new static helper function,
invoked within fl_handle_it on FL_KEYPRESS and FL_PUSH events.
* lib/include/AAA.h: up-ed FL_FIXLEVEL to 1 to reflect the changes
made above.
* lib/forms.h: regenerated to reflect changed FL_FIXLEVEL.
* version.c (version): update to reflect this also.
2003-04-08 Angus Leeming <angus.leeming@btopenworld.com>
Enable tooltips to be shown correctly in "composite" widgets
such as the browser.
* lib/objects.c (get_parent): new static helper function. Given an
FL_OBJECT*, returns its parent FL_OBJECT.
(tooltip_handler): rewritten to show the tooltip that is stored
by the parent FL_OBJECT.
(hide_tooltip): new static helper function: on leaving an FL_OBJECT,
only hide the tooltip if we have also left the bounds of the parent
FL_OBJECT.
(fl_handle_it): make use of these new functions to show and hide
tooltips.
2003-04-08 Angus Leeming <angus.leeming@btopenworld.com>
* image/image_rotate.c (flimage_rotate): enable the rotation of
grayscale images by 90 degree multiples and more generally prevent
other unsupported image types from crashing xforms.
* lib/flresource.c (fl_initialize): clean-up properly if we fail to
create input contexts or methods.
* lib/textbox.c (handle_textbox):
* lib/thumbwheel.c (handle):
* lib/util.c (flevent): FL_KEYBOARD has been replaced by FL_KEYPRESS.
The former is retained for compatability, but the latter should be
used internally.
|
https://sources.debian.org/src/libforms/1.0.93sp1-2/ChangeLog/
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
flutter_feather_icons 1.0.3
Feather is a collection of simply beautiful open source icons. Each icon is designed on a 24x24 grid with an emphasis on simplicity, consistency and usability.
flutter_feather_icons v1.0.3 #
See Catalog #
Important Note #
Naming conventions have been changed for better readability and consistancy with all other flutter icon packs
To convert from the catalog simply follow this method
alert-circle => alertCircle arrow-down-left => arrowDownLeft
if still you face any problems have a look into the documentation (class FeatherIcons)
280 General Purpose Icons for Flutter #
This flutter package allows you to use all the Feather Icons made by Feather Team
Find it at pub.dartlang.org
Installation #
In the
dependencies: section of your pubspec.yaml, add the following line:
flutter_feather_icons: <latest_version>
Usage #
import "package:flutter_feather_icons/flutter_feather_icons.dart"; class MyAwesomeWidget extends StatelessWidget { Widget build(BuildContext context) { return IconButton( icon: Icon(FeatherIcons.github), onPressed: () { print("awesome platform to share code and ideas"); } ); } }
Example #
View the flutter app in the
example directory
Screenshot #
|
https://pub.dev/packages/flutter_feather_icons
|
CC-MAIN-2020-34
|
en
|
refinedweb
|
Awesome Django authorization, without the database
Project description
rules is a tiny but powerful app providing object-level permissions to Django, without requiring a database. At its core, it is a generic framework for building rule-based systems, similar to decision trees. It can also be used as a standalone library in other contexts and frameworks.
Features
rules has got you covered. rules is:
- Documented, tested, reliable and easy to use.
- Versatile. Decorate callables to build complex graphs of predicates. Predicates can be any type of callable – simple functions, lambdas, methods, callable class objects, partial functions, decorated functions, anything really.
- A good Django citizen. Seamless integration with Django views, templates and the Admin for testing for object-level permissions.
- Efficient and smart. No need to mess around with a database to figure out whether John really wrote that book.
- Simple. Dive in the code. You’ll need 10 minutes to figure out how it works.
- Powerful. rules comes complete with advanced features, such as invocation context and storage for arbitrary data, skipping evaluation of predicates under specific conditions, logging of evaluated predicates and more!
Table of Contents
- Requirements
- Upgrading from 2.x
- Upgrading from 1.x
- How to install
- Using Rules
- Using Rules with Django
- Advanced features
- Best practices
- API Reference
- Licence
Requirements
rules requires Python 3.7 or newer. The last version to support Python 2.7 is rules 2.2. It can optionally integrate with Django, in which case requires Django 2.2 or newer.
Note: At any given moment in time, rules will maintain support for all currently supported Django versions, while dropping support for those versions that reached end-of-life in minor releases. See the Supported Versions section on Django Project website for the current state and timeline.
Upgrading from 2.x
The are no significant changes between rules 2.x and 3.x except dropping support for Python 2, so before upgrading to 3.x you just need to make sure you’re running a supported Python 3 version.
Upgrading from 1.x
- Support for Python 2.6 and 3.3, and Django versions before 1.11 has been dropped.
- The SkipPredicate exception and skip() method of Predicate, that were used to signify that a predicate should be skipped, have been removed. You may return None from your predicate to achieve this.
- The APIs to replace a rule’s predicate have been renamed and their behaviour changed. replace_rule and replace_perm functions and replace_rule method of RuleSet have been renamed to set_rule, set_perm and RuleSet.set_perm respectively. The old behaviour was to raise a KeyError if a rule by the given name did not exist. Since version 2.0 this has changed and you can safely use set_* to set a rule’s predicate without having to ensure the rule exists first.
How to install
Using pip:
$ pip install rules
Manually:
$ git clone $ cd django-rules $ python setup.py install
Run tests with:
$ ./runtests.sh
You may also want to read Best practices for general advice on how to use rules.
Configuring Django
Add rules to INSTALLED_APPS:
INSTALLED_APPS = ( # ... 'rules', )
Add the authentication backend:
AUTHENTICATION_BACKENDS = ( 'rules.permissions.ObjectPermissionBackend', 'django.contrib.auth.backends.ModelBackend', )
Using Rules
rules is based on the idea that you maintain a dict-like object that maps string keys used as identifiers of some kind, to callables, called predicates. This dict-like object is actually an instance of RuleSet and the predicates are instances of Predicate.
Creating predicates
Let’s ignore rule sets for a moment and go ahead and define a predicate. The easiest way is with the @predicate decorator:
>>> @rules.predicate >>> def is_book_author(user, book): ... return book.author == user ... >>> is_book_author <Predicate:is_book_author object at 0x10eeaa490>
This predicate will return True if the book’s author is the given user, False otherwise.
Predicates can be created from any callable that accepts anything from zero to two positional arguments:
- fn(obj, target)
- fn(obj)
- fn()
This is their generic form. If seen from the perspective of authorization in Django, the equivalent signatures are:
- fn(user, obj)
- fn(user)
- fn()
Predicates can do pretty much anything with the given arguments, but must always return True if the condition they check is true, False otherwise. rules comes with several predefined predicates that you may read about later on in API Reference, that are mostly useful when dealing with authorization in Django.
Setting up rules
Let’s pretend that we want to let authors edit or delete their books, but not books written by other authors. So, essentially, what determines whether an author can edit or can delete a given book is whether they are its author.
In rules, such requirements are modelled as rules. A rule is a map of a unique identifier (eg. “can edit”) to a predicate. Rules are grouped together into a rule set. rules has two predefined rule sets:
- A default rule set storing shared rules.
- Another rule set storing rules that serve as permissions in a Django context.
So, let’s define our first couple of rules, adding them to the shared rule set. We can use the is_book_author predicate we defined earlier:
>>> rules.add_rule('can_edit_book', is_book_author) >>> rules.add_rule('can_delete_book', is_book_author)
Assuming we’ve got some data, we can now test our rules:
>>> from django.contrib.auth.models import User >>> from books.models import Book >>> guidetodjango = Book.objects.get(isbn='978-1-4302-1936-1') >>> guidetodjango.author <User: adrian> >>> adrian = User.objects.get(username='adrian') >>> rules.test_rule('can_edit_book', adrian, guidetodjango) True >>> rules.test_rule('can_delete_book', adrian, guidetodjango) True
Nice… but not awesome.
Combining predicates
Predicates by themselves are not so useful – not more useful than any other function would be. Predicates, however, can be combined using binary operators to create more complex ones. Predicates support the following operators:
- P1 & P2: Returns a new predicate that returns True if both predicates return True, otherwise False. If P1 returns False, P2 will not be evaluated.
- P1 | P2: Returns a new predicate that returns True if any of the predicates returns True, otherwise False. If P1 returns True, P2 will not be evaluated.
- P1 ^ P2: Returns a new predicate that returns True if one of the predicates returns True and the other returns False, otherwise False.
- ~P: Returns a new predicate that returns the negated result of the original predicate.
Suppose the requirement for allowing a user to edit a given book was for them to be either the book’s author, or a member of the “editors” group. Allowing users to delete a book should still be determined by whether the user is the book’s author.
With rules that’s easy to implement. We’d have to define another predicate, that would return True if the given user is a member of the “editors” group, False otherwise. The built-in is_group_member factory will come in handy:
>>> is_editor = rules.is_group_member('editors') >>> is_editor <Predicate:is_group_member:editors object at 0x10eee1350>
We could combine it with the is_book_author predicate to create a new one that checks for either condition:
>>> is_book_author_or_editor = is_book_author | is_editor >>> is_book_author_or_editor <Predicate:(is_book_author | is_group_member:editors) object at 0x10eee1390>
We can now update our can_edit_book rule:
>>> rules.set_rule('can_edit_book', is_book_author_or_editor) >>> rules.test_rule('can_edit_book', adrian, guidetodjango) True >>> rules.test_rule('can_delete_book', adrian, guidetodjango) True
Let’s see what happens with another user:
>>> martin = User.objects.get(username='martin') >>> list(martin.groups.values_list('name', flat=True)) ['editors'] >>> rules.test_rule('can_edit_book', martin, guidetodjango) True >>> rules.test_rule('can_delete_book', martin, guidetodjango) False
Awesome.
So far, we’ve only used the underlying, generic framework for defining and testing rules. This layer is not at all specific to Django; it may be used in any context. There’s actually no import of anything Django-related in the whole app (except in the rules.templatetags module). rules however can integrate tightly with Django to provide authorization.
Using Rules with Django
rules is able to provide object-level permissions in Django. It comes with an authorization backend and a couple template tags for use in your templates.
Permissions
In rules, permissions are a specialised type of rules. You still define rules by creating and combining predicates. These rules however, must be added to a permissions-specific rule set that comes with rules so that they can be picked up by the rules authorization backend.
Creating permissions
The convention for naming permissions in Django is app_label.action_object, and we like to adhere to that. Let’s add rules for the books.change_book and books.delete_book permissions:
>>> rules.add_perm('books.change_book', is_book_author | is_editor) >>> rules.add_perm('books.delete_book', is_book_author)
See the difference in the API? add_perm adds to a permissions-specific rule set, whereas add_rule adds to a default shared rule set. It’s important to know however, that these two rule sets are separate, meaning that adding a rule in one does not make it available to the other.
Checking for permission
Let’s go ahead and check whether adrian has change permission to the guidetodjango book:
>>> adrian.has_perm('books.change_book', guidetodjango) False
When you call the User.has_perm method, Django asks each backend in settings.AUTHENTICATION_BACKENDS whether a user has the given permission for the object. When queried for object permissions, Django’s default authentication backend always returns False. rules comes with an authorization backend, that is able to provide object-level permissions by looking into the permissions-specific rule set.
Let’s add the rules authorization backend in settings:
AUTHENTICATION_BACKENDS = ( 'rules.permissions.ObjectPermissionBackend', 'django.contrib.auth.backends.ModelBackend', )
Now, checking again gives adrian the required permissions:
>>> adrian.has_perm('books.change_book', guidetodjango) True >>> adrian.has_perm('books.delete_book', guidetodjango) True >>> martin.has_perm('books.change_book', guidetodjango) True >>> martin.has_perm('books.delete_book', guidetodjango) False
NOTE: Calling has_perm on a superuser will ALWAYS return True.
Permissions in models
NOTE: The features described in this section work on Python 3+ only.
It is common to have a set of permissions for a model, like what Django offers with its default model permissions (such as add, change etc.). When using rules as the permission checking backend, you can declare object-level permissions for any model in a similar way, using a new Meta option.
First, you need to switch your model’s base and metaclass to the slightly extended versions provided in rules.contrib.models. There are several classes and mixins you can use, depending on whether you’re already using a custom base and/or metaclass for your models or not. The extensions are very slim and don’t affect the models’ behavior in any way other than making it register permissions.
If you’re using the stock django.db.models.Model as base for your models, simply switch over to RulesModel and you’re good to go.
If you already have a custom base class adding common functionality to your models, add RulesModelMixin to the classes it inherits from and set RulesModelBase as its metaclass, like so:
from django.db.models import Model from rules.contrib.models import RulesModelBase, RulesModelMixin class MyModel(RulesModelMixin, Model, metaclass=RulesModelBase): ...
If you’re using a custom metaclass for your models, you’ll already know how to make it inherit from RulesModelBaseMixin yourself.
Then, create your models like so, assuming you’re using RulesModel as base directly:
import rules from rules.contrib.models import RulesModel class Book(RulesModel): class Meta: rules_permissions = { "add": rules.is_staff, "read": rules.is_authenticated, }
This would be equivalent to the following calls:
rules.add_perm("app_label.add_book", rules.is_staff) rules.add_perm("app_label.read_book", rules.is_authenticated)
There are methods in RulesModelMixin that you can overwrite in order to customize how a model’s permissions are registered. See the documented source code for details if you need this.
Of special interest is the get_perm classmethod of RulesModelMixin, which can be used to convert a permission type to the corresponding full permission name. If you need to query for some type of permission on a given model programmatically, this is handy:
if user.has_perm(Book.get_perm("read")): ...
Permissions in views
rules comes with a set of view decorators to help you enforce authorization in your views.
Using the function-based view decorator
For function-based views you can use the permission_required decorator:
from django.shortcuts import get_object_or_404 from rules.contrib.views import permission_required from posts.models import Post def get_post_by_pk(request, post_id): return get_object_or_404(Post, pk=post_id) @permission_required('posts.change_post', fn=get_post_by_pk) def post_update(request, post_id): # ...
Usage is straight-forward, but there’s one thing in the example above that stands out and this is the get_post_by_pk function. This function, given the current request and all arguments passed to the view, is responsible for fetching and returning the object to check permissions against – i.e. the Post instance with PK equal to the given post_id in the example. This specific use-case is quite common so, to save you some typing, rules comes with a generic helper function that you can use to do this declaratively. The example below is equivalent to the one above:
from rules.contrib.views import permission_required, objectgetter from posts.models import Post @permission_required('posts.change_post', fn=objectgetter(Post, 'post_id')) def post_update(request, post_id): # ...
For more information on the decorator and helper function, refer to the rules.contrib.views module.
Using the class-based view mixin
Django includes a set of access mixins that you can use in your class-based views to enforce authorization. rules extends this framework to provide object-level permissions via a mixin, PermissionRequiredMixin.
The following example will automatically test for permission against the instance returned by the view’s get_object method:
from django.views.generic.edit import UpdateView from rules.contrib.views import PermissionRequiredMixin from posts.models import Post class PostUpdate(PermissionRequiredMixin, UpdateView): model = Post permission_required = 'posts.change_post'
You can customise the object either by overriding get_object or get_permission_object.
For more information refer to the Django documentation and the rules.contrib.views module.
Checking permission automatically based on view type
If you use the mechanisms provided by rules.contrib.models to register permissions for your models as described in Permissions in models, there’s another convenient mixin for class-based views available for you.
rules.contrib.views.AutoPermissionRequiredMixin can recognize the type of view it’s used with and check for the corresponding permission automatically.
This example view would, without any further configuration, automatically check for the "posts.change_post" permission, given that the app label is "posts":
from django.views.generic import UpdateView from rules.contrib.views import AutoPermissionRequiredMixin from posts.models import Post class UpdatePostView(AutoPermissionRequiredMixin, UpdateView): model = Post
By default, the generic CRUD views from django.views.generic are mapped to the native Django permission types (add, change, delete and view). However, the pre-defined mappings can be extended, changed or replaced altogether when subclassing AutoPermissionRequiredMixin. See the fully documented source code for details on how to do that properly.
Permissions and rules in templates
rules comes with two template tags to allow you to test for rules and permissions in templates.
Add rules to your INSTALLED_APPS:
INSTALLED_APPS = ( # ... 'rules', )
Then, in your template:
{% load rules %} {% has_perm 'books.change_book' author book as can_edit_book %} {% if can_edit_book %} ... {% endif %} {% test_rule 'has_super_feature' user as has_super_feature %} {% if has_super_feature %} ... {% endif %}
Permissions in the Admin
If you’ve setup rules to be used with permissions in Django, you’re almost set to also use rules to authorize any add/change/delete actions in the Admin. The Admin asks for four different permissions, depending on action:
- <app_label>.add_<modelname>
- <app_label>.view_<modelname>
- <app_label>.change_<modelname>
- <app_label>.delete_<modelname>
- <app_label>
Note: view permission is new in Django v2.1 and should not be added in versions before that.
The first four are obvious. The fifth is the required permission for an app to be displayed in the Admin’s “dashboard”. Overriding it does not restrict access to the add, change or delete views. Here’s some rules for our imaginary books app as an example:
>>> rules.add_perm('books', rules.always_allow) >>> rules.add_perm('books.add_book', is_staff) >>> rules.add_perm('books.view_book', is_staff | has_secret_access_code) >>> rules.add_perm('books.change_book', is_staff) >>> rules.add_perm('books.delete_book', is_staff)
Django Admin does not support object-permissions, in the sense that it will never ask for permission to perform an action on an object, only whether a user is allowed to act on (any) instances of a model.
If you’d like to tell Django whether a user has permissions on a specific object, you’d have to override the following methods of a model’s ModelAdmin:
- has_view_permission(user, obj=None)
- has_change_permission(user, obj=None)
- has_delete_permission(user, obj=None)
rules comes with a custom ModelAdmin subclass, rules.contrib.admin.ObjectPermissionsModelAdmin, that overrides these methods to pass on the edited model instance to the authorization backends, thus enabling permissions per object in the Admin:
# books/admin.py from django.contrib import admin from rules.contrib.admin import ObjectPermissionsModelAdmin from .models import Book class BookAdmin(ObjectPermissionsModelAdmin): pass admin.site.register(Book, BookAdmin)
Now this allows you to specify permissions like this:
>>> rules.add_perm('books', rules.always_allow) >>> rules.add_perm('books.add_book', has_author_profile) >>> rules.add_perm('books.change_book', is_book_author_or_editor) >>> rules.add_perm('books.delete_book', is_book_author)
To preserve backwards compatibility, Django will ask for either view or change permission. For maximum flexibility, rules behaves subtly different: rules will ask for the change permission if and only if no rule exists for the view permission.
Permissions in Django Rest Framework
Similar to rules.contrib.views.AutoPermissionRequiredMixin, there is a rules.contrib.rest_framework.AutoPermissionViewSetMixin for viewsets in Django Rest Framework. The difference is that it doesn’t derive permission from the type of view but from the API action (create, retrieve etc.) that’s tried to be performed. Of course, it also requires you to declare your models as described in Permissions in models.
Here is a possible ModelViewSet for the Post model with fully automated CRUD permission checking:
from rest_framework.serializers import ModelSerializer from rest_framework.viewsets import ModelViewSet from rules.contrib.rest_framework import AutoPermissionViewSetMixin from posts.models import Post class PostSerializer(ModelSerializer): class Meta: model = Post fields = "__all__" class PostViewSet(AutoPermissionViewSetMixin, ModelViewSet): queryset = Post.objects.all() serializer_class = PostSerializer
By default, the CRUD actions of ModelViewSet are mapped to the native Django permission types (add, change, delete and view). The list action has no permission checking enabled. However, the pre-defined mappings can be extended, changed or replaced altogether when using (or subclassing) AutoPermissionViewSetMixin. Custom API actions defined via the @action decorator may then be mapped as well. See the fully documented source code for details on how to properly customize the default behavior.
Advanced features
Custom rule sets
You may create as many rule sets as you need:
>>> features = rules.RuleSet()
And manipulate them by adding, removing, querying and testing rules:
>>> features.rule_exists('has_super_feature') False >>> is_special_user = rules.is_group_member('special') >>> features.add_rule('has_super_feature', is_special_user) >>> 'has_super_feature' in features True >>> features['has_super_feature'] <Predicate:is_group_member:special object at 0x10eeaa500> >>> features.test_rule('has_super_feature', adrian) True >>> features.remove_rule('has_super_feature')
Note however that custom rule sets are not available in Django templates – you need to provide integration yourself.
Invocation context
A new context is created as a result of invoking Predicate.test() and is only valid for the duration of the invocation. A context is a simple dict that you can use to store arbitrary data, (eg. caching computed values, setting flags, etc.), that can be used by predicates later on in the chain. Inside a predicate function it can be used like so:
>>> @predicate ... def mypred(a, b): ... value = compute_expensive_value(a) ... mypred.context['value'] = value ... return True
Other predicates can later use stored values:
>>> @predicate ... def myotherpred(a, b): ... value = myotherpred.context.get('value') ... if value is not None: ... return do_something_with_value(value) ... else: ... return do_something_without_value()
Predicate.context provides a single args attribute that contains the arguments as given to test() at the beginning of the invocation.
Binding “self”
In a predicate’s function body, you can refer to the predicate instance itself by its name, eg. is_book_author. Passing bind=True as a keyword argument to the predicate decorator will let you refer to the predicate with self, which is more convenient. Binding self is just syntactic sugar. As a matter of fact, the following two are equivalent:
>>> @predicate ... def is_book_author(user, book): ... if is_book_author.context.args: ... return user == book.author ... return False >>> @predicate(bind=True) ... def is_book_author(self, user, book): ... if self.context.args: ... return user == book.author ... return False
Skipping predicates
You may skip evaluation by returning None from your predicate:
>>> @predicate(bind=True) ... def is_book_author(self, user, book): ... if len(self.context.args) > 1: ... return user == book.author ... else: ... return None
Returning None signifies that the predicate need not be evaluated, thus leaving the predicate result up to that point unchanged.
Logging predicate evaluation
rules can optionally be configured to log debug information as rules are evaluated to help with debugging your predicates. Messages are sent at the DEBUG level to the 'rules' logger. The following dictConfig configures a console logger (place this in your project’s settings.py if you’re using rules with Django):
LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'handlers': { 'console': { 'level': 'DEBUG', 'class': 'logging.StreamHandler', }, }, 'loggers': { 'rules': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': True, }, }, }
When this logger is active each individual predicate will have a log message printed when it is evaluated.
Best practices
Before you can test for rules, these rules must be registered with a rule set, and for this to happen the modules containing your rule definitions must be imported.
For complex projects with several predicates and rules, it may not be practical to define all your predicates and rules inside one module. It might be best to split them among any sub-components of your project. In a Django context, these sub-components could be the apps for your project.
On the other hand, because importing predicates from all over the place in order to define rules can lead to circular imports and broken hearts, it’s best to further split predicates and rules in different modules.
rules may optionally be configured to autodiscover rules.py modules in your apps and import them at startup. To have rules do so, just edit your INSTALLED_APPS setting:
INSTALLED_APPS = ( # replace 'rules' with: 'rules.apps.AutodiscoverRulesConfig', )
Note: On Python 2, you must also add the following to the top of your rules.py file, or you’ll get import errors trying to import rules itself:
from __future__ import absolute_import
API Reference
The core APIs are accessible from the root rules module. Django-specific functionality for the Admin and views is available from rules.contrib.
Class rules.Predicate
You create Predicate instances by passing in a callable:
>>> def is_book_author(user, book): ... return book.author == user ... >>> pred = Predicate(is_book_author) >>> pred <Predicate:is_book_author object at 0x10eeaa490>
You may optionally provide a different name for the predicate that is used when inspecting it:
>>> pred = Predicate(is_book_author, name='another_name') >>> pred <Predicate:another_name object at 0x10eeaa490>
Also, you may optionally provide bind=True in order to be able to access the predicate instance with self:
>>> def is_book_author(self, user, book): ... if self.context.args: ... return user == book.author ... return False ... >>> pred = Predicate(is_book_author, bind=True) >>> pred <Predicate:is_book_author object at 0x10eeaa490>
Instance methods
- test(obj=None, target=None)
- Returns the result of calling the passed in callable with zero, one or two positional arguments, depending on how many it accepts.
Class rules.RuleSet
RuleSet extends Python’s built-in dict type. Therefore, you may create and use a rule set any way you’d use a dict.
Instance methods
- add_rule(name, predicate)
- Adds a predicate to the rule set, assigning it to the given rule name. Raises KeyError if another rule with that name already exists.
- set_rule(name, predicate)
- Set the rule with the given name, regardless if one already exists.
- remove_rule(name)
- Remove the rule with the given name. Raises KeyError if a rule with that name does not exist.
- rule_exists(name)
- Returns True if a rule with the given name exists, False otherwise.
- test_rule(name, obj=None, target=None)
- Returns the result of calling predicate.test(obj, target) where predicate is the predicate for the rule with the given name. Returns False if a rule with the given name does not exist.
Decorators
- @predicate
Decorator that creates a predicate out of any callable:
>>> @predicate ... def is_book_author(user, book): ... return book.author == user ... >>> is_book_author <Predicate:is_book_author object at 0x10eeaa490>
Customising the predicate name:
>>> @predicate(name='another_name') ... def is_book_author(user, book): ... return book.author == user ... >>> is_book_author <Predicate:another_name object at 0x10eeaa490>
Binding self:
>>> @predicate(bind=True) ... def is_book_author(self, user, book): ... if 'user_has_special_flag' in self.context: ... return self.context['user_has_special_flag'] ... return book.author == user
Predefined predicates
- always_allow(), always_true()
- Always returns True.
- always_deny(), always_false()
- Always returns False.
- is_authenticated(user)
- Returns the result of calling user.is_authenticated(). Returns False if the given user does not have an is_authenticated method.
- is_superuser(user)
- Returns the result of calling user.is_superuser. Returns False if the given user does not have an is_superuser property.
- is_staff(user)
- Returns the result of calling user.is_staff. Returns False if the given user does not have an is_staff property.
- is_active(user)
- Returns the result of calling user.is_active. Returns False if the given user does not have an is_active property.
- is_group_member(*groups)
- Factory that creates a new predicate that returns True if the given user is a member of all the given groups, False otherwise.
Shortcuts
Managing the permissions rule set
- add_perm(name, predicate)
- Adds a rule to the permissions rule set. See RuleSet.add_rule.
- set_perm(name, predicate)
- Replace a rule from the permissions rule set. See RuleSet.set_rule.
- remove_perm(name)
- Remove a rule from the permissions rule set. See RuleSet.remove_rule.
- perm_exists(name)
- Returns whether a rule exists in the permissions rule set. See RuleSet.rule_exists.
- has_perm(name, user=None, obj=None)
- Tests the rule with the given name. See RuleSet.test_rule.
Licence
django-rules is distributed under the | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/rules/
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Bgfx is a rendering library that supports Direct3D, Metal and OpenGL variants across 11 platforms and counting. It's easy to build and there are many examples available to help submerse you in the details. Understanding the line of separation between bgfx and the example code was relatively easy but took me more time than I would have thought. If you're interested in a quick example on how to use bgfx with your own project, read on.
I assume you have some prior graphics programming experience and that you've already followed the build instructions here. I'll be borrowing from the example-01-cubes project so make sure you build using the --with-examples option if you'd like to follow along.
You'll find the debug and release libraries in a folder named something like winXX_vs20XX located inside the .build directory (making sure you link bgfx*.lib, bimg*.lib and bx*.lib). To test if all is well, call the bgfx::init function.
#include "bgfx/bgfx.h" int main(void) { bgfx::init(); return 0; }
With all this in place, you should be able to initialize the system without error.
We'll need a window to render to. I use GLFW but SDL or anything else will be fine.
#include "bgfx/bgfx.h" #include "GLFW/glfw3.h" #define WNDW_WIDTH 1600 #define WNDW_HEIGHT 900 int main(void) { glfwInit(); GLFWwindow* window = glfwCreateWindow(WNDW_WIDTH, WNDW_HEIGHT, "Hello, bgfx!", NULL, NULL); bgfx::init(); return 0; }
Now we have to make sure bgfx has a handle to the native window. This is done via the bgfx::PlatformData struct and the 'nwh' member. If you're using GLFW, make sure you define GLFW_EXPOSE_NATIVE_WIN32 and include the glfw3native header. Now is also a good time to properly define a bgfx::Init object.
... #define GLFW_EXPOSE_NATIVE_WIN32 #include "GLFW/glfw3native.h" ... bgfx::PlatformData pd; pd.nwh = glfwGetWin32Window(window); bgfx::setPlatformData(pd); bgfx::Init bgfxInit; bgfxInit.type = bgfx::RendererType::Count; // Automatically choose a renderer. bgfxInit.resolution.width = WNDW_WIDTH; bgfxInit.resolution.height = WNDW_HEIGHT; bgfxInit.resolution.reset = BGFX_RESET_VSYNC; bgfx::init(bgfxInit); ...
Let's render something. We'll set the view clear flags and create a simple render loop.
... bgfx::setViewClear(0, BGFX_CLEAR_COLOR | BGFX_CLEAR_DEPTH, 0x443355FF, 1.0f, 0); bgfx::setViewRect(0, 0, 0, WNDW_WIDTH, WNDW_HEIGHT); unsigned int counter = 0; while(true) { bgfx::frame(); counter++; } ...
You should see a window with a purple background. Soak in the awesome.
At this point, we're ready to take on something more interesting. We'll steal a cube mesh from one of the example files.
struct PosColorVertex { float x; float y; float z; uint32_t abgr; }; static PosColorVertex cubeVertices[] = { {-1.0f, 1.0f, 1.0f, 0xff000000 }, { 1.0f, 1.0f, 1.0f, 0xff0000ff }, {-1.0f, -1.0f, 1.0f, 0xff00ff00 }, { 1.0f, -1.0f, 1.0f, 0xff00ffff }, {-1.0f, 1.0f, -1.0f, 0xffff0000 }, { 1.0f, 1.0f, -1.0f, 0xffff00ff }, {-1.0f, -1.0f, -1.0f, 0xffffff00 }, { 1.0f, -1.0f, -1.0f, 0xffffffff }, }; static const uint16_t cubeTriList[] = { 0, 1, 2, 1, 3, 2, 4, 6, 5, 5, 6, 7, 0, 2, 4, 4, 2, 6, 1, 5, 3, 5, 7, 3, 0, 4, 1, 4, 5, 1, 2, 3, 6, 6, 3, 7, };
Now we need to describe the mesh in terms of the vertex declaration, bgfx::VertexDecl.
... bgfx::setViewRect(0, 0, 0, WNDW_WIDTH, WNDW_HEIGHT); bgfx::VertexDecl pcvDecl; pcvDecl.begin() .add(bgfx::Attrib::Position, 3, bgfx::AttribType::Float) .add(bgfx::Attrib::Color0, 4, bgfx::AttribType::Uint8, true) .end(); bgfx::VertexBufferHandle vbh = bgfx::createVertexBuffer(bgfx::makeRef(cubeVertices, sizeof(cubeVertices)), pcvDecl); bgfx::IndexBufferHandle ibh = bgfx::createIndexBuffer(bgfx::makeRef(cubeTriList, sizeof(cubeTriList))); unsigned int counter = 0; ...
We're almost there. We just need to load a bgfx shader which we'll borrow from the example files in the examples/runtime/shaders directory. To do that we need to load the shader file contents inside a bgfx::Memory object before passing that to bgfx::createShader.
bgfx::ShaderHandle loadShader(const char *FILENAME) { const char* shaderPath = "???"; switch(bgfx::getRendererType()) { case bgfx::RendererType::Noop: case bgfx::RendererType::Direct3D9: shaderPath = "shaders/dx9/"; break; case bgfx::RendererType::Direct3D11: case bgfx::RendererType::Direct3D12: shaderPath = "shaders/dx11/"; break; case bgfx::RendererType::Gnm: shaderPath = "shaders/pssl/"; break; case bgfx::RendererType::Metal: shaderPath = "shaders/metal/"; break; case bgfx::RendererType::OpenGL: shaderPath = "shaders/glsl/"; break; case bgfx::RendererType::OpenGLES: shaderPath = "shaders/essl/"; break; case bgfx::RendererType::Vulkan: shaderPath = "shaders/spirv/"; break; } size_t shaderLen = strlen(shaderPath); size_t fileLen = strlen(FILENAME); char *filePath = (char *)malloc(shaderLen + fileLen); memcpy(filePath, shaderPath, shaderLen); memcpy(&filePath[shaderLen], FILENAME, fileLen); FILE *file = fopen(FILENAME, "rb"); fseek(file, 0, SEEK_END); long fileSize = ftell(file); fseek(file, 0, SEEK_SET); const bgfx::Memory *mem = bgfx::alloc(fileSize + 1); fread(mem->data, 1, fileSize, file); mem->data[mem->size - 1] = '\0'; fclose(file); return bgfx::createShader(mem); }
Now we can create a shader program and wrap up rendering our cube. The bx library has matrix helper methods or use your own. Either way, building the projection matrix and setting the view transform should look familiar. Don't forget to set the vertex and index buffers and submit the program we just created before advancing the next frame.
... bgfx::ShaderHandle vsh = loadShader("vs_cubes.bin"); bgfx::ShaderHandle fsh = loadShader("fs_cubes.bin"); bgfx::ProgramHandle program = bgfx::createProgram(vsh, fsh, true); unsigned int counter = 0; while(true) { const bx::Vec3 at = {0.0f, 0.0f, 0.0f}; const bx::Vec3 eye = {0.0f, 0.0f, -5.0f}; float view[16]; bx::mtxLookAt(view, eye, at); float proj[16]; bx::mtxProj(proj, 60.0f, float(WNDW_WIDTH) / float(WNDW_HEIGHT), 0.1f, 100.0f, bgfx::getCaps()->homogeneousDepth); bgfx::setViewTransform(0, view, proj); bgfx::setVertexBuffer(0, vbh); bgfx::setIndexBuffer(ibh); bgfx::submit(0, program); bgfx::frame(); counter++; } ...
Behold! A cube. Let's make it move.
... bgfx::setViewTransform(0, view, proj); float mtx[16]; bx::mtxRotateXY(mtx, counter * 0.01f, counter * 0.01f); bgfx::setTransform(mtx); bgfx::submit(0, program); ...
And we're done!
You can check out the completed example here. Note that I kept error handling and callbacks out to better highlight how bgfx is used. Hopefully this will give you a basic idea of how things work and enable you to layer in more advanced techniques. Be sure to take some time to scan through the example code and API documentation. Good luck and happy rendering!
Спасибо Бранимир Караџић!
Discussion (12)
bgfx::VertexDeclhas been renamed to
bgfx::VertexLayout.
In order to use the
bgfx::setPlatformDatafunction, you also need to include
bgfx/platform.hheader file.
This line:
Should be
this returns null for me.
filePath ends up being "shaders/dx11/vs_cubes.binýýýý"
hard coding this to "shaders/dx11/vs_cubes.bin" results in no change despite the file being present...
Any idea why?
Because he uses memcpy, a null terminator never gets written to the end of the string. You need to allocate with "char* filePath = (char*)calloc(1, shaderLen + fileLen + 1);", then there will be a zero at the end of the string
For platform compatibility, you also need to add
bgfx/include/compat/***to your include paths.
For example, for Visual Studio 2012 and higher, add
bgfx/include/compat/msvcto your include paths.
My program stops at "nULLER7" - dont mind the language its my native lang word
using Tdmgcc 9.2;
Sorry figured it out it was silly
In order to get bgfx to clear the screen, you also need to call
bgfx::touch(0)before
bgfx::frame().
For
bx::Vec3you also need to include
bx/math.h.
For
FILE, and
fopenfunction to compile, you also need to include
cstdio.
Thanks for the great tutorial! I learned a lot about using bgfx!
|
https://dev.to/pperon/hello-bgfx-4dka
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
XML Streaming#
Qt provides two classes for reading and writing XML through a simple streaming API:
QXmlStreamReader and
QXmlStreamWriter .) ... def readXBEL(): def readTitle(item): def readSeparator(item): def readFolder(item): def readBookmark(item): createChildItem = QTreeWidgetItem(QTreeWidgetItem item) xml = QXmlStreamReader() treeWidget = QTreeWidget() ...
The
read() function accepts a
QIODevice and sets it with
setDevice() . The
raiseError() function is used to display a custom error message, inidicating that the file’s version is incorrect.
def read(self, QIODevice device): xml.setDevice(device) if (xml.readNextStartElement()) { if (xml.name() == "xbel" and xml.attributes().value(versionAttribute()) == "1.0") { readXBEL() else: xml.raiseError(QObject.tr("The file is not an XBEL version 1.0 file.")) return not:
def writeFile(self, QIODevice device): xml.setDevice(device) xml.writeStartDocument() xml.writeDTD(QStringLiteral("<not DOCTYPE xbel>")) xml.writeStartElement(QStringLiteral("xbel")) xml.writeAttribute(XbelReader::versionAttribute(), QStringLiteral("1.0")) for i in range(0, treeWidget.topLevelItemCount()): writeItem(treeWidget.topLevelItem(i)) xml.writeEndDocument() return True
|
https://doc-snapshots.qt.io/qtforpython-dev/overviews/xml-streaming.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
On 10/29/2010 10:09 PM, Greg Ewing wrote:
Guido van Rossum wrote::
Yes, but if you want close() to cause the generator to finish normally, you *don't* want that to happen. You would have to surround the yield-from call with a try block to catch the GeneratorExit, and even then you would lose the return value from the inner generator, which you're probably going to want.
Ok, after thinking about this for a while, I think the "yield from" would be too limited if it could only be used for consumers that must run until the end. That rules out a whole lot of pipes, filters and other things that consume-some, emit-some, consume-some_more, and emit-some_more.
I think I figured out something that may be more flexible and insn't too complicated.
The trick is how to tell the "yield from" to stop delegating on a particular exception. (And be explicit about it!)
# Inside a generator or sub-generator. ...
next(<my_gen>) # works in this frame.
yield from <my_gen> except <exception> #Delegate until <exception>
value = next(<my_gen>) # works in this frame again.
...
The explicit "yield from .. except" is easier to understand. It also avoids the close and return issues. It should be easier to implement as well. And it doesn't require any "special" framework in the parent generator or the delegated sub-generator to work.
Here's an example.
# I prefer to use a ValueRequest exception, but someone could use # StopIteration or GeneratorExit, if it's useful for what they # are doing.
class ValueRequest(Exception): pass
# A pretty standard generator that emits # a total when an exception is thrown in. # It doesn't need anything special in it # so it can be delegated.
def gtally(): count = tally = 0 try: while 1: tally += yield count += 1 except ValueRequest: yield count, tally
# An example of delegating until an Exception. # The specified "exception" is not sent to the sub-generator. # I think explicit is better than implicit here.
def gtally_averages(): gt = gtally() next(gt) yield from gt except ValueRequest #Catches exception count, tally = gt.throw(ValueRequest) #Get tally yield tally / count
# This part also already works and has no new stuf in it. # This part isn't aware of any delegating!
def main(): gavg = gtally_averages() next(gavg) for x in range(100): gavg.send(x) print(gavg.throw(ValueRequest))
main()
It may be that a lot of pre-existing generators will already work with this. ;-)
You can still use 'yield from <gen>" to delegate until <gen> ends. You just won't get a value in the same frame <gen> was used in. The parent may get it instead. That may be useful in it self.
Note: you *can't* put the yield from inside a try-except and do the same thing. The exception would go to the sub-generator instead. Which is one of the messy things we are trying to avoid doing.
Cheers, Ron
|
https://mail.python.org/archives/list/python-ideas@python.org/message/V25YDRNLD5QSCVEYOYJ2FZDLZ2C5OWNU/
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
table of contents
NAME¶
shutdown - shut down part of a full-duplex connection
SYNOPSIS¶
#include <sys/socket.h>
int shutdown(int sockfd, int how);
DESCRIPTION¶¶
On success, zero is returned. On error, -1 is returned, and errno is set appropriately.
ERRORS¶
CONFORMING TO¶
POSIX.1-2001, POSIX.1-2008, 4.4BSD (shutdown() first appeared in 4.2BSD).
NOTES¶
The constants SHUT_RD, SHUT_WR, SHUT_RDWR have the value 0, 1, 2, respectively, and are defined in <sys/socket.h> since glibc-2.1.91.
BUGS¶
Checks for the validity of how are done in domain-specific code, and before Linux 3.7 not all domains performed these checks. Most notably, UNIX domain sockets simply ignored invalid values. This problem was fixed for UNIX domain sockets in Linux 3.7.
SEE ALSO¶
close(2), connect(2), socket(2), socket(7)
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at
|
https://dyn.manpages.debian.org/unstable/manpages-dev/shutdown.2.en.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Modernizing the Netflix TV UI Deployment Process
by Ashley Yung
About two-thirds of the total Netflix streaming happens on TV. In a previous post, we talked about how we ship the same, updatable JavaScript User Interface layer across thousands of TV devices. For UI engineers, their goal is to present the newest and greatest UI to our members as soon as they finish building and testing it. In this post, we are going to talk about a recent revamp of the UI delivery process to help us achieve this goal.
Motivation
At Netflix, we are always looking for ways to improve our member experience. We discovered that while we were developing the TV UI at a great velocity, the real bottleneck was in the canary deployment process of the UI. Let’s take a look at our existing ESN-based canary approach.
Our existing ESN-based-canary deployment process prevented us from increasing the frequency of delivering joy to our members.
First of all, what is an ESN?
An ESN, or Netflix Electronic Serial Number, is a globally-unique identifier for each device. Even if you have two of the same TVs at home: same brand, model and year, and they both have the Netflix TV app installed, they do not have the same ESN.
How do ESN-based canaries work?
Our hash algorithm indiscriminately hashes a given ESN to one of the 100 device buckets. If your hashed device ID falls between the buckets 0–24, your device would always receive the canary build with the ESN-based canary approach. Similarly, buckets 25–49 receive the “baseline” build, which is used for comparison against the canary build. The rest of the buckets continue to receive the baseline, or main production build.
Problems with ESN-based canaries
Since each Netflix device has a unique ESN, if a customer turns on their TV when our canary deployment is in progress, they might get served the canary build on one device and the baseline or regular build on another — even if they have the exact same TVs at home and on the same Netflix account! This causes a discrepancy in our users’ Netflix TV experience across different devices.
The discrepancy above only happens when the startup of the app coincides with our 2 hour canary deployment window, twice a week. A much bigger issue, as illustrated in the above diagram, is that since the same TV device (users don’t buy new TVs often!) with the same ESN, and given the same hash algorithm, always gets assigned to the same device bucket. Thus, the same set of devices always gets selected to receive the canary build.
Those users are always the most susceptible to potential issues that we only identify at canary time, which translates into a poor Netflix experience to our members.
Another implication of having the same devices get the canary build each time is that we are guaranteed a skewed representation of devices between the control and the canary. The skewness in our device sampling persisted across canary allocations and impacted the outcomes of the statistical analysis; we were unable to catch device-specific issues with a ESN-based canary approach, which significantly lowered our confidence in the ACA reports that we had set up for our canaries. In fact, the push captains (the operators of the deployment) had ended up resorting to manual monitoring and eyeballing the graphs instead.
Other than the ESN-based approach to canaries, there were other dissatisfactions we had with our deployment process in general. Our deployment workflow consisted of a complex Spinnaker pipeline that relied on dozens of Jenkins jobs with configurations defined outside of source control — which also restricted us from testing these configurations outside of production. A graphical representation of the workflow looks something like this:
The end-to-end deployment workflow interacted with many different services through Jenkins. Our Spinnaker pipeline would start a Jenkins job, which then administered a command to a downstream application that we wanted to interact with (e.g. Jira), and then returned the result of the command to Spinnaker. There was a lot of friction in passing the results from the upstream to the downstream services, and vice versa. This was made apparent in the case of intermittent unavailability in the downstream dependencies, where the redirections made retries difficult.
The complexity of the workflow also meant that the learning curve to the process was steep, especially for developers outside the tooling team. We had to rely on two push captains who knew the ins and outs of the pipeline to perform the deployments. It was a joke that both of them had to coordinate their vacation plans :)
We wanted something more resilient, something defined in code, and something that any engineer can pick up easily. So we decided to re-engineer the delivery process from the ground up.
The Journey
We had several goals in mind when we started looking at an overhaul of the process. The must-haves:
- Reduce friction between Jenkins, Spinnaker, Jira, and our backend metadata service
- Resiliency
- Full automation and config-as-code
- Testability
- A modern canary approach, e.g. A/B-test-based canary. (For further reading, What is an A/B test? and Safe Delivery of Client applications at Netflix)
Murphy (named after the robocop) was the framework that helped bridge our needs.
What is Murphy?
Murphy is an internal framework for automating the delivery of Netflix client applications. Murphy runs as a Titus service, is composable, pluggable and testable. As a client team, we are able to automate all of the tasks that were previously done via Jenkins by writing plugins. Each plugin is a Javascript class that accomplishes a unit of work. Plugins may be specific to a project (e.g. computing metadata to be used in a deployment) or generally useful (e.g. enabling posting messages to a slack channel). Most importantly, Murphy provides libraries which abstract the backend ABlaze (our centralized A/B testing platform) interactions, which makes A/B-test-based canaries possible.
How do we leverage Murphy?
Each project defines a config that lists which plugins are available to its namespace (also sometimes referred to as a config group). At Murphy server runtime, an action server will be created from this config to handle action requests for its namespace. As mentioned in the previous section, each Murphy plugin represents the automation of a unit of task. Each unit of task is represented as an action, which is simply a Murphy client command. Actions run inside isolated Titus containers, which get submitted to the TVUI Action Server. Our deployment pipeline leverages Spinnaker to chain these actions together, which can be configured to automatically perform retries on Titus jobs to minimize any potential infrastructure impact.
Each plugin takes in a request object, and returns a response object. Using the BootstrappingHandler as an example:
Here’s a brief description of some of the main plugins that we built or leveraged for our migration:
Bootstrapping: Fetches all the necessary build metadata for the deployment from our backend build metadata service and outputs a JSON-formatted file which is used as the input for all subsequent stages.
Create AB Test: A common Murphy plugin shared across all client teams. Creates an AB test on ABlaze and returns a JSON-formatted file with the created AB test ID and other metadata. Creates or updates the deployment-related fast properties.
Start AB Test: A common Murphy plugin shared across all client teams. Starts allocating users to the AB test.
Run ACA: A common Murphy plugin shared across all client teams. Kicks off the Automatic Canary Analysis process and generates ACA reports for the push captain to review.
Regional Rollout: Rollout the canary build in an AWS region.
Full Rollout: Rollout the canary build in all AWS regions.
Cleanup Regional Rollout: Roll up all the regionally-scoped Fast Properties into one globally-scoped Fast Property.
Abort Canary: Abort the canary deployment. Clean up any AB test and Fast Properties that is set up as part of the deployment process.
Deployment Workflow Improvements
Our deployment slack channel has become the single source of truth for tracing our deployment process since the adoption of Murphy. Slack notifications are posted by our custom slack bot (you guessed it, it’s called MurphyBot). MurphyBot posts a slack message to our deployment slack channel when the canary deployment begins; the message also contains the link to the Spinnaker deployment pipeline, as well as the link to rollback to the previous build. Throughout the deployment process, it keeps updating the same slack thread with links to the ACA reports and deployment status.
What about A/B-test-based canaries?
A/B-test-based canaries have unlocked our ability to perform an “apples-to-apples” comparison of the baseline and canary builds. Users allocated to test cell 1 receive the baseline build, while users allocated to test cell 2 receive the canary build. Leveraging the power of our ABlaze platform, we are now confident that the population of cell 1 and cell 2 are close to identical in terms of their device representations across cells.
A/B-test-based canaries have been working really well for us. Since the adoption of A/B-test-based canaries in Feb 2021, the improved ACA has already saved us from rolling out a couple of problematic builds to our members which most likely would’ve slipped through the cracks if we had still relied on manually reviewing those ACA reports. Below are a couple of examples (Note: All metric values on the Y-axis in the screenshots below were removed intentionally. The blue line represents the baseline build served in cell 1, while the red line presents the canary build served in cell 2):
First, in March 2021, we were able to detect an increase in Javascript exceptions that would’ve gone unnoticed before had we been eyeballing the results.
Second, in July 2021, we also discovered a 10% background app memory increase in the ACA and was able to make the right decision to halt the rollout of the problematic build.
The new, simplified workflow has made the deployment process much more resilient to failures. We now also have a reliable way of making the “go or no go” decision with our new A/B-test-based canaries. Together, the re-engineered canary deployment process has greatly boosted confidence in our production rollouts.
The Future
In engineering, we always strive to make a good process even better. In the near future, we plan to explore the idea of device cohorts in our ACA reports. However, there will inevitably be new devices that Netflix wants to support and older devices that have such a low volume of traffic that become hard to monitor statistically. We believe that grouping and monitoring devices with similar configurations and operating systems is going to provide better statistical power than monitoring individual devices alone (there are only so many of those “signature” devices that we can keep track of!). An example device cohort, grouped by operating system would be “Android TV devices”, and another example would be “low memory devices” where we would be monitoring devices with memory constraints.
|
https://netflixtechblog.medium.com/modernizing-the-netflix-tv-ui-deployment-process-28e022edaaef?source=user_profile---------4----------------------------
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
WSGI middleware for recording requests/responses.
Project description
WSGI middleware for conditionally recording request/response information.
Install it:
$ pip install wsgim-record ...
and use it:
import wsgim_record class RecordMiddleware(wsgim_record.RecordMiddleware) # tell what to record def record_input(self, environ) return True def record_errors(self, environ) return False def record_output(self, environ, status, headers, exc_info=None) return True # what was recorded def recorded(self, environ, input, errors, status, headers, output): ... wrapped = RecordMiddleware(app)
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
wsgim-record-0.1.1.tar.gz (3.7 kB view hashes)
|
https://pypi.org/project/wsgim-record/
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
With.
We don’t have to write code over and over again. It also allows easy code modification and readability by simply adding or removing code chunks. Only when we call or invoke the method is it executed. The main() method is the most significant method in Java.
Assume you need to make a program to draw a circle and color it. To overcome this difficulty, you can devise two approaches:
- a method for drawing a circle
- a method for coloring the circle
Values or arguments can be inserted inside methods, and they will only be executed when the method is called. Functions are another name for them. The following are the most common usage of methods in Java:
- It allows for code reuse (define once and use multiple times)
- An extensive program can be broken down into smaller code parts.
- It improves the readability of code.
Methods in Java
By breaking down a complex problem into smaller bits, you can create an easier to comprehend and reuse program. There are two sorts of methods in Java: static and dynamic.
User-defined Methods: We can develop our method based on our needs.
Standard Library Methods: These are Java’s built-in methods that can be used.
Declaration of the Method
Method properties such as visibility, return type, name, and parameters are all stated in the method declaration. As seen in the following diagram, it consists of six components known as method headers.
(Access Specifier) (Return Type) (Method Name) (Parameter List) --> Method Header { // Method Body }
For example:
public int sumValues(int x, int y){ // method body }
Where sumValues(int x, int y) is the Method signature
Method Signature: A method signature is a string that identifies a method. It’s included in the method declaration. It contains the method name as well as a list of parameters.
Access Specifier: The method’s access specifier, also known as a modifier, determines the method’s access type. It specifies the method’s visibility. There are four different types of access specifiers in Java:
- Public: When we utilize the public specifier in our application, all classes can access the method.
- Private: The method is only accessible in the classes declared when using a private access specifier.
- Protected: The method is accessible within the same package or subclasses in a different package when using the protected access specifier.
- Default: When no access specifier is specified in the method declaration, Java uses the default access specifier. It can only be seen from the same package.
Return Type: The return type of a method is the data type it returns. For example, it could be a primitive data type, an object, a collection, or avoid. The void keyword is used when a method does not return anything.
Method Name: The name of a method is defined by its method name, which is a unique name.
It must be appropriate for the method’s functionality. If we’re making a method for subtracting two numbers, the method’s name must be subtraction(). The name of a method is used to call it.
Parameter List: The parameter list is a collection of parameters separated by a comma and wrapped in parentheses. It specifies the data type as well as the name of the variable. Leave the parenthesis blank if the method has no parameters.
Method Body: The method declaration includes a section called the method body. It contains all of the actions that must be completed. Further, it is protected by a pair of curly braces.
Choosing a Method Name
When naming a method, keep in mind that it must be a verb and begin with a lowercase letter. If there are more than two words in the method name, the first must be a verb, followed by an adjective or noun. Except for the first word, the initial letter of each word in the multi-word method name must be in uppercase. Consider the following scenario:
- sum(), area() are two single-word methods
- areaOfCircle(), stringComparision() are two multi-word methods
It’s also conceivable for a method to have the same name as another method in the same class; this is called method overloading.
User-defined methods
Let’s start by looking at user-defined methods. To declare a method, use the following syntax:
returnType methodName() { // method body }
As an example,
int sumValues() { // code }
The method above is named sumValues(), whose return type is an int. The syntax for declaring a method is as follows. The complete syntax for declaring a method, on the other hand, is
modifier static returnType nameOfMethod (parameter1, parameter2, ...) { // method body }
Here,
modifier – It specifies the method’s access kinds, such as public, private, etc. Visit Java Access Specifier for further information.
static -It can be accessed without creating objects if we use the static keyword.
The sqrt() method in the standard Math class, for example, is static. As a result, we may call Math.sqrt() without first establishing a Math class instance. The values parameter1/parameter2 are supplied to a method. A method can take any number of arguments.
Method call in Java
We’ve declared a method called sumValues() in the previous example. To use the method, we must first call it. The sumValues() method can be called in the following way.
// calls the method sumValues(); Example: Using Methods in Java class Codeunderscored { // create a method public int sumValues(int num_1, int num_2) { int sumVal = num_1 + num_2; // return the results return sumVal; } public static void main(String[] args) { int num1 = 67; int num2 = 33; // create an object of Codeunderscored Codeunderscored code = new Codeunderscored(); // calling method int resultVal = code.sumValues (num1, num2); System.out.println("The resultant sum value is: " + resultVal); } }
We defined a method called sumValues() in the previous example. The num_1 and num_2 parameters are used in the method. Take note of the line,
int resultVal = code.sumValues (num1, num2);
The procedure was invoked by giving two arguments, num_1 and num_2. We’ve placed the value in the result variable because the method returns a value. It’s worth noting that the method isn’t static. As a result, we’re utilizing the class’s object to invoke the method.
The keyword void
We can use the void keyword to create methods that don’t return a value. In the following example, we’ll look at a void method called demoVoid. It is a void method, which means it returns nothing. A statement must be used to call a void method, such as demoVoid(98);. As illustrated in the following example, it is a Java statement that concludes with a semicolon.
public class Codeunderscored { public static void main(String[] args) { demoVoid(98); } public static void demoVoid(double points) { if (points >= 100) { System.out.println("Grade:A"); }else if (points >= 80) { System.out.println("Grade:B"); }else { System.out.println("Grade:C"); } } }
Using Values to Pass Parameters
Arguments must be passed while working on the calling procedure. These should be listed in the method specification in the same order as their corresponding parameters. Generally, parameters can be given in two ways: a value or a reference.
Calling a method with a parameter is known as passing parameters by value. The argument value is provided to the parameter this way. The program below demonstrates how to pass a parameter by value. Even after using the procedure, the arguments’ values stay unchanged.
public class Codeunderscored { public static void main(String[] args) { int x = 20; int y = 62; System.out.println("Items initial order, x = " + x + " and y = " + y); // Invoking the swap method swapValues(x, y); System.out.println("\n**Order if items, before and after swapping values **:"); System.out.println("Items after swapping, x = " + x + " and y is " + y); } public static void swapValues(int a, int b) { System.out.println("Items prior to swapping(Inside), x = " + x + " y = " + y); // Swap n1 with n2 int temp = x; x = y; y = temp; System.out.println("Items post swapping(Inside), x = " + x + " y = " + y); } }
Overloading of Methods
Method overloading occurs when a class contains two or more methods with the same name but distinct parameters. It’s not the same as overriding. When a method is overridden, it has the same name, type, number of parameters, etc.
Consider the example of finding the smallest integer numbers. Let’s say we’re looking for the smallest number of double types. Then, to build two or more methods with the same name but different parameters, the notion of overloading will be introduced.
The following example clarifies the situation:
public class Codeunderscored { public static void main(String[] args) { int x = 23; int y = 38; double numOne = 17.3; double numTwo = 29.4; int resultOne = smallestValue(x, y); // invoking function name with different parameters double resultTwo = smallestValue(numOne, numTwo); System.out.println("The Minimum number is: = " + resultOne); System.out.println("The Minimum number is: = " + resultTwo); } // for integer public static int smallestValue(int numOne, int numTwo) { int smallestVal; if ( numOne > numTwo) smallestVal = numTwo; else smallestVal = numOne; return smallestVal; } // for double public static double smallestValue(double numOne, double numTwo) { double smallestVal; if ( numOne > numTwo) smallestVal = numTwo; else smallestVal = numOne; return smallestVal; } }
Overloading methods improve the readability of a program. Two methods with the same name but different parameters are presented here. The result is the lowest number from the integer and double kinds.
Using Arguments on the Command Line
When you execute a program, you may want to feed some information into it. It is performed by invoking main() with command-line arguments.
When a program is run, a command-line argument is information that appears after the program’s name on the command line. It’s simple to retrieve command-line parameters from within a Java program. They’re saved in the String array supplied to main() as strings. The following program displays all of the command-line arguments that it is invoked.
public class Codeunderscored { public static void main(String args[]) { for(int i = 0; i<args.length; i++) { System.out.println("args[" + i + "]: " + args[i]); } } }
‘This’ keyword
A Java keyword is used to reference the current class’s object in an instance method or constructor. You can use this to refer to class members like constructors, variables, and methods. It’s worth noting that the keyword this is only used within instance methods and constructors.
In general, the term this refers to:
- Within a constructor or a method, distinguish instance variables from local variables if their names are the same.
class Employee { int age; Employee(int age) { this.age = age; } }
- In a class, call one sort of constructor (parametrized constructor or default constructor) from another. Explicit constructor invocation is what it’s called.
class Employee { int age Employee() { this(20); } Employee(int age) { this.age = age; } }
This keyword is used to access the class members in the following example. Copy and paste the program below into a file called thisKeyword.java.
public class Codeunderscored { // Instance variable num int num = 10; Codeunderscored() { System.out.println("This is a program that uses the keyword this as an example. "); } Codeunderscored(int num) { // Using the default constructor as a starting point this(); // num is assigned to the instance variable num by assigning the local variable num to the instance variable num. this.num = num; } public void greet() { System.out.println("Hello and welcome to Codeunderscored.com. "); } public void print() { // declaration of the num Local variable int num = 20; // The local variable is printed. System.out.println("num is the value of a local variable. : "+num); // The instance variable is printed. System.out.println("num is the value of the instance variable. : "+this.num); // Invoking a class's greet method this.greet(); } public static void main(String[] args) { // Creating an instance of the class Codeunderscored code = new Codeunderscored(); // The print technique is used to print a document. code.print(); // Through a parameterized constructor, a new value is passed to the num variable. Codeunderscored codeU = new Codeunderscored(30); // Using the print technique once more codeU.print(); } }
Arguments with Variables (var-args)
You can give a variable number of parameters of the same type to a method in JDK 1.5. The method’s parameter is declared as follows:
typeName... parameterName
You specify the type followed by an ellipsis in the method definition (…). In a method, just one variable-length parameter can be supplied, and it must be the last parameter. Any regular parameters must precede it.
public class VarargsCode { public static void main(String args[]) { // Calling of a method with variable args showMax(54, 23, 23, 22, 76.5); showMax(new double[]{21, 22, 23}); } public static void showMax( double... numbers) { if (numbers.length == 0) { System.out.println("No argument passed"); return; } double result = numbers[0]; for (int i = 1; i < numbers.length; i++) if (numbers[i] > result) result = numbers[i]; System.out.println("The max value is " + result); } }
Return Type of a Java Method
The function call may or may not get a value from a Java method. The return statement is used to return any value. As an example,
int sumValues() { ... return sumVal; }
The variable sumVal is returned in this case. Because the function’s return type is int, the type of the sumVal variable should be int. Otherwise, an error will be generated.
// Example : Return Type of a Method class Codeunderscored { // creation of a static method public static int squareValues(int numVal) { // return statement return numVal * numVal; } public static void main(String[] args) { int result; // call the method // store returned value to result resultVal = squareValues(13); System.out.println("The Squared value of 13 is: " + resultVal); } }
In the preceding program, we constructed a squareValues() method. The method accepts an integer as an input and returns the number’s square. The method’s return type has been specified as int here.
As a result, the method should always return a positive number. Note that we use the void keyword as the method’s return type if the method returns no value.
As an example,
public void squareValues(int i) { int resultVal = i * i; System.out.println("The Square of the given number is: " + resultVal); }
Java method parameters
A method parameter is a value that the method accepts. A method, as previously stated, can have any number of parameters. As an example,
// method with two parameters int sumValues(int x, int y) { // code } // method with no parameter int sumValues(){ // code }
When calling a parameter method, we must provide the values for those parameters. As an example,
// call to a method with two parameters sumValues(29, 21); // call to a method with no parameters sumValues()
Example : Method Parameters
class Codeunderscored { // method with no parameter public void methodWithNoParameters() { System.out.println("Method without parameter"); } // method with single parameter public void methodWithParameters(int a) { System.out.println("Method with a single parameter: " + a); } public static void main(String[] args) { // create an object of Codeunderscored Codeunderscored code = new Codeunderscored(); // call to a method with no parameter code.methodWithNoParameters (); // call to a method with the single parameter code.methodWithParameters (21); } }
The method’s parameter is int in this case. As a result, the compiler will throw an error if we pass any other data type than int. Because Java is a tightly typed language, this is the case. The actual parameter is the 32nd argument supplied to the methodWithParameters() method during the method call.
A formal argument is the parameter num that the method specification accepts. The kind of formal arguments must be specified. Furthermore, the types of actual and formal arguments should always be the same.
Static Method
A static method has the static keyword. In other terms, a static method is a method that belongs to a class rather than an instance of that class. We can also construct a static method by prefixing the method name with the term static.
The fundamental benefit of a static method is that it can be called without requiring the creation of an object. It can change the value of static data members and access them. It is employed in the creation of an instance method. The class name is used to call it. The main() function is the best example of a static method.
public class Codeunderscored { public static void main(String[] args) { displayStatically(); } static void displayStatically() { System.out.println("Codeunderscored example of static method."); } }
Instance Method in Java
A class method is referred to as an instance method. It is a class-defined non-static method. It is essential to construct an object of the class before calling or invoking the instance method. Let’s look at an instance method in action.
public class CodeunderscoredInstanceMethod { public static void main(String [] args) { //Creating an object of the class CodeunderscoredInstanceMethod code = new CodeunderscoredInstanceMethod(); //invoking instance method System.out.println("The numbers' sum is: "+code .sumValues(39, 51)); } int s; //user-defined method because we have not used static keyword public int sumValues(int x, int y) { resultVal = x+y; //returning the sum return resultVal; } }
Instance methods are divided into two categories:
- Mutator Method
- Accessor Method
Accessor Method
The accessor method is the method(s) that reads the instance variable(s). Because the method is prefixed with the term obtain, we can recognize it. Getters is another name for it. It returns the private field’s value. It’s used to get the private field’s value.
public int getAge() { return age; }
Mutator Method
The method(s) read and modify the instance variable(s) values. Because the method preceded the term set, we can recognize it. Setters or modifiers are other names for it. Even though it doesn’t give you anything, it accepts a field-dependent parameter of the same data type. It’s used to set the private field’s value.
public void setAge(int age) { this.age = age; }
Example: Instance methods – Accessor & Mutator
public class Employee { private int empID; private String name; public int getEmpID() //accessor method { return empID; } public void setEmpID(int empID) //mutator method { this.empID = empID; } public String getName() { return name; } public void setName(String name) { this.name = name; } public void display() { System.out.println(" Your Employee NO is.: "+empID); System.out.println("Employee name: "+name); } }
Methods for a Standard Library
The standard library methods are Java built-in methods that can be used immediately. These standard libraries are included in a Java archive (*.jar) file with JVM and JRE and the Java Class Library (JCL).
Examples include,
- print() is a java.io method.
- In PrintSteam, the print(“…”) method displays a string enclosed in quotation marks.
- sqrt() is a Math class method. It returns a number’s square root.
Here’s an example that works:
// Example: Method from the Java Standard Library public class Codeunderscored { public static void main(String[] args) { // the sqrt() method in action System.out.print("The Square root of 9 is: " + Math.sqrt(9)); } }
Abstract Method
An abstract method does not have a method body. In other terms, an abstract method does not have an implementation. It declares itself in the abstract class at all times. If a class has an abstract method, it must be abstract itself. The keyword abstract is used to define an abstract procedure.
The syntax is as follows:
abstract void method_name();
abstract class CodeTest //abstract class { //abstract method declaration abstract void display(); } public class MyCode extends CodeTest { //method impelmentation void display() { System.out.println("Abstract method?"); } public static void main(String args[]) { //creating object of abstract class CodeTest code = new MyCode(); //invoking abstract method code.display(); } }
Factory method
It’s a method that returns an object to the class where it was created. Factory methods are all static methods. A case sample is as follows:
NumberFormat obj = NumberFormat.getNumberInstance().
The finalize( ) Method
It is possible to define a method that will be called immediately before the garbage collector destroys an object. This function is called finalize(), ensuring that an object is terminated correctly. Finalize(), for example, can be used to ensure that an open file held by that object is closed.
Simply define the finalize() method to add a finalizer to a class. When the Java runtime recycles an object of that class, it calls that method. In the finalize () method, you’ll specify the actions that must be completed before an object is destroyed in the finalize() method.
This is the general form of the finalize() method:
protected void finalize( ) { // finalization code here }
The keyword protected is a specifier that prevents code declared outside the class from accessing finalize(). It implies you have no way of knowing when or if finalize() will be called. For example, if your application stops before garbage collection, finalize() will not be called.
What are the benefits of employing methods?
The most significant benefit is that the code may be reused. A method can be written once and then used several times. We don’t have to recreate the code from scratch every time. Think of it this way: “write once, reuse many times.”
Example 5: Java Method for Code Reusability
public class Codeunderscored { // definition of the method private static int calculateSquare(int x){ return x * x; } public static void main(String[] args) { for (int i = 5; i <= 10; i++) { //calling the method int resultVal = calculateSquare(i); System.out.println("The Square of " + i + " is: " + resultVal); } } }
We developed the calculateSquare() method in the previous program to calculate the square of a number. The approach is used to find the square of numbers between five and 10 in this case. As a result, the same procedure is employed repeatedly.
- Methods make the code more readable and debuggable.
The code to compute the square in a block is kept in the calculateSquare() method. As a result, it’s easier to read.
Example: Calling a Method several times
public class Codeunderscored { static void showCode() { System.out.println("I am excited about CodeUnderscored!"); } public static void main(String[] args) { showCode(); showCode(); showCode(); showCode(); } } // I am excited about CodeUnderscored! // I am excited about CodeUnderscored! // I am excited about CodeUnderscored! // I am excited about CodeUnderscored!
Example: User-Defined Method
import java.util.Scanner; public class Codeunderscored { public static void main (String args[]) { //creating Scanner class object Scanner scan=new Scanner(System.in); System.out.print("Enter the number: "); //reading value from user int num=scan.nextInt(); //method calling findEvenOdd(num); } //user defined method public static void findEvenOdd(int num) { //method body if(num%2==0) System.out.println(num+" is even"); else System.out.println(num+" is odd"); } }
Conclusion
In general, a method is a manner of accomplishing a goal. In Java, a method is a collection of instructions that accomplishes a specified goal. It ensures that code can be reused. In addition, Methods can also be used to alter code quickly.
A method is a section of code that only executes when invoked. It has Parameters which are data that can be passed into a method. Methods, often known as functions, carry out specific tasks. Further, some of the benefits of using methods include code reuse, creating it once, and using it multiple times.
Within a class, a method must be declared. It is defined by the method’s name, preceded by parenthesis(). Although Java has several pre-defined ways, such as System.out.println(), you can also write your own to handle specific tasks.
|
https://www.codeunderscored.com/methods-in-java-with-examples/
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Description
A motor that enforces the rotation angle r(t) between two frames on two bodies, using a rheonomic constraint.
The r(t) angle of frame A rotating on Z axis of frame B, is imposed via an exact function of time f(t), and an optional angle offset: r(t) = f(t) + offset (e.g., very good and reactive controllers). By default it is initialized with linear ramp: df/dt= 1. Use SetAngleFunction() to change to other motion functions.
#include <ChLinkMotorRotationAngle.h>
Member Function Documentation
◆ ArchiveIN()
Method to allow deserialization of transient data from archives.
Method to allow de serialization of transient data from archives.
Reimplemented from chrono::ChLinkMotorRotation.
◆ IntLoadConstraint_Ct()
Takes the term Ct, scale and adds to Qc at given offset: Qc += c*Ct.
- Parameters
-
Reimplemented from chrono::ChLinkMateGeneric.
◆ SetAngleFunction()
Set the rotation angle function of time a(t).
This function should be C0 continuous and, to prevent acceleration spikes, it should ideally be C1 continuous.
◆ SetAngleOffset()
Get initial angle offset for f(t)=0, in [rad].
Rotation on Z of the two axes will be r(t) = f(t) + offset. By default, offset = 0
The documentation for this class was generated from the following files:
- /builds/uwsbel/chrono/src/chrono/physics/ChLinkMotorRotationAngle.h
- /builds/uwsbel/chrono/src/chrono/physics/ChLinkMotorRotationAngle.cpp
|
https://api.projectchrono.org/development/classchrono_1_1_ch_link_motor_rotation_angle.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Hi,
I got a splunk search that monitors, how many different hosts there were in the chosen timespan.
| stats dc(host) as hostcount
Now I would like to generate a pie chart, that compares succesful hosts with the unseccesful ones. Therefore I got a field "errors". All hosts with error > 50 should be counted as unseccesful. The others should be counted as succesful. The pie chart should show the succesful/unsuccesful ratio.
| makeresults | eval _raw="host errors
abc
def 50
ghi 51
abc 2
def 50
ghi 51" | multikv forceheader=1 | fields - _raw _time linecount
| eval unsuccessful = if(errors > 50, "unsuccessful", null)
| stats values(unsuccessful) as unsuccessful by host
| eval status=if(unsuccessful = "unsuccessful","unsuccessful", "successful")
| stats count by status
View solution in original post
| stats sum(errors) as errortotal by host
| eval status=if(errortotal > 50,"unsuccessful", "successful")
| stats count by status
Thank you, that already helps. However I did a mistake while explaining my situation. I dont want the sum of errors to be > 50. If there was one event with error >50 within the timespan, the host should be classified unsuccesful. The sum of errors per host is not important for me. Only if there was one single event with error >50. Do you understand what I mean?
|
https://community.splunk.com:443/t5/Splunk-Search/pie-chart/m-p/541927
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
1,2
Suggested by the Yellowstone permutation A098550 except that now the key conditions in the definition have been reversed.
Let Ker(k), the kernel of k, denote the set of primes dividing k. Thus Ker(36} = {2,3}, Ker(1) = {}. Then Product_{p in Ker(k)} p = A000265(k), which is denoted by ker(k).
Theorem 1: For n>2, a(n) is the smallest number m not yet in the sequence such that
(i) Ker(m) intersect Ker(a(n-1)) is nonempty,
(ii) Ker(m) intersect Ker(a(n-2)) is empty, and
(iii) The set Ker(m) \ Ker(a(n-1)) is nonempty.
(Without condition (iii), every prime dividing m might also divide a(n-1), which would make it impossible to find a(n+1).)
Idea of proof: m always exists and is unique; no smaller choice for a(n) is possible; and taking a(n)=m does not lead to a contradiction. So a(n) must be m.
Theorem 2: For n>2, Ker(a(n)) contains at least two primes. (Immediate from Theorem, since a(n) must contain a prime in a(n-1) and a prime not in a(n-1)).)
It follows that no odd prime p or even-or-odd prime power q^k, k>1, appears in the sequence. Obviously this sequence is not a permutation of the positive integers.
Theorem 3. For any M there is an n_0 such that n > n_0 implies a(n) > M. (This is a standard property of any sequence of distinct positive terms - see the Yellowstone paper).
Theorem 4. For any prime p, some term is divisible by p.
Proof. Take p=17 for concreteness. If 17 does not divide any term, then 19 cannot either (because the first time 19 appears, we could have used 17 instead).
So all terms are products only of 2,3,5,7,11,13. Go out a long way, use Theorem 2, and consider two huge successive terms, A*B, C*D, where Ker(B) = Ker(C) and Ker(A) intersect Ker(D) is empty. Either C or D must contain a huge prime power q^k, 2 <= q <= 13. If it is in C, replace it by q and multiply D by 17. If it is in D, replace it by 17. Either way we get a smaller legal candidate for C*D that is a multiple of 17. QED
Theorem 5. There are infinitely many even terms.
Proof. Suppose the prime p appears for the first times as a factor of a(n). Then we have a(n-1) = x*q^i, a(n) = q*p, where q<p is a prime and i >= 1. If q=2 then a(n) is even. So we may suppose q is odd. If x is odd then a(n+1) = 2*p. If x is even then obviously a(n-1) is even. So one of a(n-1), a(n), or a(n+1) is even for every prime p. So there are infinitely many even terms. QED - N. J. A. Sloane, Aug 28 2020
Theorem 6: For any prime p, infinitely many terms are divisible by p. - N. J. A. Sloane, Sep 09 2020. (I thought I had a proof that for any odd prime p, there is a term equal to 2p, but there was a gap in the argument. - N. J. A. Sloane, Sep 23 2020)
Theorem 7: There are infinitely many odd terms. - N. J. A. Sloane, Sep 12 2020
Conjecture 1: Every number with at least two distinct prime factors is in the sequence. In other words, apart from 1 and 2, this sequence is the complement of A000961.
[It seems very likely that the arguments used to prove Theorem 1 of the Yellowstone Permutation paper can be modified to prove the conjecture.]
The conditions permit us to start with a(1)=1, a(2)=2, and that does not lead to a contradiction, so those are the first two terms.
After 1, 2, the next term cannot be 4 or 5, but a(3) = 6 works.
For a(4), we can rule out 3, 4, 5, 7, 8, 9 11, 13 (powers of primes), and 10, 12, and 14 have a common factor with a(2). So a(4) = 15.
The graph of the first 100000 terms (see link) is similar to that of the Yellowstone permutation, but here the points lie on more lines.
The sequence has fixed points at n = 1, 2, 10, 90, 106, 150, 162, 246, 394, 398, 406, 410, ... (see A338050). - Scott R. Shannon, Aug 13 2020
The initial pattern of odd and even terms: (odd, even, even, odd), repeat, is misleading as it does not persist. (See A337644 for more about this point.)
Discussion of when primes first divide some term, from N. J. A. Sloane, Oct 21 2020: (Start)
When an odd prime p first divides a term of the Enots Wolley sequence (the present sequence), that term a(n) is equal to q*p where q<p is also a prime. We say that p is introduced by q. It appears q is almost always 2 (the corresponding values of p form A337648), that there are precisely 34 instances when q = 3 (see A337649), and q>3 happens just once, at a(5) = 35 when q=5 and p=7.
We conjecture that even if p is introduced by some prime q>2, 2*p appears later.
Sequence A337275 lists the index k such that a(k) = 2*prime(n), or -1 if 2*prime(n) is missing, and A338074 lists the indices k such that a(k) is twice a prime.
Comparison of those two sequences shows that they appear to be essentially identical (see the table in A337275).
The differences between the two sequences are caused by the fact that although normally if p and q are odd primes with p < q, then 2p precedes 2q, this is not true for the following primes: (7,5), (31,29), and (109, 113, 107), which appear in the order shown. We conjecture that these are the only exceptions.
Combining the above observations, we conjecture that for n >= 755 (at which point we have seen all the primes <= 367), every prime p is introduced by 2*p, and the terms 2*p appear in their natural order.
(End)
Scott R. Shannon, Table of n, a(n) for n = 1..20000.
David L. Applegate, Hans Havermann, Bob Selcoe, Vladimir Shevelev, N. J. A. Sloane, and Reinhard Zumkeller, The Yellowstone Permutation, arXiv preprint arXiv:1501.01669 [math.NT], 2015. Also Journal of Integer Sequences, Vol. 18 (2015), Article 15.6.7
Scott R. Shannon, The first million terms (7-Zip compressed file)
Scott R. Shannon, Image of the first 100000 terms. The green line is y=x.
Scott R. Shannon, Image of the first 1000000 terms. The green line is y=x.
Scott R. Shannon, Graph of 11.33 million terms, based on F. Stevenson's data, plotted with colors indicating the least prime factor (lpf). Terms with a lpf of 2 are shown in white, terms with a lpf of 3,5,7,11,13,17,19 are shown as one of the seven rainbow colors from red to violet, and terms with a lpf >= 23 are shown in grey.
Scott R. Shannon, Graph of the terms with lpf = 2. This, and the similar graphs below, are using F. Stevenson's data of 11.33 million terms. The y-axis scale is the same as the above multi-colored image. The green line is y = x.
Scott R. Shannon, Graph of the terms with lpf = 3.
Scott R. Shannon, Graph of the terms with lpf = 5.
Scott R. Shannon, Graph of the terms with lpf = 7.
Scott R. Shannon, Graph of the terms with lpf = 11.
Scott R. Shannon, Graph of the terms with lpf = 13.
Scott R. Shannon, Graph of the terms with lpf = 17.
Scott R. Shannon, Graph of the terms with lpf = 19.
Scott R. Shannon, Graph of the terms with lpf >= 23.
N. J. A. Sloane, Table of n, a(n) for n = 1..161734
N. J. A. Sloane, Graph of 11.33 million terms, based on F. Stevenson's table. The red line is y=x. It is hard to believe, but there are as many points above the red line as there are below it (see the next graph). Out of 11333576 points, 46% (5280697), all even, lie below the red line. All the odd points lie above the red line.
N. J. A. Sloane, Blowup of last 1.133 million points of the previous graph. There are a very large number of points in a narrow band below the red line.
N. J. A. Sloane, Conant's Gasket, Recamán Variations, the Enots Wolley Sequence, and Stained Glass Windows, Experimental Math Seminar, Rutgers University, Sep 10 2020 (video of Zoom talk).
Frank Stevenson, First five million terms (zipped file, starting with a(4)=15)
Frank Stevenson, First 11333573 terms (zipped file, starting with a(4)=15)
with(numtheory);
N:= 10^4: # to get a(1) to a(n) where a(n+1) is the first term > N
B:= Vector(N, datatype=integer[4]):
for n from 1 to 2 do A[n]:= n: od:
for n from 3 do
for k from 3 to N do
if B[k] = 0 and igcd(k, A[n-1]) > 1 and igcd(k, A[n-2]) = 1 then
if nops(factorset(k) minus factorset(A[n-1])) > 0 then
A[n]:= k;
B[k]:= 1;
break;
fi;
fi
od:
if k > N then break; fi;
od:
s1:=[seq(A[i], i=1..n-1)]; # N. J. A. Sloane, Sep 24 2020, based on Theorem 1 and Robert Israel's program for sequence A098550
M = 1000;
A[1] = 1; A[2] = 2;
Clear[B]; B[_] = 0;
For[n = 3, True, n++,
For[k = 3, k <= M, k++,
If[B[k] == 0 && GCD[k, A[n-1]] > 1 && GCD[k, A[n-2]] == 1, If[Length[ FactorInteger[k][[All, 1]] ~Complement~ FactorInteger[A[n-1]][[All, 1]]] > 0, A[n] = k; B[k] = 1; Break[]]]]; If[k > M, Break[]]];
Array[A, n-1] (* Jean-François Alcover, Oct 20 2020, after Maple *)
(Python)
from math import gcd
from sympy import factorint
from itertools import count, islice
def agen(): # generator of terms
a, seen, minan = [1, 2], {1, 2}, 3
yield from a
for n in count(3):
an, fset = minan, set(factorint(a[-1]))
while True:
if an not in seen and gcd(an, a[-1])>1 and gcd(an, a[-2])==1:
if set(factorint(an)) - fset > set():
break
an += 1
a.append(an); seen.add(an); yield an
while minan in seen: minan += 1
print(list(islice(agen(), 70))) # Michael S. Branicky, Jan 22 2022
Cf. A000961, A098550, A098548, A064413, A255582, A020639, A006530, A337648, A337649, A338050 (fixed points), A338051 (a(n)-n).
A337007 and A337008 describe the overlap between successive terms.
See A337066 for when n appears, A337275 for when 2p appears, A337276 for when 2k appears, A337280 for when p first divides a term, A337644 for runs of three odd terms, A337645 & A338052 for smallest missing legal number, A337646 & A337647 for record high points, A338056 & A338057 for record high values for a(n)/n.
See A338053 & A338054 for the "early" terms.
Further properties of the present sequence are studied in A338062-A338071.
A338059 has the missing prime powers inserted (see also A338060, A338061).
See A338055, A338351 for variants.
A280864 is a different but very similar lexicographically earliest sequence.
Sequence in context: A221719 A095380 A287012 * A338055 A336799 A340779
Adjacent sequences: A336954 A336955 A336956 * A336958 A336959 A336960
nonn
Scott R. Shannon and N. J. A. Sloane, Aug 09 2020
Added "infinite" to definition. - N. J. A. Sloane, Sep 03 2020
Added Scott R. Shannon's name "Enots Wolley" (Yellowstone backwards) for this sequence to the definition, since that has been mentioned in several talks. - N. J. A. Sloane, Oct 11 2020
approved
|
https://oeis.org/A336957
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
{
"action": "new_repo",
"branch": "master",
"bug_id": 1759883,
"description": "",
"exception": false,
"monitor": "monitoring",
"namespace": "rpms",
"repo": "dolfin",
"summary": "FEniCS computational backend and problem solving environment",
"upstreamurl": ""
}
The Pagure project already exists
Metadata Update from @limb:
- Issue close_status updated to: Invalid
- Issue status updated to: Closed (was: Open)
Hmm, it seems the repo was created, but I'm not the owner.
says:
Created 15 minutes ago
Maintained by limb
Created 15 minutes ago
Maintained by limb
Metadata Update from @zbyszek:
- Issue status updated to: Open (was: Closed)
to comment on this ticket.
|
https://pagure.io/releng/fedora-scm-requests/issue/18680
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
59597/operands-could-not-be-broadcast-with-shapes-19-0-knn
I am working on how to use KNN to predict a rating for a movie. I use a video and a book to teach myself how to go about it
I tried to run the code I found in the book but it gave me error message. I googled the error message so as to understand it and fix my problem but I don't think I know how to adapt the solutions to my problem. The code is given below:
import numpy as np
import pandas as pd
r_cols = ['user_id', 'movie_id', 'rating']
ratings = pd.read_csv('C:/Users/dell/Downloads/DataScience/DataScience-Python3/ml-100k/u.data', sep='\t', engine='python', names=r_cols, usecols=range(3)) # please enter your file path here. The file is u.data
print(ratings.head())
movieProperties = ratings.groupby('movie_id').agg({'rating': [np.size, np.mean]})
print(movieProperties.head())
movieNumRatings = pd.DataFrame(movieProperties['rating']['size'])
movieNormalizedNumRatings = movieNumRatings.apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x)))
print(movieNormalizedNumRatings.head())
movieDict = {}
with open('C:/Users/dell/Downloads/DataScience/DataScience-Python3/ml-100k/u.item') as f: # The file is u.item
temp = ''
for line in f:
fields = line.rstrip('\n').split('|')
movieID = int(fields[0])
name = fields[1]
genres = fields[5:25]
genres = map(int, genres)
movieDict[movieID] = (name, genres, movieNormalizedNumRatings.loc[movieID].get('size'), movieProperties.loc[movieID].rating.get('mean'))
print(movieDict[1])
from scipy import spatial
def ComputeDistance(a, b):
genresA = np.array(list(a[1]))
genresB = np.array(list(b[1]))
genreDistance = spatial.distance.cosine(genresA, genresB)
popularityA = np.array(a[2])
popularityB = np.array(b[2])
popularityDistance = abs(popularityA - popularityB)
return genreDistance + popularityDistance
print(ComputeDistance(movieDict[2], movieDict[4]))
import operator
def getNeighbors(movieID, K):
distances = []
for movie in movieDict:
if (movie != movieID):
dist = ComputeDistance(movieDict[movieID], movieDict[movie])
distances.append((movie, dist))
distances.sort(key=operator.itemgetter(1))
neighbors = []
for x in range(K):
neighbors.append(distance[x][0])
return neighbors
K = 10
avgRating = 0
neighbors = getNeighbors(1, K)
I got this error message from PowerShell:
Traceback(most recent call last):
dist = ComputeDistance(movieDict[movieID], movieDict[movie])
genreDistance = spatial.distance.cosine(genresA, genresB)
return correlation(u, v, w=w, centered=False)
uv = np.average(u*v, weights=w)
ValueError: operands could not be broadcast together with shape (19,)(0,)
I got this error message when I tried to debug the problem from ipython terminal:
c:\programdata\anaconda3\lib\site-packages\scipy\spatial\distance.py(695)correlation()
693 u = u - umu
694 v = v - vmu
---> 695 uv = np.average(u*v, weights=w)
696 uu = np.average(np.square(u), weights=w)
697 vv = np.average(np.square(v), weights=w)
This is the part of my code, why ...READ MORE
There are better ways of achieving the ...READ MORE
I was implementing gillespie algorithm and when ...READ MORE
Hey @Giorgio,
You can try this hope this ...READ MORE
Hi@akhtar,
In the above error it shows could not ...READ MORE
Hi@akhtar,
You may get this error if you ...READ MORE
Hi@akhtar,
You can download the shapely module in ...READ MORE
Hi, @Varshap
It’s a TypeError, which tells us ...READ MORE
Hi@akhtar,
If you are trying to install with pip ...READ MORE
img_shape = 224
test_data = []
test_labels = []
for ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in.
|
https://www.edureka.co/community/59597/operands-could-not-be-broadcast-with-shapes-19-0-knn
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
List
Box
List Box
List Box
List Box
Class
Definition
public : class ListBox : Selector, IListBox, IListBox2
struct winrt::Windows::UI::Xaml::Controls::ListBox : Selector, IListBox, IListBox2
public class ListBox : Selector, IListBox, IListBox2
Public Class ListBox Inherits Selector Implements IListBox, IListBox2
<ListBox .../> -or- <ListBox ...> oneOrMoreItems </ListBox>
- Inheritance
-
- Attributes
-
Examples
This example demonstrates how to add a collection of FontFamily objects directly to a ListBox control.
<ListBox> <TextBlock Text="Arial" FontFamily="Arial"/> <TextBlock Text="Courier New" FontFamily="Courier New"/> <TextBlock Text="Times New Roman" FontFamily="Times New Roman"/> </ListBox>
This example uses data binding to fill a ListBox control with a collection of FontFamily objects.
<ListBox x:
ObservableCollection<FontFamily> fonts = new ObservableCollection<FontFamily>(); public BlankPage() { this.InitializeComponent(); fonts.Add(new FontFamily("Arial")); fonts.Add(new FontFamily("Courier New")); fonts.Add(new FontFamily("Times New Roman")); }
Dim fonts As New ObservableCollection(of FontFamily) Public Sub New() MyBase.New() InitializeComponent() fonts.Add(New FontFamily("Arial")) fonts.Add(New FontFamily("Courier New")) fonts.Add(New FontFamily("Times New Roman")) End Sub
Remarks
ListBox lets users select from a pre-defined list of options presented like a text control. Use a ListBox when you want the options to be visible all the time or when users can select more than one option at a time. ListBox controls are always open, so several items can be displayed without user interaction.
Note
ListBox is useful when you are upgrading a Universal Windows 8 app that uses ListBox, and need to minimize changes. For new apps in Windows 10, we recommend using the ListView control instead.
Using a ListBox
Use a ListBox control to present a list of items that a user can select from. More than one item in a ListBox control is visible at a time. You specify whether the ListBox control allows multiple selections by setting the SelectionMode property. You can get or set the selected items for the list box by using the SelectedItems property.
Populating a ListBox
You populate the ListBox control by adding UIElement items directly to the Items collection, or by binding the ItemsSource property to a data source. ItemsSource items from data will initially clear the Items collection when the binding is evaluated, so don't set both properties.
ListBox has a dedicated control for its items, ListBoxItem. But when you populate the Items collection, you can use elements or data, you don't typically use explicit ListBoxItem objects. What happens internally is that when the ListBox composes its visual tree from its templates, specifically when expanding the ItemTemplate, it creates a ListBoxItem wrapper for each of the objects it's including as items. At run time, the Items collection still contains the original items you declared. The created ListBoxItem wrappers are deeper in the visual tree, inside the items panel (see ItemsPanel ) as its children. You don't usually need direct access to a ListBoxItem object. But if you want to access the created ListBoxItem wrappers, you can use Microsoft UI Automation techniques, or use VisualTreeHelper APIs, to walk into the object tree representation and find them.
ListBox vs. ListView and GridView
ListBox has many similarities with ListView or GridView (they share the parent class ItemsControl ), but each control is oriented towards different scenarios. ListBox is best for general UI composition, particularly when the elements are always intended to be selectable, whereas ListView or GridView are best for data binding scenarios, particularly if virtualization or large data sets are involved. For more info on virtualization, see Using virtualization with a list or grid.
|
https://docs.microsoft.com/en-us/uwp/api/windows.ui.xaml.controls.listbox
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
Opened 11 years ago
Last modified 10 years ago
#1632 new Bugs
Default Interval rounding policies incomplete
Description
Report originates at
The rounding policy requirements () list e.g. tan_down(), but none of the implementations in rounded_arith.hpp implement it.
The result is that this code fails to compile:
#include <boost/numeric/interval.hpp> int main( int ac, char* av[] ) { boost::numeric::interval<double> I(0.1, 0.2); I = tan(I); return 0; }
Change History (3)
comment:1 Changed 11 years ago by
comment:2 Changed 10 years ago by
comment:3 Changed 10 years ago by
According to the documentation at in the "transcendental function" section, the standard library routines for tan, etc, do not typically satisfy the needed rounding properties and therefore the templates which implement them are disabled by default.
I have used tan with the interval library. If you pass it a policy based on rounded_transc_std, it works fine. e.g. I have written a rounded_control specialization for the mpfr_class type from the gmpfrxx interface to mpfr. () With it I can declare the specialization
template<> struct rounded_math<mpfr_class> : save_state_nothing<rounded_transc_std<mpfr_class> > {};
and then code like
j = boost::numeric::interval<mpfr_class> (0.1,0.2); j = boost::numeric::tan(j); std::cout << "[" << j.lower() << "," << j.upper() << "]" << "\n";
compiles, executes, and gives plausible looking results. Therefore, I think this is working as designed.
This bug is still present in 1.35.0.
|
https://svn.boost.org/trac10/ticket/1632
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
AD7705/AD7706 Library Revisited
About a year ago, I wrote a simple library for interfacing AD7705/AD7706 with Arduino. The library works, but it requires some decent knowledge of the underlying chip, which had made it somewhat difficult to use. Most issues users reported can be resolved by adjusting the timing in user code, but I admit that it is somewhat difficult for users who are not familiar with the chip. For a library, I should have made it easier to use to begin with. So, I decided to add a few long-awaited features and hopefully these tweaks will make the library easier to use.
One of the changes to the original code is the addition of the dataReady function. This function queries the DRY bit in the communication register and returns true when the data ready bit is cleared.
bool AD770X::dataReady(byte channel) { setNextOperation(REG_CMM, channel, 1); digitalWrite(pinCS, LOW); byte b1 = spiTransfer(0x0); digitalWrite(pinCS, HIGH); return (b1 & 0x80) == 0x0; }
Using this function, we can wait till the converted data is ready before reading out the conversion result (rather then using delay statements):
double AD770X::readADResult(byte channel, float refOffset) { while (!dataReady(channel)) { }; setNextOperation(REG_DATA, channel, 1); return readADResult() * 1.0 / 65536.0 * VRef - refOffset; }
In readADResult, I added an optional parameter refOffset. If your Vref- is not tied to the ground then you can use this variable to set the offset voltage to be subtracted from the conversion result. The default operating mode is set to be bipolar. For AD7705 and AD7706, the difference between unipolar and bipolar operation is simply how the the input signal is referenced so by setting the input mode to bipolar, you can still measure unipolar voltages. All that is needed is to tie the Vref- to the ground and leave the refOffset with the default value (i.e. 0).
I have also added a reset function. By calling this function first in your setup code, you are guaranteed that the chip is brought to a known state. Some of the difficulties users faced using the original library is that, depending on how the system is powered up, the AD770x may not be in a consistent mode and thus the A/D results seemed to be random. The chip reset can be achieved by either using the RESET pin or code. In my opinion, implementing in code is the desired method unless you need highest performance possible. Another benefit is that this implementation requires one less MCU pin.
Finally, I added a few parameters to the alternative constructor. In case you want to fine-tune your setup (e.g. setup different gain/speed), you can use the alternative constructor instead.
The following example shows how to use this library to read ADC results from multiple channels:
#include <AD770X.h> AD770X ad7706(2.5); double v; void setup() { Serial.begin(9600); ad7706.reset(); ad7706.init(AD770X::CHN_AIN1); ad7706.init(AD770X::CHN_AIN2); } void loop() { v = ad7706.readADResult(AD770X::CHN_AIN1); Serial.print(v); v = ad7706.readADResult(AD770X::CHN_AIN2); Serial.print(" : "); Serial.println(v); }
Download: AD770X1.1.tar.gz (compatible with Arduino 1.0 IDE)
Hallo Kerry D.Wong!
It’s from Russia again :)
Thank you for the new library! Now it is compiled into the first version of the Arduino! But I have a question, is it possible to increase the polling rate channels? Or you can tell which part of the code of your library, can I change the time delay? So I can experiment with delay …
Thank you again! With the new library, all channels are working fine!
Yes, there is another constructor that takes the update rate as a parameter:
init(byte channel, byte clkDivider, byte polarity, byte gain, byte updRate)
and you can use the constants defined in the header to set your rates.
The clock divider can be set accordingly based on your oscillator frequency.
Hello Kerry,
Thanks for the library…very useful. I was wondering if it would be possible to use the library with an AD7715 which is pin compatible with the AD7706 but is only a single channel device. Which parts would need to be changed in order to make it work. I think the byte codes for accessing the different registers are the same??
Thanks again
Alex
Thanks Alex. As you observered, AD7705 and AD7706 are almost identical. You can use the exact same code for either AD7705 and AD7706. The only change would be the interpretation of the input pin s used.
For instance, if the channel setting is CH1=0 and CH0=0, for AD7705 AIN1+ and AIN- are used whereas for AD7706 AIN1 and COMMON are used.
Kerry,
I was actually looking at using the AD7715 which is not quite the same as an AD7706 – it is very similar though but this device only has one channel not two like the AD7706. There are other differences too though, there is no clock register in the AD7715. How difficult would it be to modify the AD7706 library to work with the AD7715? Please note that my coding skill is almost zero!!
Cheers
Alex
Sorry I didn’t quite read it right. I thought you were trying to use AD7705 (but actually you were trying to use AD7715).
Kerry,
My sincerest apologies, I have it working and it’s excellent!! I did not actually just try the library! Someday I will have to learn how to code properly and understand how libraries are written! Thank you so much for your library and your response…
Cheers
Alex
First thanks for this library kwong!
Alex, have you realy had success with this library and the AD7715? Because I didn’t…
I suppose you modify it because of the difference between AD770X and the AD7715 like clock register, and channel…
Would you offer us your modified code?
(French man writing…)
Cheers
Damien
Damien,
My code used Kerry’s library without being modified. I’m not a very good programmer so I can’t be certain how it worked with the AD7715. The critical part was adding a 100ms delay to Kerry’s example code. Once I did that the device started to work. I tested it on a breadboard with minimal components as specified in the datasheet with a variable resistor to provide analogue data! It worked first time…best of Luck
Here is my code:
/*
* AD770X Library
* sample code (AD7706)
* Kerry D. Wong
*
* 3/2011
*/
#include
AD770X ad7706(2.5f);
unsigned int v;
void setup()
{
Serial.begin(115200);
ad7706.reset();
ad7706.init(AD770X::CHN_AIN1);
// ad7706.init(AD770X::CHN_AIN2);
Serial.println(“Here we go…”);
}
void loop()
{
v = ad7706.readADResult(AD770X::CHN_AIN1);
Serial.println(v);
delay(100); //100ms delay for AD7715
}
Thanks Alex for replying!
But…
I can’t believe how it worked for you with the AD7715…
I’ve tried you sketch, but it didn’t work for me…
This library doesn’t work for me, but I wrote one for our AD7715.
I didn’t spend a lot of time on, and even it seem’s to work quite
well (thanks to kwong) there are a lots of difference between AD770X
and AD770X…
Hi Damien
Would you offer me your modified code for the AD7715?
Best regards
In that case: my email is jkuj@teknologisk.dk :)
Hi Kerry,
i’m using your library for AD7705. I still don’t understand how to get data. To take data from AIN1, this is the code i use:
#define LOOP_DELAY 120
setup:
ad7705.reset();
ad7705.init(AD770X::CHN_AIN1);
ad7705.init(AD770X::CHN_AIN3);
loop:
delay(LOOP_DELAY);
temp1 = ad7705.readADResult(AD770X::CHN_AIN1);
delay(LOOP_DELAY);
temp2 = ad7705.readADResult(AD770X::CHN_AIN3);
total[m]=temp2-temp1;
I’m working with UPDATE_RATE_60. Is there other way to get data?? If i only read AIN3 or AIN1, sometimes it doesn’t get data. Depending on the delay time (loop_delay), accuracy is higher or lower. When i read AIN1, i only have ground and i supposed that AIN1 and AIN3 should be for AIN1+/-. Why do i have to read AIN1 and AIN3 for AIN1+/-??.
I’m working with a duemilanove ATM328.
Thanks in advance
Hi Bodhi,
You can configure the AD7705 in either differential (bipolar) or single-ended (unipolar) operation. The simple constructor you used defaults to unipolar. If you need to take differential measurement, you can use the overloaded constructor: AD770X::init(byte channel, byte clkDivider, byte polarity, byte gain, byte updRate) , you can take a look at the cpp file to see how it can be used.
I am not sure why you need to include delay in your code though.
Hi kwong,
If i don’t use the delay, sometimes it doesn’t measure anything and accuracy is lower. I have to measure thermocouples and accuracy is very important.
I tried to use bipolar but values were wrong and the only way i could work was the one i wrote in the past post.
Thanks for your response.
Hi again kwong,
i tried to test your test program and i also need a delay. if i change the update_rate to other value different to 25 it doesn’t work, only measure 0.0000. With the gain, happens the same, measure something but out of the reality.
I’m using a crystal of 4.000M, can this be the reason?.
Thanks in advance
This is really strange.. I used the same design in one of my later projects () also with a 4Mhz crystal but I didn’t experiencing any delays… the code actually checks the status of the dataready bit and returns only when the conversion results are stable.
I guess as an alternative, you can poll the dataready pin, which should achieve the same result as the implementation in my code.
Hi again Kwong,
how can i poll the dataready? only checking the pin 12 of the ad7705?
Yes, according to the datasheet, the DRDY pin (12) goes low whenever the data is ready.
Hello ,
Can you check you library for issuse :
ad7706.init(AD770X::CHN_AIN1,AD770X::CLK_DIV_1, AD770X::BIPOLAR, AD770X::GAIN_4, AD770X::UPDATE_RATE_25);
Init must setup CHN_AIN1 , but it sets CHN_AIN2 … The same with AIN2. This settings must be swaped.
Regards,
Linas
Nice librarry , great job !!
Greetings!
I will try to connect AD7705 to the arduino Leonardo. The question is, in the datasheet used DRDY. It is also connected to the arduino or leave free? Also the datasheet CS goes to the ground, and you have it connected to the arduino.
Sorry for the horrible English. I’m from Ukraine.
Hi Bogdan,
DRDY can either be read from the register or from the pin. The method I used was reading from the register and thus the pin is left unused. The CS pin is used in the SPI protocol and is controlled within the SPI library.
Hello! This again I am! Another question: what is the face value of capacitors that connect to ground quartz? I understand that stupid question, but thanks for the answer!)
I used a 2MHz ceramic oscillator so the caps were not used. But can also use a crystal with 2 load capacitors. The value varies based on the crystal you choose, but anything between 18pF and 33pF will work.
Hi! One more question on the library. I’m using the Arduino Leonardo. It SPI pins are different from UNO. For normal operation of A can change them in the file library?Thanks for the fast reply!
I have a quick question, I’m using the AD7705 with the arduino mega 2560 chip. I modified the header file to use the correct pins for the mega 2560: MISO(pin 50), MOSI(pin 51), and SCK(pin 52). I also set pinCS to pin 24 (since that is what the AD7705 CS pin is connected to) and I setup pin 10 to be a 2 MHz clock (using timers) to go into MCLK IN since I don’t have a crystal. My current issue is that I will only get values equal to 0 or whatever I pass in as the reference voltage. If I pass 2.5 as my reference voltage then my output randomly alternates between 0 and 2.5, if I pass in 5, then it flips between 0 and 5. So could this be an issue with the chip or is this an issue because I’m using the mega 2560?
Hard to tell… the first thing I’d do is short the input (or provide a steady voltage) of the ADC and see how the output behave. AD7705/06 does have a minimum recommended clock frequency of 4MHz so not sure if it would work with a 2MHz clock.
Right now I have the AD7705 connected to a simple voltage divider across a 10k pot, if I plug the input into ground or if I pull the input high the data coming in is still 0.00. I setup a 4 MHz clock and my serial output to my screen stopped altogether, any ideas?
Sorry, my bad. AD7705’s minimum operating freq is 400kHz not 4MHz so 2MHz should be fine (and that’s the clock frequency I was using). Could you try a reset() immediately before reading the ADC value? Also, could you add a small delay (e.g. delay(10)) before reading the ADC to see if you observe any difference?
Currently I have a 500ms delay in my loop between readings, I tried the reset just before reading the ADC value and still nothing. At this point I’m inclined to think that I may have a bad chip
Kwong,
Firstly I’d like to thank you for putting together this library, I was wondering if you could help me with a problem I’ve got.
I am in the exact same boat as drewman2000; I’m on a Mega2560, have altered the header file for the Mega pins (50-52), using pin 10 as a 2MHz clock with the AIN connected to a 50k pot and all I get out is 0.00 with about a ~100ms delay between readings. I’ve tried adding the small delay and also the reset but neither work, do you have any ideas?
Is it also normal that it should take about 2-3 mins to complete the setup routine and start giving readings back?
Separately, is it possible to read the analog input as a decimal number rather than as a voltage?
Thanks,
Rob
Assuming you are using the revisited library querying the DRY bit instead of using delays. Since I don’t have an Arduino mega board I can’t test for sure, but something is definitely not right.
It shouldn’t take that long to get the initial reading, and the only way it could take that long is the dataReady function keeps returning “false”. In this case the readADResult function would be in a wait loop.
Could you try using a crystal oscillator instead of clock generated via Mega? Although it really shouldn’t matter.
Hi kwong,
i tried to operate your code program and i read H1= 0.0 H2=0.0 H1=0.0 H2=0.0 i can not reading Anything !!!!!!!!!!!!!!!
#include
#include
/****************************************************************************/
LiquidCrystal lcd(8, 9, 4, 5, 6, 7);
/****************************************************************************/
//set reference voltage to 2.5 V
AD770X ad7705(2.5);
float v1;
float v2;
float H1;
float H2;
void setup()
{
//initializes channel 1
ad7705.init(AD770X::CHN_AIN1);
//initializes channel 2
ad7705.init(AD770X::CHN_AIN2);
Serial.begin(9600);
lcd.begin(16, 2);
}
void loop()
{
//read the converted results (in volts)
v1 = ad7705.readADResult(AD770X::CHN_AIN1);
//read the converted results (in volts)
v2 = ad7705.readADResult(AD770X::CHN_AIN2);
H1 =(v1*1);
H2 =(v2*1);
Serial.println(“H1”);
Serial.println(H1);
Serial.println(“H2”);
Serial.println(H2);
lcd.setCursor(0,0);
lcd.print( H1);
lcd.setCursor(0,1);
lcd.print(H2);
unsigned int data=0;
delay(500);
}
Could you try reading just one channel and see what values you get?
You must place before AD7705 initialisation this: void AD770X::reset() Input shift register of AD7705 must be in known state before initialisation and communicating.
2Keryy: reset routine simplified:
void AD770X::reset() {
digitalWrite(pinCS, LOW);
spiTransfer(0xff);
digitalWrite(pinCS, HIGH);
}
I use cheap TM7705 modules from ebay. After corrrect interface reset this fake chips work fine.
Hello Brother,
Your library is the only thing that I got to get my work done,Actually I am Building a Data Logger so I need to port this code to ATMEGA 32 Mcu But I am could not do this,Is there any way that this library can be ported to a Off the market Atmega32 Mcu.
Thanks a lot for your work
Thank you very much for the AD770X library!
Is it possible to use pins other than 11, 12, 13 for MOSI/MISO/SCK?
I am asking because two of my devices do not seem to be compatible with each other (they work well independently with the Arduino), and I have extra digital lines available. I did not see a reference to SPI.h in your library and example, so I am hoping this is possible. Thanks again.
For ATMega328/or 328p those pins are fixed for SPI as this is a hardware function. But if you implement the SPI protocol on your own, you can use pretty much any pins.
Hi to to all,
I’m using the AD7705 and i found this lib very helpful, thanks a lot!
i just have quite a problem on the measuring:
i have connected to the GND the ai1(-)
i’m giving to the ai2(+) a known voltage from the arduino (voltage that i also measure with the multimeter)
The configuration is exactly the one shown in the scheme by kerry wong.
when i run the program, i get from the ad7705 a value lower than the one i expect, for example:
if i set 1 V from the arduino, i measure 1.00V with the multimeter, but i get 0.95V on the AD7705.
if i set 2 V from the arduino, i measure 2.00V with the multimeter, but i get 1.92V on the AD7705.
it seems like i have a gain problem (i set a gain og 1 btw), because the ratio voltge_expected/volytage_measured=1.045 ish, which is averagely constat.
Can it be a calibration problem? or am i making some stupid mistake? or is the IC broken?
hope someone can help me,
cheers,
Francesco
sorry,
my mistake i was talking about the ai1(+) not the ai2(+)
cheers,
francesco
Dear Kwong,
Thanks a lot for your library it works fine when i connect a potentiometer to the channel where the middle pin is connected to the +ve In and -ve In is connected to ground , now when i try to enter a sine wave as an input i get very unreasonable values , I’ve tried both making an offset so that the value of the input is between 0 and 1V for example or without offsite such that the input would vary between -50 and +50 mV (because it is said in the data sheet that there won’t be correct readings below -100 mV) do u have any ideas where the problem could be ?!
Best regards,
A question for anyone using the cheap boards off eBay that use the TM7705 chip (supposedly AD7705 compatible).
I’m using both inputs, and when I do, the data from AIN1 data comes out when I request data from AIN2 and the data from AIN2 come out when I request data from AIN1.
However when I use just AIN1 by itself, reading data from AIN1 works as expected, but when using only AIN2, the data can only be read by reading AIN1.
Does the real AD7705 exhibit this behavior or this is probably just a problem because the chip is (probably) a Chinese knock off?
Jeff
Hi, Jeff.
I am using TM7705 module with Arduino MEGA 2560 board, the input is unipolar, both AIN1(-) and AIN2(-) connect to GND. But I ran into a similar situation as you mentioned above. I do not have any infomation about the usage of TM7705 module. Have you made any progress on sovling the problem?
Based on MEGA 2560 schematic the pin connections are:
MISO(50), MOSI(51), SCK(50) and CS(53)
Thanks,
John
Sorry for typo, SCK(50) should be sck(52)
John
Wondering how you got data from the tm7705 chip. perhaps you could send me a picture or schematic of your wiring and the code you used. I cannot figure out how to get data using the example code. my email is koryherber@gmail.com Thanks!
Hello Kwong!
I have a module with AD 7705.
It has AD 7705 and reference voltage of 2.5 V. I connect it to the Arduino Uno with your library. I connect outputs in accordance with the protocol library SPI. I use a simple sketch of an example for testing. On the input AIN 1, I tried to connect the various signal sources (electret microphone, temperature sensor, battery 1.5 V). All values displayed in the serial monitor – zero. Help me to understand, Why it’s not working?
I am also having the same problem using the same module listed by Amir-aka. any ideas??
Hi, I am having the same problem with the TM7705 ( value is always zero). Does anyone know what can be the problem?
Thank you Wong. Your library is very useful.
I am also having problems with the module described by Amir (AD7705 Dual 16 bit ADC Data Acquisition Module Input Gain Programmable SPI Interface TM7705 ). I have changed the the bit CLK to account for the crystal of 4 MHz but still doesn’t work. It seems the module has some sort of hardware problem.
Does anyone had success using the module 7705?
Hi,
I am having compilation errors when using the program with the DUE. It has to do with the library. Has anybody made it work with a DUE? what modifications to the library are necessary? thank you
Since it was not working for the DUE I have tryed first with the UNO. It compiles correctly but all my readings are 0 (except for some which seem like noise). Has anyone had a similar problem? Any help? ty
Hi Jorge,
A lot of people seemed to have similar issues like what you were having. I am not entire sure whether there was some slight change in the chip or what. The ones I developed my code with are all working fine… So unless I can get a sample of the ones that people are having trouble with, I am not quite sure what’s going on.
It works on my TM7705, but I have only two digits after dot. How increase this to 4 digits?
Hi Mark, how did you manage to make it work? Can you please tell us, I am desperately facing the same issue of other users…
Hi Kerry,
I’m building an Arduino musical instrument, and I’m using your library and the AD7705.
Will it be OK to publish the code, including your library, as a blog post or as an instructable?
I’m asking because I couldn’t find a license reference in the code.
Thanks,
UriSh
Absolutely! Everything here is open sourced in the hope that it would be useful for other people.
Thanks for your reply!
I sure hope it will work out. I’ve been using the AD7705 to get a precise reading of a 10-turn potentiometer, and it worked perfectly fine, until I added a LCD+keypad shield (DFRobot). For some reason, when I connect both the LCD and the ADC, the ADC hangs. I’ve checked to see if I fried the hardware or something like that, checked if the pinout matches, and most everything else that came to my mind. Nothing works so far, which is very frustrating. If you have some insight on how to get this to work, I would sure like to know.
Thank you very much,
UriSh
For anyone using the TM7705 red boards of ebay.
I had a lot of issues getting mine going until I attached 5v to the reset pin. I’m not great at reading wiring diagrams but I thought that it was already connected, but it wasn’t.
I get readings from a pot using the default code above on Arduino uno with the tm7705 above and the Pot connected to 5v, ain+, ain-.
it reads up until 2.5v.
if you touch a wire to ain- and the REFIN+ (The fourth pin from the dot(ground) on the ad7705 chip), you get readings from 0-5v.
I have a question for Kerry D Wong. (Thanks so much for this library, its so great that you did all the work and than shared it!)
I am trying to get a reading from an RTD sensor on the board. While I would love your help with that too my actual question is:
To get a bipolar reading, do I need to read ain+ and ain- (by putting ad7705.readADResult(AD770X::CHN_COMM)). Or is that a question that would not even make sense if I knew more aobut electronics?
(I ask because when I read Ain+ with the REF as the agitation for the RTD, I only get the agitation value (2.5) as a reading)
Thanks for all your help in both writing this library and then reviewing it to make it easier :)!
Hi, to use bipolar mode with AD7705, you need to call the overloaded initialization function (definition below):
void AD770X::init(byte channel, byte clkDivider, byte polarity, byte gain, byte updRate)
and passing in AD770x::BIPOLAR into the polarity variable.
Also, in bipolar mode, the measurable range between ain+ and ain- is between -2.5V to 2.5V if the reference is 2.5V.
Thanks for the great library, but im not able to get correct values on the ouptut, it is always the same even if i change the input voltage, my module is the red from ebay i also tried connecting reset pin to 5v but nothing changes.
Hi Kerry,
i´m quite desperate to bring AD7706 alive. Im using uno board with AT328, all wires are connected correctly and still i cant get nothing but 0.00 in serial monitor. Would you be so kind and give me an advice how to fix it. It looks like AD7006 doesnt comunicate with uC.
Thanks for reply
Hello, if you are interested, i’ve ported this library in python to use it with spidev (for raspi for ex). Here is the link :
Awesome, thanks!
Ps. I also use the red one from eBay connected on an pi and it work, even without connected the reset pin.it has a vref = 2.5v.
Dear Kerry!
In file AD7705.h we see:
static const byte UNIPOLAR = 0x0;
static const byte BIPOLAR = 0x1;
In datasheet DOC000143693.pdf page 12
B/U Bipolar/Unipolar Operation. A “0” in this bit selects Bipolar Operation. A “1” in this bit selects Unipolar Operation.
Where is the mistake?
I also found it and I suppose datasheeet is right.
Dear Kerry D. Wong, I am facing the same issue of other users. Any updates for TM7705?
Severals errors:
1. in AD770x.h
static const byte UNIPOLAR = 0x0; // MUST BE 0X1
static const byte BIPOLAR = 0x1; // MUST BE 0X0
2.
static const byte CLK_DIV_1 = 0x1; // MUST BE 0x00
static const byte CLK_DIV_2 = 0x2; // MUST BE 0x01
Thanks Roman for your reply. However, I still have some issues and my readings are always zero. I think I have figured out a problem: while using the scope for the DRDY pin I noticed that it does not go LOW.
Hi kerry,
thanks for you library. it is realy promising.
ıf ı change
AD770X ad7706(2.5);
double v;
to
AD770X ad7706(2.5);
double v1,v2;
the outputs fluctuate randomly. Both channel can be read easiliy however two channel give unexpected outcome.
could you comment on that? And do we need to change (as Roman said)?
static const byte UNIPOLAR = 0x0; // MUST BE 0X1
static const byte BIPOLAR = 0x1; // MUST BE 0X0
thanks a lot.
thanks
Hi. In the monitor serial see. Why?
0.00 : 0.00
0.00 : 0.00
0.00 : 0.00
0.00 : 0.00
0.00 : 0.00
0.00 : 0.00
0.00 : 0.00
did you fix the problem my ad7705 also show 0.0
Hello Sir
Thank you for the library.
I wanted to know weather this code will run in Arudino Due.
can you please do tell me what can be the code to interface AD7705 in Arudino Due.
Thanks
Mark
Hello everybody
first of all thanks for the lib.
I have problem with AC7705 conversion. my sketch is the following.
#include
#include
AD770X ad7705(65536);
unsigned int ADCValue1;
unsigned int ADCValue2;
void setup() {
// Open serial communications and wait for port to open:
Serial.begin(9600);
while (!Serial) {
; // wait for serial port to connect. Needed for Leonardo only
}
ad7705.reset();
delay(1000);
ad7705.init(AD770X::CHN_AIN1,AD770X::CLK_DIV_1, AD770X::BIPOLAR, AD770X::GAIN_1, AD770X::UPDATE_RATE_25);
ad7705.init(AD770X::CHN_AIN2,AD770X::CLK_DIV_1, AD770X::BIPOLAR, AD770X::GAIN_1, AD770X::UPDATE_RATE_25);
delay(1000);
}
void loop() {
ADCValue1 = ad7705.readADResult(AD770X::CHN_AIN1);
ADCValue2 = ad7705.readADResult(AD770X::CHN_AIN2);
Serial.print(“AD7705 analog 16bit input 1: “);
Serial.println(ADCValue1);
Serial.print(“AD7705 analog 16bit input 2: “);
Serial.println(ADCValue2);
Serial.println(“-“);
delay(1000);
}
but the serial output still zero forever.
AD7705 analog 16bit input 1: 0
AD7705 analog 16bit input 2: 0
–
AD7705 analog 16bit input 1: 0
AD7705 analog 16bit input 2: 0
even if i put 3 volt at the input.
somebody can help me?
thanks.
Hey Miki
I’ve also been there, so in my case what worked was to pull up the rst pin of the ADC using a pullup resistor.
First, your line does not look good to me “AD770X ad7705(65536);”
you might want to provide the ref voltage like so:
AD770X ad7705(2.5);
Second, you must expect double values from readResult, not unsigned int
Best
Dear,
Please note that I had a reading problem with the AD7705 (DRDY never going to level Zero).
Board with PIC16F690 XTAL @8 MHz, PortC Driving all ADC pins.
Programming with MikroBasic v7 Pro and Pickit2.
Driving AD7705 pins with own software and simple functions (not really SPI protocol).
I noted that “reading” was subject to setting, when reading datas, the AD7705 Din input … MUST be set to level ONE !
I Don’t understand why ( possibly inputting command while reading ?) but it WORKED really !
All is now OK !
Perhaps some solution to others softwares and users.
If you are interrested for all users, please give me your e-mail where I can send my software (test).
Antonio
Hello everyone, tried to use the library from Kerry Dwong with the TM7705 chip from Ebay (red breakboard), link:. It has a reference voltage of 2.5V and a oscillator the matches the requirement stated in the AD7705 datasheet.
I Have rewritten the library from Kerry Dwong for the AD7705, and now added SPI lib to it, so it can be used for all type of controllers. The library can be found here:. Enjoy it :)
However, I am like everyone else getting readings of only zeroes. I want to know if anyone else has solved this problem, and if so how? Furthermore is there a chance that I can get a look into the library you used/made to understand how you made it work with the TM7705 version? Also how did you guys connect the breakboard with arduino controller and to what ever sensor you are using – I am using a load cell.
My connections to arduino due and load cell:
For Arduino due and TM7705:
GND -> GND
VCC -> 5V
RST -> 5V or nothing (as you can see in the circuit diagram)
CS -> pin 10
SCK -> pin 76
DIN -> pin 75
DOUT -> pin 74
DRDY -> Nothing, since i am using the internal resistor.
For load cell and TM7705:
AIN1 + -> Output data
AIN1 – -> Output data
The two left wires coming from the load cell goes to GND and 5V for excitation.
If you look into my library you can see that I am using SPI_MODE = 3. I am not sure if that us correct, but i think so since it states that in the sample code in the datasheet.
Furthermore I am using Unipolar mode which I think is the correct mode, though not sure?
Please if you find any mistakes in the library, do tell me so I can correct it :)
Hope you guys have an answer for my questions :)
Hello
Please … I Don’t know very well C language and how is written SPI library … I am only a Mikroe Basic language writer.
Verify that before reading datas … the DIN of AD7705 is SET … DIN=1
In my program, DIN of AD7705 is connected to Spi_SDO output of my board with PIC16F690 minimum system …
Here are the définitions of my SPI pins for PIC16F690.
dim Chip_Select as sbit at RC0_bit
Spi_DRDY as sbit at RC1_bit
Spi_RST as sbit at RC2_bit
Spi_CLK as sbit at RC3_bit
Spi_SDI as sbit at RC4_bit // connected to Dout AD7705
Spi_SDO as sbit at RC5_bit // connected to Din AD7705
dim Chip_Select_Direction as sbit at TRISC0_bit
Spi_DRDY_Direction as sbit at TRISC1_bit
Spi_RST_Direction as sbit at TRISC2_bit
Spi_CLK_Direction as sbit at TRISC3_bit
Spi_SDI_Direction as sbit at TRISC4_bit // =1 so Spi_SDI is an input connected to AD7705 Dout output
Spi_SDO_Direction as sbit at TRISC5_bit // =0 so Spi_SDO is an output connected to AD7705 Din input.
So Spi_SDO is an output … and I had to make Spi_SDO = 1 –> Din = 1 before reading datas of AD7705.
If Spi_SDO = 0 … then I only read 0000 !
See the changes …
Spi_SDO = 1 The last column is the mean of the first 8 samples
(speed = ~ 100 samples /second, turning slowly input voltage with pot 10 turns)
8BD7 8B90 8B46 8B03 8AC4 8A85 8A46 8A06 00008AE9
89C4 8982 893C 88FA 88BB 887F 8848 8813 000088E2
87E3 87B9 8792 876E 874C 872C 870C 86EC 00008761
86CC 86A9 8684 865F 8638 8615 85F7 85E0 0000864F
85CC 85BD 85B2 85A6 859E 8599 8598 8597 000085A9
8596 8597 8596 8595 8595 8593 8591 8592 00008594
8595 8599 859D 85A6 85B5 85C8 85DE 85F8 000085B8
8612 862A 8643 865D 8675 868B 86A3 86BF 00008668
Spi_SDO = 0
I hope this can help you …
Best Regards to everyone
|
http://www.kerrywong.com/2012/04/18/ad7705ad7706-library-revisited/comment-page-1/
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
Production time profiling On-Demand with Java Flight Recorder
- Beverley Blair
- 2 years ago
- Views:
Transcription
1
2
3 Production time profiling On-Demand with Java Flight Recorder Using Java Mission Control & Java Flight Recorder Klara Ward Principal Software Developer Java Platform Group, Oracle Copyright 2015, Oracle and/or its affiliates. All rights reserved. Oracle Confidential
4 About me Developer in the Oracle Java Mission Control team in Stockholm, Sweden Sometimes Celebrating Java 20 Years at JFokus
5. 5
6 Agenda Overview of Java Mission Control Overview of Java Flight Recorder Demo Customization, Future, Links Q&A 6
7 Maurizio Cimadamore, Oracle (Java LangTools) Allan Thrane Andersen, Trygg I managed to do in one day what I've failed JMC is my main tool for getting insight into to do in 2+ weeks using <profiling tool> and the rhythm of a JVM and the running <another profiling tool>. applications. I have used recordings to resolve critical production issues caused by latency, memory-leaks or threading.
8 RebelLabs Developer Productivity Report 2015, Java Performance Survey
9 Java Mission Control profiling tool What do they mean? Probably: Data from Java Flight Recorder Visualized in Java Mission Control
10 Java Flight Recorder (JFR) & Java Mission Control (JMC) Brief overview JDK JVM JFR Events Low overhead JFR Engine Control recordings start/stop/dump Recording data myrecording.jfr java -XX:StartFlightRecording JDK/bin/jcmd <pid> <cmd> JMC java -XX:+FlightRecorder JDK/bin/jmc or Eclipse plug-ins
11 Overview of Java Mission Control The graphical client 11
12 Java Mission Control Overview A tools suite for production use (fine in development too) Basic monitoring Production time profiling and diagnostics Free for development and evaluation Tool usage is free, data creation in production requires a commercial license tiny.cc/javalicense
13 History of Mission Control JRockit Flight Recorder Appeal(JRockit) -> BEA Systems -> Oracle <- Sun Best JRockit features -> HotSpot JVM JFR and JMC released with 7u40, September 2013
14 Java Mission Control Main Tools Two main tools: JMX Console Online monitoring Flight Recorder Offline low overhead profiler Control and visualization in JMC JRockit Mission Control also had the Memory Leak Analyzer
15 Experimental Plugins Downloadable from within Mission Control DTrace JFR style visualization of data produced by DTrace JOverflow Memory anti-pattern analysis from hprof dumps JMX Console plug-ins Java Flight Recorder plug-ins WLS JavaFX
16 JMC installation/startup <JDK>/bin/jmc Mac: (/usr/bin/) jmc Add if needed: consolelog debug ( more 2>&1 ) Eclipse plug-ins Install from update site on OTN: Eclipse Update Site Experimental plug-ins: Install from within the JMC app, or from, Eclipse Experimental Update Site
17 Overview of Java Flight Recorder Low overhead profiling 17
18 Flight Recorder 101 High Performance Event Recorder Built into the JVM Already available runtime information Measuring the real behavior, doesn t disable JVM optimizations Binary recordings Self contained self describing chunks Very detailed information Extremely low overhead (~1-2%) Can keep it always on, dump when necessary
19 Java Flight Recorder (JFR) & Java Mission Control (JMC) Brief overview JDK Java Events JFR JMX API JFR Java API Control recordings start/stop/dump java -XX:StartFlightRecording JDK/bin/jcmd <pid> <cmd> JVM JVM Events JFR Engine Recording data myrecording.jfr JMC java -XX:+FlightRecorder JDK/bin/jmc or Eclipse plug-ins
20 Data collected by JFR Java application behavior Threads/Locks I/O Exceptions JVM behavior (indirect Java application behavior) Garbage collection, allocation JIT Compiler Implemented by the different subsystem teams
21 Method sampling Sampling profiler Not displaying every single call to your method This is partly why we get the low overhead Detects hot methods Does not require threads to be at safepoints (Flags currently needed to give more accurate non-safepoint data) -XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints (will be default in coming releases) Not sampling threads in native
22 RebelLabs Developer Productivity Report 2015, Java Performance Survey
23 RebelLabs Developer Productivity Report 2015, Java Performance Survey
24 Different Kinds of Events Instant Event - Exception Duration Event Thread.sleep Configurable threshold Requestable Event Method profiling sample Polled from separate thread Configurable period Period and threshold settings impact the performance overhead
25 Event settings Predefined settings default designed to get max information within <= 1 % overhead profile even more information, ~2 % overhead Enabling of event types, configuring periods and thresholds jre/lib/jfr/*.jfc Design your own from the Mission Control GUI
26 Different Kinds of Recordings Continuous Recordings Have no end time Must be explicitly dumped Example use case: Enable at startup, dump the last X minutes when needed Time Fixed Recordings ( profiling recordings ) Have a fixed time If started from Java Mission Control, opened automatically in the GUI Example use case: Performance testing under load, do a 1 minute recording Resulting *.jfr file also called Recording
27 Creating recordings More than one way 27
28 Preparations Start the JVM from which to get recordings with: -XX:+UnlockCommercialFeatures -XX:+FlightRecorder In 8u40 and later, possible to enable at runtime if needed Using JMC or jcmd On-demand
29 Creating Recordings Using Mission Control 1. Find a JVM to do a recording on in the JVM Browser 2. Double click the Flight Recorder node under the JVM 3. Follow the wizard NEW: No need for the JVM flags, automatic enablement from JMC
30 Creating Recordings Using Startup Flags -XX:+UnlockCommercialFeatures -XX:+FlightRecorder Time fixed -XX:StartFlightRecording= delay=20s,duration=60s, filename=c:\tmp\myrecording.jfr,settings=profile,name=javaland Continuous w/ dumponexit -XX:StartFlightRecording=settings=default -XX:FlightRecorderOptions=dumponexit=true, dumponexitpath=c:\tmp\myrecordings (Needed before 8u20: -XX:FlightRecorderOptions=defaultrecording=true ) See documentation for Java options (google java options )
31 Creating Recordings Using JCMD Useful for controlling JFR from the command line Usage: jcmd <pid> <command> Starting a recording: jcmd 7060 JFR.start name=myrecording settings=profile delay=20s duration=2m filename=c:\tmp\myrecording.jfr Dumping a recording: jcmd 7060 JFR.dump name=myrecording filename=c:\tmp\dump.jfr Unlocking commercial features (if JVM not started with the flag): jcmd 7060 VM.unlock_commercial_features Don t forget to try jcmd <pid> help to see what else jcmd can do
32 Creating Recordings Using JMX Console triggers When realtime monitoring your application Start JMC Connect a JMX Console to your application Configure and enable rules on the Triggers tab Recording will be started or dumped when the trigger occurs
33 Remote production systems No GUI, security restrictions Remote access either using -Dcom.sun.management.jmxremote, connect with JMC or start recording from local commandline using jcmd, transfer JFR to workstation Enabling JFR at startup some very small initiation overhead at startup, threads allocate small amount of extra memory dynamically 0% overhead at startup, initiation overhead happens at time of enabling, might cause some classes to be deoptimized etc. If you want to avoid restart
34 Analyzing recordings Using the graphical client 34
35 How to think about the information shown When analyzing Flight Recordings Only you know what your application is supposed to be doing Batch job, or real time trading? Do you want the CPU usage to be high or low? If you have a theory about what is wrong, you can find out why
36 How to think about the information shown Common pitfalls CPU load Low load method sampling not very interesting Full load latencies not very interesting/likely Event thresholds Keeping performance up but still detecting outliers Is the thread running normally during 30%? Hint: Default threshold is 10 ms.
37 DEMO! Flight Recorder startup and analysis 37
38 Customization Unsupported 38
39 import com.oracle.jrockit.jfr.*; public class Example { private final static String PRODUCER_URI = ""; private Producer myproducer; private EventToken mytoken; } Adding Your Own Events (unsupported) public Example() throws URISyntaxException, InvalidEventDefinitionException, InvalidValueException { myproducer = new Producer("Demo Producer", "A demo event producer.", PRODUCER_URI); mytoken = myproducer.addevent(myevent.class); name = "My Event", description="an event triggered by dostuff.", stacktrace=true, thread=true) private class MyEvent extends TimedEvent description="the logged important stuff.") private String text; public MyEvent(EventToken eventtoken) { super(eventtoken); } } public void settext(string text) { this.text = text; } public void dostuff() { MyEvent event = new MyEvent(myToken); event.begin(); String importantresultinstuff = ""; // Generate the string, then set it... event.settext(importantresultinstuff); event.end(); event.commit(); } 39
40 Adding Your Own Events (unsupported) Reusing the event (not thread safe)... private MyEvent event = new MyEvent(myToken); public void dostuffreuse() { event.reset(); event.begin(); String importantresultinstuff = ""; // Generate the string, then set it... event.settext(importantresultinstuff); event.end(); event.commit(); }... NB: If JFR is not enabled, the custom events gives no (0%) overhead 40
41 Parsing recordings (unsupported) Two unsupported options The JDK parser import oracle.jrockit.jfr.parser.*; SAX style parser The JMC parser import com.jrockit.mc.flightrecorder.flightrecording; DOM style parser 41
42 Built in GUI editor (unsupported) Show view -> Designer Customize the existing GUI or produce entirely new GUIs for events Export the created GUI to share it with others
43 Future 43
44 Future JFR - Supported API for adding your own JFR events JMC - Automatic analysis of Flight Recordings You have used a JVM flag that is not recommended Your application is doing a lot of GC, investigate more here...
45 Resources Homepage: (Click Discussion to see the forum) @hirt Blog: Facebook: ( for JMC tutorial)
46 Questions? Q: Can you repeat those JVM flags again? A: -XX:+UnlockCommercialFeatures XX:+FlightRecorder 46
47 Safe Harbor Statement. 47
48
49
Java Mission Control
Java Mission Control Harald Bräuning Resources Main Resource: Java Mission Control Tutorial by Marcus Hirt includes sample projects! Local copy: /common/fesa/jmcexamples/jmc_tutorial
Oracle JRockit Mission Control Overview
Oracle JRockit Mission Control Overview An Oracle White Paper June 2008 JROCKIT Oracle JRockit Mission Control Overview Oracle JRockit Mission Control Overview...3 Introduction...3 Non-intrusive profiling...
BEAJRockit Mission Control. Using JRockit Mission Control in the Eclipse IDE
BEAJRockit Mission Control Using JRockit Mission Control in the Eclipse IDE Mission Control 3.0.2 Document Revised: June, 2008 Contents 1. Introduction Benefits of the Integration................................................
Oracle Java SE and Oracle Java Embedded Products
Oracle Java SE and Oracle Java Embedded Products This document describes the Oracle Java SE product editions, Oracle Java Embedded products, and the features available with them. It contains the following
2 2011 Oracle Corporation Proprietary and Confidential
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material,
Tool - 1: Health Center
Tool - 1: Health Center Joseph Amrith Raj 2 Tool - 1: Health Center Table of Contents WebSphere Application Server Troubleshooting... Error! Bookmark not defined. About
enterprise professional expertise distilled
Oracle JRockit The Definitive Guide Develop and manage robust Java applications with Oracle's high-performance Java Virtual Machine Marcus Hirt Marcus Lagergren PUBLISHING enterprise professional expertise
Oracle JRockit E15070-06
Oracle JRockit Flight Recorder Run Time Guide Release R28 E15070-06 May 2011 This document contains background on the Oracle JRockit Flight Recorder Run-time implementation and instructions for using this
Java Debugging Ľuboš Koščo
Java Debugging Ľuboš Koščo Solaris RPE Prague Agenda Debugging - the core of solving problems with your application Methodologies and useful processes, best practices Introduction to debugging tools >
THE BUSY DEVELOPER'S GUIDE TO JVM TROUBLESHOOTING
THE BUSY DEVELOPER'S GUIDE TO JVM TROUBLESHOOTING November 5, 2010 Rohit Kelapure Agenda 2 Application Server component overview Support
Mission-Critical Java. An Oracle White Paper Updated October 2008
Mission-Critical Java An Oracle White Paper Updated October 2008 Mission-Critical Java The Oracle JRockit family of products is a comprehensive portfolio of Java runtime solutions that leverages the base
Monitoring HP OO 10. Overview. Available Tools. HP OO Community Guides
HP OO Community Guides Monitoring HP OO 10 This document describes the specifications of components we want to monitor, and the means to monitor them, in order to achieve effective monitoring of HP Operations
CSE 403. Performance Profiling Marty Stepp
CSE 403 Performance Profiling Marty Stepp 1 How can we optimize it? public static String makestring() { String str = ""; for (int n = 0; n < REPS; n++) { str += "more"; } return str; } 2 How can we optimize
PTC System Monitor Solution Training
PTC System Monitor Solution Training Patrick Kulenkamp June 2012 Agenda What is PTC System Monitor (PSM)? How does it work? Terminology PSM Configuration The PTC Integrity Implementation Drilling Down
Oracle WebLogic Server Monitoring and Performance Tuning
Oracle WebLogic Server Monitoring and Performance Tuning Duško Vukmanović Principal Sales Consultant, FMW Stuck Threads A Label given to threads not returned to thread pool after
WebLogic Server Administration
ORACLE PRODUCT LOGO WebLogic Server Administration Roger Freixa Principal Product Manager 1 Copyright 2011, Oracle and/or its affiliates. All rights reserved. WebLogic Concepts 2 Copyright 2011, Oracle
Java Troubleshooting and Performance
Java Troubleshooting and Performance Margus Pala Java Fundamentals 08.12.2014 Agenda Debugger Thread dumps Memory dumps Crash dumps Tools/profilers Rules of (performance) optimization 1. Don't optimize
TDA - Thread Dump Analyzer
TDA - Thread Dump Analyzer TDA - Thread Dump Analyzer Published September, 2008 Copyright 2006-2008 Ingo Rockel Table of Contents 1.... 1 1.1. Request Thread Dumps... 2 1.2. Thread
WEBAPP PATTERN FOR APACHE TOMCAT - USER GUIDE
WEBAPP PATTERN FOR APACHE TOMCAT - USER GUIDE Contents 1. Pattern Overview... 3 Features 3 Getting started with the Web Application Pattern... 3 Accepting the Web Application Pattern license
Java Garbage Collection Basics
Java Garbage Collection Basics Overview Purpose This tutorial covers the basics of how Garbage Collection works with the Hotspot JVM. Once you have learned how the garbage collector functions, learn how
Tutorial: Load Testing with CLIF
Tutorial: Load Testing with CLIF Bruno Dillenseger, Orange Labs Learning the basic concepts and manipulation of the CLIF load testing platform. Focus on the Eclipse-based GUI. Menu Introduction about Load
WebSphere Server Administration Course
WebSphere Server Administration Course Chapter 1. Java EE and WebSphere Overview Goals of Enterprise Applications What is Java? What is Java EE? The Java EE Specifications Role of Application Server What
WebLogic Server Admin
Course Duration: 1 Month Working days excluding weekends Overview of Architectures Installation and Configuration Creation and working using Domain Weblogic Server Directory Structure Managing and Monitoring
Securing SAS Web Applications with SiteMinder
Configuration Guide Securing SAS Web Applications with SiteMinder Audience Two application servers that SAS Web applications can run on are IBM WebSphere Application Server and Oracle WebLogic Server.
IBM WebSphere Server Administration
IBM WebSphere Server Administration This course teaches the administration and deployment of web applications in the IBM WebSphere Application Server. Duration 24 hours Course Objectives Upon completion
NetBeans Profiler is an
NetBeans Profiler Exploring the NetBeans Profiler From Installation to a Practical Profiling Example* Gregg Sporar* NetBeans Profiler is an optional feature of the NetBeans IDE. It is a powerful tool,
Configuring and Integrating JMX
Configuring and Integrating JMX The Basics of JMX 3 JConsole 3 Adding a JMX Component Monitor to SAM 6 This document includes basic information about JMX and its role with SolarWinds SAM 2 Configuring
Debugging Java performance problems. Ryan Matteson matty91@gmail.com
Debugging Java performance problems Ryan Matteson matty91@gmail.com Overview Tonight I am going to discuss Java performance, and how opensource tools can be used to debug
WEBLOGIC ADMINISTRATION
WEBLOGIC ADMINISTRATION Session 1: Introduction Oracle Weblogic Server Components Java SDK and Java Enterprise Edition Application Servers & Web Servers Documentation Session 2: Installation System Configuration
Rational Application Developer Performance Tips Introduction
Rational Application Developer Performance Tips Introduction This article contains a series of hints and tips that you can use to improve the performance of the Rational Application Developer. This article
Determine the process of extracting monitoring information in Sun ONE Application Server
Table of Contents AboutMonitoring1 Sun ONE Application Server 7 Statistics 2 What Can Be Monitored? 2 Extracting Monitored Information. 3 SNMPMonitoring..3 Quality of Service 4 Setting QoS Parameters..
2015 ej-technologies GmbH. All rights reserved. JProfiler Manual Index JProfiler help... 8 How to order... 9 A Help topics... 10 A.1 Profiling... 10 A.1.1 Profiling modes... 10 A.1.2 Remote profiling...
<Insert Picture Here> Java Application Diagnostic Expert
Java Application Diagnostic Expert Agenda 1. Enterprise Manager 2. Challenges 3. Java Application Diagnostics Expert (JADE) 4. Feature-Benefit Summary 5. Features Overview Diagnostic
WebLogic Server: Installation and Configuration
WebLogic Server: Installation and Configuration Agenda Application server / Weblogic topology Download and Installation Configuration files. Demo Administration Tools: Configuration
Orbix 6.3.7. Release Notes
Orbix 6.3.7 Release Notes Micro Focus The Lawn 22-30 Old Bath Road Newbury, Berkshire RG14 1QN UK Copyright Micro Focus 2014. All rights reserved. MICRO FOCUS, the Micro Focus
Web Load Stress Testing
Web Load Stress Testing Overview A Web load stress test is a diagnostic tool that helps predict how a website will respond to various traffic levels. This test can answer critical questions such as: How
Tuning WebSphere Application Server ND 7.0. Royal Cyber Inc.
Tuning WebSphere Application Server ND 7.0 Royal Cyber Inc. JVM related problems Application server stops responding Server crash Hung process Out of memory condition Performance degradation Check if the
Eclipse installation, configuration and operation
Eclipse installation, configuration and operation This document aims to walk through the procedures to setup eclipse on different platforms for java programming and to load in the course libraries for
HeapStats: Your Dependable Helper for Java Applications, from Development to Operation
: Technologies for Promoting Use of Open Source Software that Contribute to Reducing TCO of IT Platform HeapStats: Your Dependable Helper for Java Applications, from Development to Operation Shinji Takao,
TIBCO Hawk SNMP Adapter Installation
TIBCO Hawk SNMP Adapter Installation Software Release 4.9.0 November 2012 Two-Second Advantage Important Information SOME TIBCO SOFTWARE EMBEDS OR BUNDLES OTHER TIBCO SOFTWARE. USE OF SUCH EMBEDDED
<Insert Picture Here> What's New in NetBeans IDE 7.2
Slide 1 What's New in NetBeans IDE 7.2 The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated
IBM Tivoli Composite Application Manager for WebSphere
Meet the challenges of managing composite applications IBM Tivoli Composite Application Manager for WebSphere Highlights Simplify management throughout the life cycle of complex IBM WebSphere-based J2EE
Practical Performance Understanding the Performance of Your Application
Neil Masson IBM Java Service Technical Lead 25 th September 2012 Practical Performance Understanding the Performance of Your Application 1 WebSphere User Group: Practical Performance Understand the Performance
A Practical Method to Diagnose Memory Leaks in Java Application Alan Yu
A Practical Method to Diagnose Memory Leaks in Java Application Alan Yu 1. Introduction The Java virtual machine s heap stores all objects created by a running Java application. Objects are created by,
Web Performance, Inc. Testing Services Sample Performance Analysis
Web Performance, Inc. Testing Services Sample Performance Analysis Overview This document contains two performance analysis reports created for actual web testing clients, and are a good example of the.
Actualtests.C2010-508.40 questions
Actualtests.C2010-508.40 questions Number: C2010-508 Passing Score: 800 Time Limit: 120 min File Version: 5.6 C2010-508 IBM Endpoint Manager V9.0 Fundamentals Finally, I got
CA Nimsoft Monitor. Probe Guide for Java Virtual Machine Monitoring. jvm_monitor v1.4 series
CA Nimsoft Monitor Probe Guide for Java Virtual Machine Monitoring jvm_monitor v1.4 series Legal Notices This online help system (the "System") is for your informational purposes only and is subject to
Extreme Performance with Java
Extreme Performance with Java QCon NYC - June 2012 Charlie Hunt Architect, Performance Engineering Salesforce.com sfdc_ppt_corp_template_01_01_2012.ppt In a Nutshell What you need to know about a
orrelog Ping Monitor Adapter Software Users Manual
orrelog Ping Monitor Adapter Software Users Manual mailto:info@correlog.com CorreLog, Ping Monitor Users Manual Copyright 2008-2015, CorreLog, Inc. All rights reserved. No part
HP Insight Diagnostics Online Edition. Featuring Survey Utility and IML Viewer
Survey Utility HP Industry Standard Servers June 2004 HP Insight Diagnostics Online Edition Technical White Paper Featuring Survey Utility and IML Viewer Table of Contents Abstract Executive Summary 3
Mobile Performance Management Tools Prasanna Gawade, Infosys April 2014
Mobile Performance Management Tools Prasanna Gawade, Infosys April 2014 Computer Measurement Group, India 1 Contents Introduction Mobile Performance Optimization Developer Tools Purpose and Overview Mobile
WINDOWS PROCESSES AND SERVICES
OBJECTIVES: Services o task manager o services.msc Process o task manager o process monitor Task Scheduler Event viewer Regedit Services: A Windows service is a computer program that operates in the background.
CHAPTER 1 - JAVA EE OVERVIEW FOR ADMINISTRATORS
CHAPTER 1 - JAVA EE OVERVIEW FOR ADMINISTRATORS Java EE Components Java EE Vendor Specifications Containers Java EE Blueprint Services JDBC Data Sources Java Naming and Directory Interface Java Message
SilkTest Workbench. Getting Started with.net Scripts
SilkTest Workbench Getting Started with.net Scripts Borland Software Corporation 4 Hutton Centre Dr., Suite 900 Santa Ana, CA 92707 Copyright 2010 Micro Focus (IP) Limited. All Rights Reserved. SilkTest
Developing Android applications in Windows
Developing Android applications in Windows Below you will find information about the components needed for developing Android applications and other (optional) software needed to connect to the institution
|
http://docplayer.net/18317619-Production-time-profiling-on-demand-with-java-flight-recorder.html
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
How do I use strings to call functions/methods?
The best and most robust technique is to use a dictionary that maps strings to function objects, as described in this article:
why-isn-t-there-a-switch-or-case-statement-in-python
Alternative solutions include using locals or eval to resolve the function name:
def myFunc(): print "hello" fname = "myFunc" f = locals()[fname] f() f = eval(fname) f()
These are slower than using a custom dictionary, and also more dangerous. The locals approach makes it possible to call any function in the local scope, while eval makes it possible to execute arbitrary code. Only use these if you know exactly what you’re doing.
CATEGORY: programming
|
http://effbot.org/pyfaq/how-do-i-use-strings-to-call-functions-methods.htm
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
I was introduced to the javax.comm package of classes when I discovered they were used in the development kit for the Java Ring. (For details on javax.comm, see Rinaldo Di Giorgio's Java Developer column in the May issue of JavaWorld: "Java gets serial support with the new javax.comm package.") During my mad rush at JavaOne to get a program into my ring, I ran into a variety of problems, not the least of which was communicating with the ring. I downloaded the distribution from the Java Developer Connection and tried unsuccessfully to use it to talk to the Java Ring. Later, I discovered the problem with my ring: I didn't have Dallas Semiconductor's legacy APIs installed correctly. With the ring working, I basically forgot about the communications package. That is, until one weekend about a month ago, which is the starting point for this story..
On this special weekend not too long ago, I decided to bring the PDP-8 back to life, if only to relive those precious early memories and to show my daughter just how good she has it with her "measley old 133-MHz Pentium."
Reviving one classic by simulating another
To begin my revival effort, I had to get a program into the PDP-8. On the PDP-8, this is achieved by following a three-step process:
Using the front-panel switches, the user "keys" a short program into the magnetic core memory. This program is called the RIM Loader, and its purpose is to load another program from paper tape that is in Read-in-Mode, or RIM, format.
RIM Loader loads the paper tape in RIM format. This tape contains a program called a BIN Loader, which can load programs from paper tape in binary (BIN) format.
- Finally, you run BIN Loader to load the program you really want, which is on a paper tape in BIN format. Whew!
After going through these three steps, the program you want to run is stored in core memory. All the user needs to do then is set the starting address and tell the machine to "go."
In my effort to revive the machine, Step 1 was no problem, but Step 2 involved the use of the paper-tape reader in the Teletype -- and I didn't have a Teletype. Of course, I did have my desktop computer, so the logical step was to simulate a paper tape reader on my desktop.
From a logical and programming standpoint, simulating a paper-tape reader is trivial. You simply read a file that contains the data from the "tape," send it out to a serial port at 110 baud (yes, only 10 characters per second), until you have exhausted the file. I could write a program in C on my Solaris system or my FreeBSD system in about 10 minutes that could do this -- but, remember, I was on a Windows 95 system, not a Unix system.
From bad to ugly and back again
I knew I could easily write this program in C, so that was my language of choice. Bad choice. I brought up my copy of Visual C++ 5.0 and whipped out a simple program called sendtape.c that called
open() on the communications port. I tried to set it into RAW mode (the mode in Unix where the operating system doesn't try to interpret anything on the serial port as user input) and then tried to compile it. Whoops, no
ioctl() function or
tty functions -- nada, zip, zilch!
No problemo, I thought to myself, "I've got the whole Microsoft Software Developer's Network library on CD with my C compiler; I will do a quick search on the keywords 'COM port'."
The search turned up many references to the Microsoft Component Object Model (also called COM), and also references to MSComm. MSComm is a C++ class that Microsoft supplies to talk to the serial ports. I looked at the examples and was appalled at how much code it would take to do such a simple thing as write bytes to the serial port at 110 baud. All I wanted to do was open the darned serial port, set its baud rate, and stuff a few bytes down it -- not create a new class of serial communications-enhanced applications!
Sitting in front of my monitor was the Blue Dot receptor for my Java Ring, and I thought to myself, "Aha! The folks at Dallas Semiconductor have figured out how to talk to a serial port on the PC. Let's see what they do." After looking through the company's source code for Win32, it was clear that talking to serial ports was not going to be a simple task.
Java to the rescue
At this point in my weekend, I was thinking perhaps I'd drag one of my Unix machines to the lab in order to code the program on it instead of using what I already had. Then I remembered my experience with the Java Ring and the java.comm package from Sun. I decided to pursue that avenue instead.
What does java.comm provide?
The Java Communications API -- or java.comm -- provides a platform-independent method for accessing serial and parallel ports from Java. As with other Java APIs such as JFC, JDBC, and Java 3D, a certain level of indirection is forced on the programmer to isolate the platform's idea of "what a serial port is" from the programming model. In the case of the javax.comm design, items like device names, which vary from platform to platform, are never used directly. The three interfaces of the API provide platform-independent access to serial and parallel ports. These interfaces provide method calls to list the available communication ports, control shared and exclusive access to ports, and control specific port features such as baud rate, parity generation, and flow control.
When I saw the example SimpleWrite.java in the documentation, and compared its 40 lines of code to the 150 to 200 lines of code I was looking at writing in C, I knew the solution was at hand.
The high-level abstraction for this package is the class
javax.comm.CommPort. The
CommPort class defines the kinds of things you would typically do with a port, which includes getting
InputStream and
OutputStream objects that are the I/O channels for the port. The
CommPort class also includes methods for controlling buffer sizes and adjusting how input is handled. Since I knew these classes were supporting the Dallas Semiconductor One-Wire protocol (a protocol that involved dynamic changes in baud rate, and complete transparency to the bytes being transferred), I knew the javax.comm API had to be flexible. What came as a pleasant suprise was how tight the classes were: They had just enough flexibility to get the job done and no more. There was little to no unnecessary bloatware in the form of "convenience methods" or support of modem protocols like Kermit or xmodem.
A companion class to
CommPort is the
javax.comm.CommPortIdentifier class. This class abstracts the relationship between how a port is named on a particular system (that is, "/dev/ttya" on Unix systems, and "COM1" on Windows systems) and how ports are discovered. The static method
getCommPortIdentifiers will list all known communication ports on the system; furthermore, you can add your own port names for pseudo communication ports using the
addPortName method.
The
CommPort class is actually abstract, and what you get back from an invocation of
openPort in the
CommPortIdentifier is a subclass of
CommPort that is either
ParallelPort or
SerialPort. These two subclasses each have additional methods that let you control the port itself.
The power of Java
You can argue about the reality of "write once, run anywhere" all you want, but I will tell you from experience that for single- threaded or even simple multithreaded non-GUI applications, Java is there. Specifically, if you want to write a program that runs on Unix systems, Win32, and Mac systems, and can access the serial port, then Java is the only solution today.
The benefit here is that fewer resources are required to maintain code that runs on a large number of platforms -- and this reduces cost.
A number of applications share a requirement to have pretty low-level access to the serial port. The term low-level in this context means that a program has access to interfaces that allow it to change modes on-the-fly and directly sample and change the states of the hardware flow-control pins. Besides my PDP-8 project, Dallas Semiconductor needed to use its Blue Dot interfaces on serial ports to talk to the iButton with Java. In addition, the makers of microprocessors have evaluation boards that use a serial port for communications and program loading. All of these applications can now be completely, and portably, written in Java -- a pretty powerful statement.
All of this power to control the parallel and serial ports of the host machine comes from the javax.comm library. Giving Java programmers access to the ports opens up an entirely new set of applications that target embedded systems. In my case, it gave me the ability to write my TTY paper-tape reader emulator completely in Java.
How do you get to play with this stuff?
To get a copy of the latest javax.comm distribution, first you need to sign up as a developer on the Java Developer Connection (JDC) if you haven't done so already. (See Resources.) JDC is free, and as a member you will get early access to Java classes that will eventually be part of the final product.
Go to the Java Communications API section and download the latest javax.comm archive file. Unpack the file and install the shared libraries (yes, the Java virtual machine needs native code to talk to the ports -- fortunately for you, you don't have to write it), and install the comm.jar file. Finally, add the comm.jar file to your
CLASSPATH variable.
Once the comm.jar file is stored in the lib directory of your Java installation, and the win32comm.dll is stored in the bin directory of your Java installation, you can compile and run all the examples that come with the download. I encourage you to look them over as there is lots of good information nestled in with the source code.
Where does this leave the PDP-8?
So, what's happened with the PDP-8? I thought you'd never ask! After reading the README document that came with the javax.comm distribution, then scanning the JavaDocs for the javax.comm package, I put together an application class called
SendTape. This class simulates a paper-tape reader by opening the serial port and stuffing bytes over it at 110 baud. The code for this class is shown here:
import javax.comm.*; import java.io.*; public class SendTape { static final int LEADER = 0; static final int COLLECT_ADDR = 1; static final int COLLECT_DATA = 2; static final int COLLECT_DATA2 = 3; /* This array holds a copy of the BIN format loader */ static byte binloader[] = { (byte) 0x80,(byte) 0x80,(byte) 0x80,(byte) 0x80, ... (byte) 0x80,(byte) 0x80, };
The code fragment above is the first part of the
SendTape class. This class begins by implicitly importing all classes in the javax.comm package and the java.io packages. The
SendTape class then defines some constants and pre-initializes a byte array to contain the BIN Loader program I mentioned earlier. I included the BIN Loader because it is always needed when initializing the memory of the PDP-8 and I kept losing track of where I had last stored the file containing its image in RIM format. With this crucial paper tape image embedded in the class in this way, I always have the ability to load it with this class.
/** * This method runs a mini-state machine that gives * a useful human readable output of what is happening * with the download. */ static int newState(int oldState, byte b) { ... }
Following the initialization, you have the code for the method
newState, shown above, that tracks the contents of the paper tape (whether it is address information or programming information). The method above also prints out a message for each location of memory on the PDP-8 that is initialized.
Next you have the
main method, which is shown below; it opens the file and reads it in. Then the code opens the serial port and sets its communication parameters.
public
|
https://www.javaworld.com/article/2076755/java-se/opening-up-new-ports-to-java-with-javax-comm.amp.html
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
This C++ Program implements Shell Sort Algorithm.
Shell sort is a sorting algorithm. It is an in-place comparison sort and one of the oldest sorting algorithm.
Shell sort is a generalization of insertion sort that allows the exchange of items that are far apart. Shell sort is not stable sort. It takes O(1) extra space. The worst case time complexity of shell sort depends on the increment sequence.
Shell sort steps are :
1. Compare elements that are far apart.
2. Compare elements that are less far apart. Narrower array.
3. Do this repeatedly, reach to a point where compare adjancent elements.
4. Now the elements will be sufficiently sorted that the running time of the final stage will be closer to O(N).
It is also called diminishing increment sort.
The program has an input array of size 10 initialized with 10 values. This returns the array in non decreasing order using Shell Sort algorithm.
PROGRAM:
#include <iostream> using namespace std; int main(void) { int array[5]={4,5,2,3,6},i1=0; int i, j, increment, temp,number_of_elements=5; for(increment = number_of_elements/2;increment > 0; increment /= 2) { for(i = increment; i<number_of_elements; i++) { temp = array[i]; for(j = i; j >= increment ;j-=increment) { if(temp < array[j-increment]) { array[j] = array[j-increment]; } else { break; } } array[j] = temp; } } cout<<"After Sorting:"; for(i1=0;i1<5;i1++) {cout<<array[i1]; } }
|
http://proprogramming.org/shell-sort-in-c/
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Base class for highlight rules. More...
#include <highlightrule.h>
Base class for highlight rules.
This abstracts from the actual implementation for matching.
Creates a rule for the given element (Although each rule can concern more than one program element, we provide only this convenience constructor with only one name: if the rule concerns more than one element one can use addElem method)
Adds an element name to the list of this rule.
Implemented in srchilite::RegexHighlightRule.
Performs replacement of references in this rule.
Implemented in srchilite::RegexHighlightRule.
Try to match this rule against the passed string (implemented by calling the pure virtual function tryToMatch below).
The passed token is assumed to be reset (i.e., no existing matching information is stored in it when passing it to this method).
Try to match this rule against the passed string.
Implemented in srchilite::RegexHighlightRule.
|
http://www.gnu.org/software/src-highlite/api/classsrchilite_1_1HighlightRule.html
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Recently, I was asked to whip up a WPF code sample that mimicked how Expression Design uses Alpha Blending to merge two images together (specifically, the Darken and Lighten blend modes). Instead of pre-processing the image in Expression Design, the developers wanted a programmable, real-time way to blend images in their application.
Since the application was using WPF 4.0, a pixel shader was a natural fit for this problem. There have a been a number of great blog posts about pixel shaders, HLSL, and WPF, so I won’t go into detail about the inner workings. The posts below are the main ones I used to get an understanding how to solve this problem.
- A Series on GPU-based Effects for WPF
- Advanced Alpha Blending with HLSL
- Photoshop-like Alpha Blending
- Premultiplied alpha
While the above articles showed everything needed to get a working sample up and running, there was no overarching sample that brought everything together. Building a working WPF application that shows an image with a pixel shader effect tied to it is relatively straightforward. Adding a second image parameter to the pixel shader and an opacity factor is also very simple given the level of pixel shader support in WPF. One thing that you need to keep in mind is the fact that WPF uses pre-multiplied alpha values for optimization purposes. That affects the math that you need to do in the pixel shader.
My initial attempt worked great when the images were the same size, however when the images were different sizes, the second image was resized to the size of the base image. This again is an optimization done by WPF, but definitely doesn’t give the results we would want. Greg Schechter’s last blog post in his series on GPU-based effects in WPF explains this in detail. To make the ViewBox technique work with absolute X, Y positioning coordinates, there is some sizing math that needs to be done. Since I try to keep as little code behind as possible in my views, I wrote the converter below to do all of this work for me.
public class ViewboxRectConverter : IMultiValueConverter { public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture) { if ((values == null) || (values.Length < 4)) { throw new InvalidOperationException( string.Format( CultureInfo.InvariantCulture, "Invalid number of parameters. Found: {0} Minimum Required: 4", values.Length)); } double baseImageWidth = values[0] == null ? 0.0d : (double)values[0]; double baseImageHeight = values[1] == null ? 0.0d : (double)values[1]; int blendImageWidth = (values[2] == null) ? 1 : (int)values[2]; int blendImageHeight = (values[3] == null) ? 1 : (int)values[3]; // Just a shortcut to save some if statements. double x = (values.Length > 4) ? values[4] as double? ?? 0.0d : 0.0d; double y = (values.Length > 5) ? values[5] as double? ?? 0.0d : 0.0d; double scaleX = baseImageWidth / blendImageWidth; double scaleY = baseImageHeight / blendImageHeight; return new Rect((x / blendImageWidth) * -1, (y / blendImageHeight) * -1, scaleX, scaleY); } public object[] ConvertBack( object value, Type[] targetTypes, object parameter, CultureInfo culture) { throw new NotImplementedException(); }
The XAML usage is as follows.
<ImageBrush x: <ImageBrush.Viewbox> <MultiBinding Converter="{StaticResource rectConverter}"> <Binding RelativeSource="{RelativeSource Mode=FindAncestor, AncestorType=Image}" Path="ActualWidth" /> <Binding RelativeSource="{RelativeSource Mode=FindAncestor, AncestorType=Image}" Path="ActualHeight" /> <Binding RelativeSource="{RelativeSource Mode=Self}" Path="(ImageBrush.ImageSource).(BitmapSource.PixelWidth)" /> <Binding RelativeSource="{RelativeSource Mode=Self}" Path="(ImageBrush.ImageSource).(BitmapSource.PixelHeight)" /> <Binding ElementName="xSlider" Path="Value" /> <Binding ElementName="ySlider" Path="Value" /> </MultiBinding> </ImageBrush.Viewbox> </ImageBrush>
You might notice that I am binding to the actual width and height of the Image and ImageBrush respectively. This is to allow our converter to update appropriately when either image changes in our view. Binding to the ImageSource property doesn’t quite work the same as other bindings, so I took the path of least resistance.
You can download the sample application here. It contains free sample images from various sources on the web. I have resized all of the base images to 1024 x 768 and 96 ppi. If you want to add your own, just make sure that it is 96 ppi, or WPF will resize it automatically.
To make this sample application build, you will need to install the Microsoft DirectX SDK. I used the June 2010 DirectX SDK which is available here, but as long as you have a version that can compile Pixel Shader 2.0 FX files, it should be fine. Also, to make my sample easier for me to test, there are two pre-build events that compile the pixel shaders every time you build. My install path may be different than yours, so if you get errors during a build, check the path to the fxc.exe and make sure it is pointing to your install of the DirectX SDK.
There are many cool things that can be done with pixel shaders and this basic setup should allow you to add your own to try out. For a great set of shaders, check out the WPF Pixel Shader Effects Library on CodePlex.
|
https://blogs.msdn.microsoft.com/atoakley/2011/11/17/blending-two-images-in-real-time/
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
I am writing a few programs in C. I am stuck. i am not sure how to use strcmp or strncmp. i cant find any websites that can help and my book does not give any good examples. This is what i have to do.
Frist..... Write a program that inputs a line of text with function gets into char array s[100]. Output the line in uppercase letters and in lowercase letters.
Second....Write a program that uses function strcmp to compare two strings input by the user. The program should state whether the first string is less than, equal to or greater than the second string.
Third..... Write a program that uses function strncmp to compare two strings input by the user. The program should input the number of characters to be compared. The program should state whether the first string is less than, equal to or greater than the second string.
These 3 programs should be combined into 1 program.
I am stuck and here is what i have so far. I have completed the first program but i am having trouble with the second and third program. Can you help me with? or maybe steer me in the right direction ? Thanks I appreciate it.
Code:
#include <stdio.h>
#include <ctype.h>
# define SIZE 80
int main()
{
int i;
char s[SIZE];
/* prompt user to enter line of text */
puts( "Enter line of text:\n" );
gets(s);
/* use gets to display sentence */
printf("Upper case lines:\n");
for (i=0; i < strlen(s); i++)
printf("%c", toupper(s[r]));
printf("\n");
printf("Lower case line:\n");
for(i=0; i < strlen(s); i++)
printf("%c", tolower(s[r])):
print("\n");
char s1{80};
char s2{80};
int x;
|
https://cboard.cprogramming.com/c-programming/96655-unsure-how-use-functions-strcmp-strncmp-printable-thread.html
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
libs/multi_index/example/random_access.cpp
/* Boost.MultiIndex example of use of random access indices. * * Copyright 2003-2008 Joaquin M Lopez Munoz. * Distributed under the Boost Software License, Version 1.0. * (See accompanying file LICENSE_1_0.txt or copy at *) * * See for library home page. */ #if !defined(NDEBUG) #define BOOST_MULTI_INDEX_ENABLE_INVARIANT_CHECKING #define BOOST_MULTI_INDEX_ENABLE_SAFE_MODE #endif #include <boost/multi_index_container.hpp> #include <boost/multi_index/identity.hpp> #include <boost/multi_index/ordered_index.hpp> #include <boost/multi_index/random_access_index.hpp> #include <boost/tokenizer.hpp> #include <algorithm> #include <iostream> #include <iterator> #include <string> using boost::multi_index_container; using namespace boost::multi_index; /* text_container holds words as inserted and also keep them indexed * by dictionary order. */ typedef multi_index_container< std::string, indexed_by< random_access<>, ordered_non_unique<identity<std::string> > > > text_container; /* ordered index */ typedef nth_index<text_container,1>::type ordered_text; /* Helper function for obtaining the position of an element in the * container. */ template<typename IndexIterator> text_container::size_type text_position( const text_container& tc,IndexIterator it) { /* project to the base index and calculate offset from begin() */ return project<0>(tc,it)-tc.begin(); } typedef boost::tokenizer<boost::char_separator<char> > text_tokenizer; int main() { std::string text= "chair,."; /* feed the text into the container */ text_container tc; tc.reserve(text.size()); /* makes insertion faster */ text_tokenizer tok(text,boost::char_separator<char>(" \t\n.,;:!?'\"-")); std::copy(tok.begin(),tok.end(),std::back_inserter(tc)); std::cout<<"enter a position (0-"<<tc.size()-1<<"):"; text_container::size_type pos=tc.size(); std::cin>>pos; if(pos>=tc.size()){ std::cout<<"out of bounds"<<std::endl; } else{ std::cout<<"the word \""<<tc[pos]<<"\" appears at position(s): "; std::pair<ordered_text::iterator,ordered_text::iterator> p= get<1>(tc).equal_range(tc[pos]); while(p.first!=p.second){ std::cout<<text_position(tc,p.first++)<<" "; } std::cout<<std::endl; } return 0; }
|
http://www.boost.org/doc/libs/1_45_0/libs/multi_index/example/random_access.cpp
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Implementing Nested Functions in C#
If you program in more than one language, sometimes you want to use an idiom from one language in the other, where the idiom doesn't exist. So, you have to invent it for that language.
These days I write very short, well-named methods with behaviors spread over many classes. However, in the 1990s I programmed in Delphi (Object Pascal) and wrote much longer functions, sometimes using nested functions. Nested functions are functions defined within functions. They can be useful for compartmentalizing behavior that is invoked many times and at different places within a function. As well, they can be useful for naming a block of behavior within a function—even short functions.
This article demonstrates how to implement nested functions for C#, and it might teach you a few things about C# in general. For the purposes of the demonstration, nested functions in C# have to behave like any other function to be useful. Therefore, they must have non-void return types, parameters, and recursion. The remaining sections demonstrate how you can effectively support all three of these elements. (Note: Some of the code isn't pretty, but it does work.)
Implementing Nested Functions Using Anonymous Delegates
To implement nested functions with parameters and return types, you need to know about delegates and another relatively new .NET capability, anonymous methods. Delegates and multicast delegates in C# (and .NET) add some additional safety nets and capabilities, but delegate really is just a fancy name for function pointer and event handler.
To begin, I picked a well-known behavior: calculating n! or n-factorial. The factorial algorithm takes a positive number n and returns 1*2*3*...n. For example, n! where n is 5 is 1*2*3*4*5 or 120. To map such a function by using a delegate signature, you can define a delegate that takes a long and returns a long, as follows:
public delegate long FactorialDelegate(long n);
The preceding defines a method signature, which permits you to define delegate instances that refer to methods matching this signature. The next thing you need to do is define an anonymous method inline where you define and initialize an instance of FactorialDelegate.
Defining an anonymous method
Think of anonymous methods as similar to C++ inline methods. The real difference is that anonymous methods are function headers with code blocks but without names. (Anonymous methods are used mostly as values that are assigned to events; that is, they are inline event handlers.)
To define an anonymous method, simply define a method whose header exactly matches the delegate signature but without the word delegate. Following on the previous code snippet, it would go like this:
public static long Factorial(long n) { if (n < 1) throw new ArgumentOutOfRangeException( "n", "argument must be greater than 0"); long result = 1; for (int i = 1; i <= n; i++) result *= i; return result; }
To convert the function Factorial to an anonymous method, you remove everything from the header except the parameter list (long n), add the word delegate, and leave the method body in place. The result looks like this:
delegate (long n) { if (n < 1) throw new ArgumentOutOfRangeException( "n", "argument must be greater than 0"); long result = 1; for (int i = 1; i <= n; i++) result *= i; return result; }
Now, to nest this anonymous method, you can delcare an instance of FactorialDelegate and assign it the anonymous method above, as follows:
FactorialDelegate Factorial = delegate(long n) { if (n < 1) throw new ArgumentOutOfRangeException( "n", "argument must be greater than 0"); long result = 1; for (int i = 1; i <= n; i++) result *= i; return result; };
In fact, what you now have is a delegate instance that can be defined in an outer method—a defacto nested function—and can be invoked just like a function. Listing 1 shows the previously defined Factorial anonymous method instance nested in the Main function of a console application.
Listing 1: A Console Application Demonstrating Nested Functions
using System; using System.Collections.Generic; using System.Diagnostics; using System.Reflection; using System.Text; namespace NestedFunction { class Program { public delegate long FactorialDelegate(long n); static void Main(string[] args) { // version 1 FactorialDelegate Factorial = delegate(long n) { if (n < 1) throw new ArgumentOutOfRangeException( "n", "argument must be greater than 0"); long result = 1; for (int i = 1; i <= n; i++) result *= i; return result; }; Console.WriteLine(Factorial {0} is {1}", 5, Factorial1(5)); Console.ReadLine(); } } }
Implementing Recursion
Now, you have seen a defacto nested function, but what about recursion? Unfortunately, Factorial exists after it is defined. Hence, you cannot implement n! bu using recursion. The recursive version using the very first Factorial algorithm would look like this:
public static long Factorial(long n) { return n > 1 ? n * Factorial(n - 1) : n; }
My computer science professors might argue the implementation is not complete unless you can implement the entire idiom. (Of course, in business you would just do without recursion.) However, it is possible to implement recursion with nested functions. It isn't pretty, but it does work.
Listing 1 is a de facto nested function, but as I mentioned you cannot recurse by calling the variable Factorial. However, you can recurse by using reflection because even though the name Factorial doesn't exist in the anonymous method, the function does exist on the stack. Hence, all you need to do is pluck the Factorial method off the stack and call stack using reflection. Listing 2 demonstrates how you can implement nested functions using recursion.
Listing 2: A Nested, Recursive Function Using Reflection
FactorialDelegate Factorial = delegate(long n) { if(n<1) throw new ArgumentOutOfRangeException( "n", "argument must be greater than 0"); MethodBase method = new StackTrace().GetFrame(0).GetMethod(); return n > 1 ? n * (long)method.Invoke(null, new object[] { n - 1 }) : n; }
The first half of the nested function in Listing 2 is the same as in Listing 1. To recurse, you need to pluck the method object off the stack; it will always be the first method entry in the stack frame. Because you also know the signature long—anonymous(long n), you can reliably invoke the method by passing the correct argument type and casting the return type. (Told you it wasn't pretty.)
Broad and Expressive Language
Because language—whether speaking or programming—limits what you can conceive, I prefer to have as broad and expressive a language as possible, one that doesn't require me to use advanced idioms but makes them available. This is a tough balancing act for any language vendor to manage. In Microsoft's case, whether they actually support nested functions in C# or not is up to them. Of course, they already exist technically.
By using the demonstration in this article, you can implement nested functions. You won't need to use them often, but now you can if need be. For those who aren't old Pascal programmers or aren't interested in nested functions, notice that I threw in reflection, delegates, anonymous methods, how to get data off the callstack, recursion, and how to program by contract using exceptions.
About the Author
Paul Kimmel is the VB Today columnist for and has written several books on object-oriented programming, including Visual Basic .NET Power Coding (Addison-Wesley) and UML Demystified (McGraw-Hill/Osborne). He is the president and co-founder of the Greater Lansing Area Users Group for .NET () and a Microsoft Visual Developer MVP.
|
http://www.developer.com/net/csharp/article.php/3638541/Implementing-Nested-Functions-in-C.htm
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
hello,
I am reading in a file and storing it into an array,
and i am trying to find which letter appears most frequently in the array.
how can this be accomplished?
this is what i have so far. oh ya, and the file has spaces , but when i read contents of array there are no spaces.
Code:#include <iostream> #include <fstream> #include <string> #include <iomanip> using namespace std; int main() { string line; // also tried char line[200]; int count = 0; char reply; ifstream infile; string myfile; ifstream inputFile; inputFile.open ("encrypted.txt"); if(!inputFile) { cerr << "Can't open input file " << myfile << endl; cout << "Press enter to continue..."; exit(1); } while (inputFile.peek() != EOF) { inputFile >> line; cout << line; } cin >> reply; return 0; }
|
https://cboard.cprogramming.com/cplusplus-programming/111771-array-problem.html
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
I am trying to create a .csv file of UUID numbers. I see how to make a single UUID number in python but can't get the correct syntax to make 50 numbers and save them to a .csv file. I've googled and found many ways to create .csv files and how to use For loop but none seem to pertain to this particular application. Thank you for any help.
Just combine a csv writer with an uuid generator
import csv import uuid with open('uuids.csv', 'w') as csvfile: uuidwriter = csv.writer(csvfile) for i in range(50): uuidwriter.writerow([uuid.uuid1()])
|
https://codedump.io/share/hwVfBLuAHqmx/1/saving-a-list-of-uuid-numbers-to-a-csv-file-in-python
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
I'm learning python and i loop like this the json converted to dictionary: it works but is this the correct method? Thank you :)
import json
output_file = open('output.json').read()
output_json = json.loads(output_file)
for i in output_json:
print i
for k in output_json[i]:
print k, output_json[i][k]
print output_json['webm']['audio']
print output_json['h264']['video']
print output_json['ogg']
{
"webm":{
"video": "libvp8",
"audio": "libvorbis"
},
"h264": {
"video": "libx264",
"audio": "libfaac"
},
"ogg": {
"video": "libtheora",
"audio": "libvorbis"
}
}
> h264
audio libfaac video libx264
ogg
> audio libvorbis video libtheora webm
> audio libvorbis video libvp8 libvorbis
> libx264 {u'audio': u'libvorbis',
> u'video': u'libtheora'}
That seems generally fine.
There's no need to first read the file, then use loads. You can just use load directly.
output_json = json.load(open('/tmp/output.json'))
Using i and k isn't correct for this. They should generally be used only for an integer loop counter. In this case they're keys, so something more appropriate would be better. Perhaps rename
i as
container and
k as
stream? Something that communicate more information will be easier to read and maintain.
You can use
output_json.iteritems() to iterate over both the key and the value at the same time.
for majorkey, subdict in output_json.iteritems(): print majorkey for subkey, value in subdict.iteritems(): print subkey, value
|
https://codedump.io/share/m8GR7FOTWGwc/1/python-read-json-and-loop-dictionary
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
getpwuid()
Get information about the user with a given ID
Synopsis:
#include <sys/types.h> #include <pwd.h> struct passwd* getpwuid( uid_t uid );
Since:
BlackBerry 10.0.0
Arguments:
- uid
- The userid whose entry you want to find.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The getpwuid() function gets information about user uid. This function uses a static buffer that's overwritten by each call.
The getpwent(), getpwnam(), and getpwuid() functions share the same static buffer.
Returns:
A pointer to an object of type struct passwd containing an entry from the group database with a matching uid, or NULL if an error occurred or the function couldn't find a matching entry.
Examples:
/* * Print password info on the current user. */ #include <stdio.h> #include <unistd.h> #include <stdlib.h> #include <sys/types.h> #include <pwd.h> int main( void ) { struct passwd* pw; if( ( pw = getpwuid( getuid() ) ) == NULL ) { fprintf( stderr, "getpwuid: no password entry
|
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/g/getpwuid.html
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
To accompany the last post – which raised some questions around when and where to call Dispose() on objects created or accessed via AutoCAD’s .NET API – today we’re going to look at a few concrete examples.
Thanks to Danny P for not only requesting some examples but also presenting some concrete areas he wasn’t fully clear on. Let’s start by looking at those (and feel free to compare the responses I’ve put below with the ones I made in direct response to Danny’s original comment):
Within a transaction where something is added to the database, some new objects (Xrecords, dictionaries) can be relatively simple and might only need a Using block. Conversely, complex objects (dimension styles, block definitions with attributes) have many properties and resulting code with potential points of failure. How would you deal with that in a Try...Catch block + Using block to handle exceptions at the object level and the transaction level?
The short answer is that anything managed by a transaction – whether newly created and added to the transaction or opened by it – will be disposed of automatically by the transaction. So in general you shouldn’t even need a using block around such entities.
For more complex scenarios the same is also true, although if you’re creating a database-resident definition object, for instance, then you’d need to make sure that it also gets added to the transaction.
In the case where a new object is “owned” by another new, more complex object (such as a FontDescriptor being owned by a TextStyleTableRecord via its Font property), then you can generally assume that just by setting the property on the owner you’ve absolved yourself of needing to Dispose() of it yourself. The example I’ve used isn’t a great one, admittedly – FontDescriptors don’t actually require disposal – but hopefully you get the point I’m trying to make.
A second thing that I found was casting Entities to specific object types, for example iterating through block definitions looking for a line, and casting that Entity to a Line object. Do I need to dispose of the Line because I didn't use line = transaction.GetObject(...), but rather line = TryCast(entity, Line)? In other words, is the new object a "transaction-managed" object, or do I need to dispose of it, or is it not a new object to the database and I don't need to worry about it?
This is an interesting point. In this situation, the object reference in the Line variable is not actually created by the cast (your TryCast() call in VB.NET). It was created by the call to Transaction.GetObject(): all the cast does is attempt to coerce the reference into a variable of another type (after it has first checked that the reference object also support the Line protocol). The object reference between the Entity and Line variables are one and the same – with the same, underlying, unmanaged pointer – it’s just they’re conveniently held in variables of different types that allow compile time features such as Intellisense and static type checking (even if the TryCast() could conceivably fail at runtime, if the object didn’t happen to be a Line). So as long as the original object reference is managed by a transaction – and in this case you’ve received it from the transaction – then all is well.
Now let’s take a look at a couple of examples of where you might add calls to Dispose() in new code. I started by looking for previous posts with “theoretically” problematic code, and found this one.
Here’s the code in question:
// Create the mirror line and the transformation matrix
Line3d ml = new Line3d(pt1, pt2);
MirrorEntity(doc.Database, per.ObjectId, ml, false);
Basically we’re not calling Dispose() on the temporary Line3d object after the MirrorEntity() method has completed. We could simply adjust the code as follows to have the Line3d get disposed at the end of the using block:
// Create the mirror line and the transformation matrix
using (Line3d ml = new Line3d(pt1, pt2))
{
MirrorEntity(doc.Database, per.ObjectId, ml, false);
}
Again, this is not a change that you absolutely need to go back and perform in your code – I have every expectation that this code will continue working properly in future versions of AutoCAD without Line3d being disposed of – but we’re really talking about avoiding problems that are theoretically possible.
Now let’s take a look at code in another post that uses the potentially problematic Brep API identified in the last post. (Both the examples I’ve linked to happen to have been provided by other people – that’s honestly not a deliberate choice, they just happen to have been the ones I found that best demonstrate the issue. :-)
When I ran this code again from the debugger, stepped through it and then closed AutoCAD, I actually did receive an exception that seems related to this issue:
It’s possible this isn’t due to this problem – which is by its nature intermittent and tricky to catch – but the fact it’s in the destructor of an AcGe object seems a giant red flag.
Here’s a revamped version of the C# code. I ended up going a bit crazy on introducing the use of var instead of specific types (although you shouldn’t worry – this code is still perfectly typesafe as all types are inferred at design time), which hopefully doesn’t distract you from the more important changes (which were to introduce a manual Dispose() and a number of using blocks).
using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.BoundaryRepresentation;
using AcBr = Autodesk.AutoCAD.BoundaryRepresentation;
using Autodesk.AutoCAD.Colors;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.EditorInput;
using Autodesk.AutoCAD.Geometry;
using AcGe = Autodesk.AutoCAD.Geometry;
using Autodesk.AutoCAD.Runtime;
using System.Collections.Generic;
using System;
// Not mandatory, but improves loading performance
[assembly: CommandClass(typeof(HoleFeature.MyCommands))]
namespace HoleFeature
{
public class MyCommands
{
[CommandMethod("GETHOLES")]
public void GetHoles()
{
var doc = Application.DocumentManager.MdiActiveDocument;
var db = doc.Database;
var ed = doc.Editor;
var peo = new PromptEntityOptions("\nSelect a 3D solid: ");
peo.SetRejectMessage("\nMust be a 3D solid.");
peo.AddAllowedClass(typeof(Solid3d), true);
var per = ed.GetEntity(peo);
if (per.Status != PromptStatus.OK)
return;
var tr = db.TransactionManager.StartTransaction();
using (tr)
{
var solid =
tr.GetObject(per.ObjectId, OpenMode.ForWrite) as Solid3d;
var ids = new ObjectId[] { solid.ObjectId };
var path =
new FullSubentityPath(
ids,
new SubentityId(SubentityType.Null, IntPtr.Zero)
);
// For storing SubentityIds of cylinderical faces
var subentIds = new List<SubentityId>();
using (var brep = new Brep(path))
{
foreach (var face in brep.Faces)
{
using (var surf = face.Surface)
{
var ebSurf = surf as ExternalBoundedSurface;
// We are only looking only cylinders
if (ebSurf != null && ebSurf.IsCylinder)
{
var cyl = ebSurf.BaseSurface as Cylinder;
// And fully closed cylinders
if (cyl != null && cyl.IsClosed())
{
// Get normal and point on surface
var normal = new Vector3d();
var pt = new Point3d();
GetNormalAndPoint(surf, ref normal, ref pt);
if (IsHole(face, normal, pt, cyl))
{
subentIds.Add(face.SubentityPath.SubentId);
}
}
}
}
face.Dispose();
}
}
// Assign red color to hole features
if (subentIds.Count > 0)
{
short colorIdx = 1;
AssignColor(solid, subentIds, colorIdx);
}
tr.Commit();
}
}
// Get normal and point at mid U and V parameters
void GetNormalAndPoint(
AcGe.Surface surf, ref Vector3d normal, ref Point3d pt
)
{
var box = surf.GetEnvelope();
double p1 = box[0].LowerBound + box[0].Length / 2.0;
double p2 = box[1].LowerBound + box[1].Length / 2.0;
var ptParams = new Point2d(p1, p2);
var pos = new PointOnSurface(surf, ptParams);
normal = pos.GetNormal(ptParams);
pt = pos.GetPoint(ptParams);
}
// A cylinder is a hole if the normal points inwards
// and the normal after extending by radius intersects
// with axis of symmetry, the axis of symmetry is also
// extended by height of cylinder
Boolean IsHole(
AcBr.Face face, Vector3d normal, Point3d pt, Cylinder cyl
)
{
if (!face.IsOrientToSurface)
{
// Correct the normal and save back
normal = normal.Negate();
}
// Calculate another point on normal by extending the
// normal by radius of cylinder
var opt =
new Point3d(
pt.X + normal.X * cyl.Radius,
pt.Y + normal.Y * cyl.Radius,
pt.Z + normal.Z * cyl.Radius
);
// Get the cylinder's axis
var v1 = cyl.AxisOfSymmetry;
double dist = cyl.Height.Length;
// Calculate another point on axis by extending v1 by dist
var pt2 =
new Point3d(
cyl.Origin.X + v1.X * dist,
cyl.Origin.Y + v1.Y * dist,
cyl.Origin.Z + v1.Z * dist
);
// Create line segment representing the cylinder's normal
// Create line segment representing the cylinder's axis
Point3d[] intpt = null;
using (var ls1 = new LineSegment3d(pt, opt))
using (var ls2 = new LineSegment3d(cyl.Origin, pt2))
{
// Get intersection of normal and cylinder axis
intpt = ls1.IntersectWith(ls2);
}
return (intpt != null);
}
// Assign color to cylinderical surfaces which are holes
void AssignColor(
Solid3d solid, List<SubentityId> subentIds, short idx
)
{
foreach (SubentityId subentId in subentIds)
{
var col = Color.FromColorIndex(ColorMethod.ByColor, idx);
solid.SetSubentityColor(subentId, col);
}
}
}
}
One thing that’s a little peculiar with the Brep API is that you’re actually forcing a Dispose() on objects that are properties of the Brep object itself (objects that are instantiated when you access the property). That doesn’t always seem right – you would expect the using block around the Brep object to force a Dispose() on all associated objects created/exposed via its properties – but this a quirk of the Brep API that needs special attention, and a big reason the GC-driven finalisation of unmanaged objects seems to rear its ugly head more often when it’s used.
The good news is that the results of over-aggressive disposing – you should see some kind of exception at object disposal – are going to be obvious much more quickly than those of inadequate disposing, which will probably take some time to identity. So it’s generally better to dispose more often than perhaps strictly needed when using the Brep API, but to pay special attention to exceptions (if you blindly catch and ignore every exception that gets thrown, you may run the risk of missing such issues).
Hopefully looking at a few concrete examples has helped improve the understanding of this tricky area. Be sure to post a comment if you have follow-up questions regarding this topic.
|
http://through-the-interface.typepad.com/through_the_interface/2012/08/examples-of-calling-dispose-on-autocad-objects.html
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Detecting Code Indentation
The Firefox developer tools use CodeMirror as the source editor in many places (including the new WebIDE). CodeMirror provides auto-indentation when starting a new line and can convert tab presses into indents. Both of these features are frustrating however if the editor is set to insert the wrong indentation. That’s why many text editors will detect the indentation used in a file when it’s loaded into the editor. I set about to add this indentation detection to the devtools.
The first order of business was getting a good default value for new files. Every codebase and programmer has a different preferred indentation. Some use 2 spaces per indent, some use 4, and some even use tabs. I went through a rebellious 3-space indentation phase myself, which unfortunately was one of the most prolific open source months of my life (or did 3-space indents just increase my productivity??).
It can get kind of contentious, to be honest. So I thought I might back the decision up with some data. Using the GitHub gist API, I downloaded random code files from several different languages at different times throughout the day until I got 100 files of each language. Then I manually classified each file. Here’s the breakdown per language based on this limited sample size:
So, there you go. 2-spaces edges out others in Web languages, while Python is all for 4-space indents and Ruby is all for 2-space indents. At least for these 100 files. The 2 vs. 4 difference isn’t statistically significant for JavaScript with that small sample size, but is for the other languages.
As for detecting the indentation in a file, there were a few different algorithms out there. I looked at the two popular ones: greatest common divisor and minimum width, as well as two other experiments I came up with: comparing lines and neural network.
Indentation detection isn’t completely straightforward. A file that a human would classify as 2-spaces will have indent widths of all sizes as indents get nested: 4, 6, 8, 10, 12. An example of a not-straightforward one would be a 2-space file that mainly consists of a class. So function definitions would be indented by 2 spaces, but function bodies would be indented by 4. You might only have a tiny portion of the file indented by 2 spaces, and a majority by 4.
Another problem is outliers: too-long lines chopped off and indented by say 37 spaces to line up with something on the previous line. Multi-line comments royally throw things. These block comments often start with an even-indented width, and the body gets indented by one more space (at least in JavaScript). If it’s a really descriptive comment, a good portion of the file could have an indent width of say 5.
These problems are easier to solve if you can parse the file, but I wanted a language-agnostic approach.
All these algorithms were focused on determining the indentation if you were using spaces to indent. But tab indents are possible, so in all algorithms I classified a file as “tabs” if there were more tabs than space indents. All the algorithms discounted all-whitespace lines. Additionally, in the gcd and min-width algorithms, I threw out indent widths that were less than a certain percent of the file. I won’t show these parts as they were common to several of the algorithms.
The algorithms
Greatest common divisor
The greatest common divisor (gcd) is a math concept. The gcd of [4, 6, 8, 10] is 2. A file with indent widths of [4, 6, 8, 10] would also clearly be a 2-space indented file. Things go haywire when you get any outlier indents of say, 37 though. The gcd of [4, 6, 8, 37] is 1. Multi-line comments really throw this one off, so for practicality you have to throw out odd numbers. Here’s the JavaScript code for this:
We’ll see how this performs later.
Minimum width
The other common algorithm I saw was a simple one: just take the smallest indentation width you see in the file. This can also trip up a bit on multi-line comments. 1 is a common minimum indent in that case, so we have to chuck that:
Comparing lines
I thought about what I did when I manually classified a file. I noticed that when I opened a file, I would focus on a random line and scan until I hit the next indent, then I would make note of that, and do this for a few other random lines in the file. So I coded up something that mimics this procedure.
This method compares the indentation of each line with the previous line, and adds the difference to a tally. So if a line is indented by 10 spaces, and the previous by 8, one more vote would be added for 2-space indentation. The benefit to this is that a block comment could be any number of lines, but its indentation would only count twice. This means we don’t have to throw out odd-numbered widths (3-spacers rejoice!):
Neural network
I had to see how machine learning would stack up. I wish I could feed the raw data in — the indent widths of the first n lines. But I couldn’t figure out how to turn inputs like [4, 8, 10] into the continuous (non-discrete) signals that the network required (update: immediately upon writing this I thought of a way, but it didn’t perform as well). So I fed it the popularity of each width instead. Unlike the other algorithms, I didn’t throw out anything. Outlier widths and odd widths went in with the rest of them.
I trained the network on about 100 hand-classified files, separate from the test files. Here’s the classification function:
The results
I ran all the algorithms on the set of 500 total code files in JavaScript, HTML, CSS, Python, and Ruby. Here’s what percent of the files each algorithm detected correctly:
Unpatched (not removing outliers or odd widths):
% files correctly detected, out of 500 (unpatched):
gcd: 78.0%
minimum: 87.6%
compare: 95.4%
neuralnet: 94.8%
Patched (removing outliers and odd widths, where applicable):
% files correctly detected, out of 500 (patched):
gcd: 94.4%
minimum: 92.8%
compare: 96.8%
neuralnet: 94.8%
Out of this, the only statistically significant result is that the compare-lines algorithm is better than the minimum-width algorithm. That conclusion comes from a two-tailed Z test with p < 0.05. More differences could solidify if more gists were tested, however.
My summary is that once you patch the weaknesses of a particular one of these algorithms, it performs about as well as the other. At least on random files from the wild, where edge cases are few.
The neural network performed well, and could have performed better if I’d given it more samples. 100 training samples is not a lot. What’s nice about the neural network approach is you don’t have to figure out the outliers and special cases yourself. Look at this code I came up with for finding outliers, after several guess-and-check rounds:
function isOutlier(total, count) {
return (total > 40 && count < (total / 20))
|| (total > 10 && count < 3)
|| (total > 4 && count < 2);
}
Even after playing around with a bunch of different values, I haven’t found the best way to detect an outlier. The network, however, will probably figure out the best way for the data it’s trained on. It’ll learn that a single indentation of width 37 shouldn’t have much sway in the final decision. In that way, it doesn’t have the weaknesses the other algorithms have (me).
You can “see” (hopefully not even notice) the compare-lines algorithm in action in the latest Firefox release.
|
https://medium.com/firefox-developer-tools/detecting-code-indentation-eff3ed0fb56b
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Deduplication
The GraphLab Create deduplication tool ingests data in one or more SFrames and assigns an entity label to each row. Records with the same label likely correspond to the same real-world entity.
To illustrate usage of the deduplication tool, we use data about musical albums, downloaded originally from. For this example, we have extracted a random sample of about 20% of the original data, and split it into four SFrames based on genre. The preprocessed data can be downloaded (and saved to your machine) from the Turi public datasets bucket with the following code. This download is about 7MB.
import os import graphlab as gl import graphlab.aggregate as agg genres = ['rock', 'americana', 'classical', 'misc'] data = {} for g in genres: if os.path.exists('{}_albums'.format(g)): print "Loading genre '{}' from local SFrame....".format(g) data[g] = gl.load_sframe('{}_albums'.format(g)) else: print "Downloading genre '{}' from S3 bucket....".format(g) data[g] = gl.load_sframe( '{}_albums'.format(g)) data[g].save('{}_albums'.format(g))
As usual, our first step is to look at the data:
data['rock'].print_rows(5)
+---------+--------------+-----------------+-------------------------------+ | disc_id | freedbdiscid | artist_name | disc_title | +---------+--------------+-----------------+-------------------------------+ | 166 | 1075217 | Various | Mega Hits'80 S-03 | | 719 | 33670401 | Dead Can Dance | Sambatiki | | 829 | 34061313 | Alice Cooper | Anselmo Valencia Amphithea... | | 1810 | 51495699 | Kasabian | Kasabian-Ulimate Version- | | 2013 | 68222994 | Various Artists | Fear Candy14 | +---------+--------------+-----------------+-------------------------------+ +-------------+---------------+-------------+--------------+---------------+ | genre_title | disc_released | disc_tracks | disc_seconds | disc_language | +-------------+---------------+-------------+--------------+---------------+ | Rock | 1994 | 17 | 4200 | eng | | Alternative | 1999 | 1 | 453 | | | Hard Rock | 2003 | 1 | 1980 | | | Rock | 2005 | 19 | 4547 | | | Metal | 2005 | 18 | 4352 | eng | | ... | ... | ... | ... | ... | +-------------+---------------+-------------+--------------+---------------+ [23202 rows x 9 columns]
For this example, we define a distance that is a weighted sum of Euclidean distance on the album length (in seconds), weighted Jaccard distance on the artist name and album title, and Levenshtein distance on the genre. Note that if you pass a composite distance to the deduplication toolkit, there is no need to specify the 'features' parameter; the composite distance already defines the relevant features.
album_dist = [[('disc_seconds',), 'euclidean', 1], [('artist_name', 'disc_title'), 'weighted_jaccard', 4], [('genre_title',), 'levenshtein', 1]]
Grouping the data
In any dataset larger than about 10,000 records, grouping the records into smaller blocks (also known as "blocking") is critical to avoid computing the distance between all pairs of records.
In this example, we group on the number of tracks on each album ("disc_tracks"),
which means that the toolkit will first split the data into groups each of whose
members have the same number of tracks, then look for approximate matches only
within each group. As of GraphLab Create v1.4, grouping features are specified
with the
grouping_features parameter; previous versions used the standard
distance "exact" for this purpose, but this flag is no longer enabled.
Feature engineering
In the
nearest_neighbors toolkit, the
weighted_jaccard distance applies only
to dictionary-type features, but in our
album_dist we indicated that we want
to apply it to two string-type features ('artist_name' and 'disc_title'). The
deduplication toolkit does several feature engineering steps automatically so
you have less work to do manipulating the data and defining the composite
distance. In particular:
String features are cleaned by removing punctuation and extra white space, and converting all characters to lower case.
Strings are converted to dictionaries with 3-character shingling when used with dictionary-based distances (
jaccard,
weighted_jaccard,
cosine, and
dot_product).
String features specified for a single distance component are concatenated, separated by a space.
Missing values are imputed. Missing strings are imputed to be "", while missing numeric values are imputed to be the mean value for the appropriate feature within the exact match group (see previous section). Note that records with missing values in the "exact" match features are ignored in model training and assigned entity label "None".
The feature engineering that occurs within the deduplication toolkit does not alter the input data in any way.
Choosing a model
Currently the deduplication toolkit has only one model: "nearest_neighbor_deduplication", which labels a pair of records as duplicate if one record is a close neighbor of the other. To resolve the question of transitive closure---A and B are duplicates, B and C are duplicates, but A and C are not---this model constructs a similarity graph and finds the connected components. Each connected component corresponds to an entity in the final output.
In addition to the data and the distance function, the nearest neighbor
deduplication model takes two parameters. If k is specified, only the closest
k neighbors for a record are considered duplicates, while the radius
parameter indicates the maximum distance that a neighbor can be from a record to
still be considered a duplicate. The most typical usage leaves
k unspecified,
and uses a
radius that makes sense for the problem and the distance function.
m = gl.nearest_neighbor_deduplication.create(data, row_label='disc_id', grouping_features=['disc_tracks'], distance=album_dist, k=None, radius=3)
If two datasets are known to have records that match one-to-one, then setting
k=2 can be very useful. The
k parameter is also useful to get preliminary
results when we have no prior intuition about the problem or the distance
function. In addition, the top level
deduplication create method hides the
parameters, in the expectation that future versions of the toolkit will
automatically choose the best modeling solution.
m2 = gl.deduplication.create(data, row_label='disc_id', grouping_features=['disc_tracks'], distance=album_dist)
Returning to our original model, the usual GraphLab Create toolkit functions give us information about model training and access to the output entity labels.
m.summary()
Class : NearestNeighborDeduplication Schema ------ Number of input datasets : 4 Number of feature columns : 4 Number of neighbors per point (k) : None Max distance to a neighbor (radius) : 3 Number of entities : 129632 Total training time (seconds) : 268.9886 Training -------- Total training time (seconds) : 268.9886 Accessible fields : entities : Consolidated input records plus entity labels.
The model's
entities attribute contains the deduplication results. All input
data rows are appended into a single SFrame, and the column
__entity indicates
which records correspond to the same entity. Because we specified the datasets
in a dictionary, the keys of that dictionary are used to identify which SFrame
each record comes from; if the data were passed as a list this would be the
index of the list.
Aggregating records
The entity labels are not very interesting by themselves; the deduplication problem typically involves a final step of aggregating records to produce a final clean dataset with one record per entity. This can be done straightforwardly with the SFrame groupby-aggregate tool.
In this example we define an aggregator that returns the number of records belonging to the entity, the mean length of album in seconds, the shortest album title and the compilation of all constituent genre and artist names.
# add disc title length in number of characters entities = m['entities'] entities['title_length'] = entities['disc_title'].apply(lambda x: len(x)) # define the aggregation scheme and do the aggregation album_aggregator = { 'num_records': agg.COUNT, 'disc_seconds': agg.MEAN('disc_seconds'), 'genres': agg.CONCAT('genre_title'), 'artist_name': agg.CONCAT('artist_name'), 'title': agg.ARGMIN('title_length', 'disc_title') } sf_clean = entities.groupby('__entity', album_aggregator) # find the dupe IDs entity_counts = m['entities'].groupby('__entity', agg.COUNT) dupe_entities = entity_counts[entity_counts['Count'] > 1]['__entity'] # print the results for the dupes dupes = sf_clean.filter_by(dupe_entities, '__entity') dupes.print_rows(10, max_row_width=100, max_column_width=50)
+----------+------------------------------------+-------------+--------------+ | __entity | genres | num_records | disc_seconds | +----------+------------------------------------+-------------+--------------+ | 76316 | [Rock, Rock] | 2 | 3154.5 | | 16727 | [Speech, Speech] | 2 | 4770.0 | | 73607 | [Rock, Rock] | 2 | 3498.0 | | 17856 | [Jazz, Jazz] | 2 | 2817.0 | | 25833 | [Synthpop, Synthpop] | 2 | 4202.0 | | 16473 | [Country, Country] | 2 | 2055.0 | | 76742 | [Native American, Native American] | 2 | 4217.0 | | 80763 | [Country, Country] | 2 | 4520.5 | | 26116 | [Rock, Rock] | 2 | 4640.0 | | 43964 | [Jazz, Jazz] | 2 | 3357.0 | +----------+------------------------------------+-------------+--------------+ +------------------------------------------+-------------------------------------------+ | artist_name | title | +------------------------------------------+-------------------------------------------+ | [Cheap Trick, Cheap Trick] | Greatest Hits | | [Neville Jason, Neville Jason] | The Lives Of The Great Artists | | [Monster Magnet, Monster Magnet] | 4-Way Diablo | | [Patricia Barber, Patricia Barber] | Split | | [Various, Various] | Atmospheric Synthesizer Spectacular Vol.2 | | [Various, Various] | The Power Of Country | | [Paul Ortega, A. Paul Ortega-Two Worlds] | Three Worlds | | [Marvin Rainwater, Marvin Rainwater] | Classic Recordings Disc1 | | [Various, Various] | Rules Of Rock | | [Woong San, Woomg San] | Close Your Eyes | +------------------------------------------+-------------------------------------------+ [321 rows x 6 columns]
|
https://turi.com/learn/userguide/data_matching/deduplication.html
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
0
I have wrote this code but it shows me only the timing not the values, can any bady tell me where is the problem?
import time import random def procedure(): time.sleep(0.60) # measure process time t0 = time.clock() procedure() print time.clock() - t0, "seconds process time" #BUCKET SORT (up to 30) def bucket_sort(lst): bucket, bucket1, bucket2 = [], [], [] #The three empty buckets #Populating the buckets with the list elements for i in range(len(lst)): if lst[i] in range(11): bucket.append(lst[i]) elif lst[i] in range(21): bucket1.append(lst[i]) elif lst[i] in range(31): bucket2.append(lst[i]) #Prints the buckets and their contents print "Bucket:",bucket print "Bucket1:",bucket1 print "Bucket2:",bucket2 #The actual sorting bucket.sort() bucket1.sort() bucket2.sort() final_lst = bucket + bucket1 + bucket2 print "Sorted list:",final_lst
|
https://www.daniweb.com/programming/software-development/threads/486847/bucket-sort-with-timing
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
wubi-r129.exe crashes when python 2.2 is preinstalled. deleted and the process quits.
Note that the wubi GUI never starts, so no wubi logs get created.
I am happy to provide further information, but need pointing in the right direction :-)
. Is there a way of diagnosing/logging exactly what is happening on this machine?
. Are there any dependencies for unpacking wubi that may be missing from this machine?
EDIT:
The thinkpad has C:\IBMTOOLS\
Is this likely to affect the package?
None of the other machines that it worked successfully on have any python versions.
> try running without python22
I tried this, and it made no difference.
> It will require a special build in order to see more messages
Assuming that the "special" build will simply be a debug one, I tried
wubi-r117-debug.exe as that is available from the same place.
This is the error the thinkpad gets with wubi-r117-debug.exe
-------
C:\u-904>
C:\u-904>Traceback (most recent call last):
File "Z:\home\
File "Z:\home\
line 26, in ?
File "Z:\home\
line 1, in ?
File "Z:\home\
line 33, in ?
File "Z:\home\
line 1, in ?
File "Z:\home\
line 42, in ?
File "Z:\home\
line 29, in ?
File "Z:\home\
line 49, in ?
File "Z:\home\
line 47, in ?
File "Z:\home\
line 47, in ?
ImportError: No module named Util.number
C:\u-904>
-------
One of the other machines (that worked with r129) loads the GUI with
r117-debug and asks me if I want to uninstall.
I don't want to, so I stopped there, but the GUI loaded with no
traceback on the console.
A different machine (not previously installed with wubi) loads the
r117-debug GUI fine with no traceback.
As I don't want Ubuntu on that machine I stopped at that point.
If I still need to try a special build, I'll do that when you let me
know where to get it from.
Cheers,
Pete
Hmm you should have a file in the temp folder called lib/Crypto/
It would be interesting to see Util.__file__
I got a similar problem and I really don't know where to look to diagnose it.
On an Windows XP SP3 HP Pavilion machine with AMD Athlon CPU, the GUI seems to try hard to start, but it simply does not.
I attached the log file in temp folder. By the way, one interesting line of log:
04-24 10:52 DEBUG Distro: checking Ubuntu ISO D:\kubuntu-
I don't know what it means, but Wubi is started from another subdirectory (With kubuntu-
04-24 10:52 DEBUG CommonBackend: original_
I may provide additional details, if you say so.
Erdogan,
The content of .disk/info inside of your ISO seems incorrect, probably the ISO is corrupted or a partial download. That shouldn't be fatal though. The issue is different from parent, please open a different bug report
04-24 10:52 DEBUG WindowsBackend: extracting .disk\info from D:\kubuntu-
04-24 10:52 DEBUG Distro: info=ÖdʼPt”Gzyí¢ ¼k8
> Hmm you should have a file in the temp folder called lib/Crypto/
> It would be interesting to see Util.__file__
The temp folder gets immediately deleted before I can see the contents.
To try and get the chance to see what happens, I have pulled lp:wubi
and built wubizip on a separate machine.
Results on the thinkpad:
-------\
import Crypto.Util.number as NUM
File "C:\u-904\
import Crypto.Util.number as NUM
ImportError: No module named Util.number
C:\u-904\
# installing zipimport hook
import zipimport # builtin
# installed zipimport hook
'import site' failed; traceback:
ImportError: No module named site
# C:\u-904\
import warnings # precompiled from C:\u-904\
# C:\u-904\
import types # precompiled from C:\u-904\
# C:\u-904\
import linecache # precompiled from C:\u-904\
# C:\u-904\
import os # precompiled from C:\u-904\
import nt # builtin
# C:\u-904\
import ntpath # precompiled from C:\u-904\
# C:\u-904\
import stat # precompiled from C:\u-904\
# C:\u-904\
import UserDict # precompiled from C:\u-904\
# C:\u-904\
import copy_reg # precompiled from C:\u-904\
Python 2.3.5 (#62, Feb 8 2005, 16:23:02) [MSC v.1200 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> ^Z
# clear __builtin__._
# clear sys.path
# clear sys.argv
# clear sys.ps1
# clear sys.ps2
# clear sys.exitfunc
# clear sys.exc_type
# clear sys.exc_value
# clear sys.exc_traceback
# clear sys.last_type...
Please add some logging before openpgp\
log.debug(path)
import Util
log.debug(
To disable dir deletion edit src/pylauncher/
delete_
Then rebuild
it's actually
import sys
log.debug(sys.path)
import Crypto
log.debug(
import Crypto.Util
log.debug(
-------\
log.
NameError: name 'log' is not defined
----------
That didn't seem to work, so I tried manually:
--------
C:\u-904\
Python 2.3.5 (#62, Feb 8 2005, 16:23:02) [MSC v.1200 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import Crypto.Util
import Crypto # directory C:\u-904\
import Crypto # from C:\u-904\
# wrote C:\u-904\
import Crypto.Util # directory C:\u-904\
import Crypto.Util # from C:\u-904\
# wrote C:\u-904\
>>> Crypto.
'C:\\u-
>>> import Crypto
>>> Crypto.__file__
'C:\\u-
>>> import Crypto.Util.number
import Crypto.Util.number # from C:\u-904\
# wrote C:\u-904\
import Crypto.PublicKey # directory C:\u-904\
import Crypto.PublicKey # from C:\u-904\
# wrote C:\u-904\
import struct # builtin
>>> Crypto.
'C:\\u-
--------
2009/4/24 Agostino Russo <email address hidden>:
> it's actually
>
> import sys
> log.debug(sys.path)
> import Crypto
> log.debug(
> import Crypto.Util
> log.debug(
>
> --
> wubi-r129.exe does nothing.
> https:/
> You received this bug notification because you are a direct subscriber
> of the bug.
>
> Status in Wubi, Windows Ubuntu Installer: Confirmed
>
> del...
Yes, you need the logging module as well. Manual import seems fine.
Any word on this? Wubi still doesn't work for me as of r134. I have a ThinkPad R50e with XP SP3, just like the OP. Originally used Wubi to install 8.10, but have had this issue with the 9.04 Wubi (when picking up the debug build I have the same Util.number import error as the OP, even after removing C:\IBMTOOLS\
I have tried to install python 2.2 but cannot replicate this. You will have to add some log.debug statements in the code and narrow it down.
Any news on that?
I downloaded Wubi from http://
Hi, same result. Nothing happens. Any other ideas? Fabian
I have this exactly problem too. I'm sorry, my english is very bad.
Please post the logs, they are in your user temp folder (%temp%)
Hi Agostino,
I have not been paying any attention to this recently, but it seems to be affecting a few other people too.
As I raised this, I feel morally obliged to help you diagnose it :-)
I *think* the problem all us thinkpad users are having is that nearly all the OEM tools installed on the thinkpad are written in python (2.2).
My gut feeling is that there is a conflict somewhere between the OEM tools and the wubi installer, but I don't know how to track it down.
I suspect it is in the logger itself - the OEM config on this thinkpad explicitly defines a PYTHONPATH environment variable (%SystemDrive%
If you tell me what to try, I'll give it a go in the next few days.
Cheers,
Pete
Quick update on this:
I pulled and built wubizip from trunk this morning, and have played with it a bit.
If I modify openpgp/
# import Crypto.Util.number as NUM
Then I get the wubi frontend and a logfile (attached).
Obviously wubi chokes when it tries to check the file signatures, but this is further than I have got the python wubi to run previously.
Let me know what else to try, and I'll get back to you.
Cheers,
Pete
UPDATE: Workaround available
You may close this issue as I can now run wubi on the thinkpad.
This is actually a system configuration problem, not a bug in wubi.
The thinkpad has the environment variable PYTHONCASEOK set.
It appears to be a common known issue with certain thinkpads, as a quick search for PYTHONCASEOK will confirm. There seem to be a lot of Python developers who think IBM is a four letter word :-D
I have tried a few variations of this, with mixed results.
Executing python.exe -E main.py - works with wubizip builds
Unsetting PYTHONCASEOK at a command prompt does not allow wubi to run.
Setting PYTHONCASEOK to any value other than 1 does not allow wubi to run.
Cheers,
Pete
Feel free to add this to the FAQs if you like.
-----
Thinkpad Workaround to use Wubi 9.04
1. Right click on My Computer, select Properties
OR
Go to Control Panel -> System
2. On the 'Advanced' tab click the 'Environment Variables' button.
3. In the bottom pane ('System Variables') scroll down the list until you find the PYTHONCASEOK variable.
4. Highlight the PYTHONCASEOK line and click the 'Delete' button. (Just changing the value will not work.)
5. Reboot your thinkpad.
6. Run wubi.exe, and enjoy :-)
7. After installing wubi, you need to re-add PYTHONCASEOK with a value of 1 or your assorted IBM tools will not work.
NOTE: You will also need to repeat steps 1-5 if you want to uninstall wubi in the future, as the wubi uninstaller will have the same problem. Don't forget to do step 7 again after running the wubi uninstaller.
Actually I found an easier method:
INSTALL:
Create a new text file containing these two lines:
SET PYTHONCASEOK=
wubi.exe
Save the file with the name wubi-install.bat in the same folder as your downloaded wubi.exe
Double click the wubi-install.bat batch file to install.
UNINSTALL
Create a new text file containing these two lines:
SET PYTHONCASEOK=
uninstall-wubi.exe
Save the file with the name wubi-uninstall.bat in the same folder as your uninstaller
(Default is C:\ubuntu)
Double click the wubi-uninstall.bat batch file to uninstall.
Both these batch files are in the attached zip file.
-
@Agostino :
I tried adding
SetEnvironmentV
to pyrun.c and rebuilding, but it doesn't seem to work.
Adding "-E" to argv[] wouldn't be a good idea, as that would prevent all the other PYTHON* environment variables working :-)
Could you printout sys.path and os.environ within wubi? You can use log.debug within application.py
Hi again Ago,
With PYTHONCASEOK set, application.py fails on
from wubi.backends.win32 import WindowsBackend
The cause of this, as mentioned above is openpgp/
import Crypto.Util.number as NUM
Because python is running case insensitive search, the first match this gets for Crypto is actually crypto in the openpgp/sap directory. This means the module is actually trying :
import openpgp.
with the results we have seen all along :
ImportError: No module named Util.number
You can investigate this further on your build machine by setting the environment variable:
export PYTHONCASEOK=1
Then when you 'make runbin' you will get exactly the same behaviour in wine.
Cheers,
Pete
-----
Extra info as requested.
# log extract (reformatted for clarity)
06-02 10:56 DEBUG root: ** extra info BEGIN **
06-02 10:56 DEBUG root: sys.path = [
'C:
'C:
'C:
'C:
'C:
'C:
'C:
'C:
'C:
]
06-02 10:56 DEBUG root: os.environ = {
'TMP': 'C:\\DOCUME~
'COMPUTERNAME': 'THINKPAD',
'USERDOMAIN': 'THINKPAD',
'COMMONPROG
'PROCESSOR_
'PROGRAMFILES': 'C:\\Program Files',
'PROCESSOR_
'SYSTEMROOT': 'C:\\WINDOWS',
'PATH': 'C:\\PROGRAM FILES\\
'IBMSHARE': 'C:\\IBMSHARE',
'TK_LIBRARY': 'C:\\IBMTOOLS\
'TEMP': 'C:\\DOCUME~
'PROCESSOR_
'ALLUSERSPR
'SESSIONNAME': 'Console',
'HOMEPATH': '\\Documents and Settings\\Stacey',
'RRU': 'C:\\Program Files\\IBM\\IBM Rapid Restore Ultra\\',
'USERNAME': 'Stacey',
'LOGONSERVER': '\\\\THINKPAD',
'PROMPT': '$P$G',
'COMSPEC': 'C:\\WINDOWS\
'PYTHONPATH': 'C:\\IBMTOOLS\
'TCL_LIBRARY': 'C:\\IBMTOOLS\
'PATHEXT': '.COM;.
'CLIENTNAME': 'Console',
'FP_
'WINDIR': 'C:\\WINDOWS',
'APPDATA': 'C:\\Documents and Settings\
'HOMEDRIVE': 'C:',
'SYSTEMDRIVE': 'C:',
'NUMBER_
'PROCESSOR_
'OS': 'Windows_NT',
'USERPROFILE': 'C:\\Documents and Settings\\Stacey'
}
06-02 10:56 DEBUG root: ** extra info END **
This bug has been reported on the Ubuntu ISO testing tracker.
A list of all reports related to this bug can be found here:
http://
It will require a special build in order to see more messages, we will provide one in the coming days. An existing python version should not change thing, but you might want to try running without python22 (it should be sufficient to remove it from PATH and PYTHONHOME env variables).
|
https://bugs.launchpad.net/wubi/+bug/365501
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Details
- Type:
Improvement
- Status:
Open
- Priority:
Major
- Resolution: Unresolved
- Affects Version/s: 1.1-rc-2
-
- Component/s: groovy-runtime
- Labels:None
- Number of attachments :
Description
As discussed with Graeme, take a Grails plugin for example that might supply a proxy "render" method for controllers, that checks for certain parameters and if present does something, else defers to the pre-existing render() method. There are several default render(...) variants overloaded, but this plugin replaces only a specific one:
def oldRender = controllerClass.metaClass.render controllerClass.metaClass.render = { Map m, Closure c -> if (!something) oldRender(m, c) }
This may work sometimes if the render method retrieved from the EMC is the one that takes Map, Closure. Sometimes it may not be though, and then you get method invocation errors when it tries to pass in the Map and Closure.
Currently the workaround is to use metaClass.getMetaMethod( name, argTypes) but this gives you a different invocation paradigm - i.e. you must call invoke() on the MetaMethod.
A couple of features then offer themselves as part or whole solutions:
- Make MetaMethod support call()/doCall() as well as invoke()
- Make EMC's methodMissing return a new MultiMetaMethodProxy object that, when call() is invoked, automatically determines which overloaded method to invoke. This could be dangerous, needs further assessment. This is the most elegant approach in my opinion, but I don't believe anywhere else in groovy treats all overloaded forms of a method as a "single" method and as such the terminology may not work. The concept is great, but maybe Groovy needs a new term for this, such as "Message" as in other OO languages. i.e. a MetaMessage is some invokation you can perform on an object using MetaMessage.call(args) but the exact method that will be called is dependent on the args, i.e. it knows about all the possible MetaMethods.
- Add a new findMethod(name, argTypes) that returns a method reference instead of MetaMethod.
|
http://jira.codehaus.org/browse/GROOVY-2310
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
NAMEI(9) MidnightBSD Kernel Developer’s Manual NAMEI(9)
NAME
namei, NDINIT, NDFREE, NDHASGIANT — pathname translation and lookup operations
SYNOPSIS
#include <sys/param.h>
#include <sys/proc.h>
#include <sys/namei.h>
int
namei(struct nameidata *ndp);
void
NDINIT(struct nameidata *ndp, u_long op, u_long flags, enum uio_seg segflg, const char *namep, struct thread *td);
void
NDFREE(struct nameidata *ndp, const uint flags);
int
NDHASGIANT(struct nameidata *ndp);
DESCRIPTION
The namei facility allows the client to perform pathname translation and lookup operations. The namei functions will increment the reference count for the vnode in question. The reference count has to be decremented after use of the vnode, by using either vrele(9) or vput(9), depending on whether the LOCKLEAF flag was specified or not. If the Giant lock is required, namei will acquire it if the caller indicates it is MPSAFE, in which case the caller must later release Giant based on the results of NDHASGIANT().
The NDINIT() function is used to initialize namei components. It takes the following arguments:
ndp
The struct nameidata to initialize.
op
The operation which namei() will perform. The following operations are valid: LOOKUP, CREATE, DELETE, and RENAME. The latter three are just setup for those effects; just calling namei() will not result in VOP_RENAME() being called.
flags
Operation flags. Several of these can be effective at the same time.
segflg
UIO segment indicator. This indicates if the name of the object is in userspace (UIO_USERSPACE) or in the kernel address space (UIO_SYSSPACE).
namep
Pointer to the component’s pathname buffer (the file or directory name that will be looked up).
td
The thread context to use for namei operations and locks.
NAMEI OPERATION FLAGS
The namei() function takes the following set of ‘‘operation flags’’ that influence its operation:
LOCKLEAF
Lock vnode on return. This is a full lock of the vnode; the VOP_UNLOCK(9) should be used to release the lock (or vput(9) which is equivalent to calling VOP_UNLOCK(9) followed by vrele(9), all in one).
LOCKPARENT
This flag lets the namei() function return the parent (directory) vnode, ni_dvp in locked state, unless it is identical to ni_vp, in which case ni_dvp is not locked per se (but may be locked due to LOCKLEAF). If a lock is enforced, it should be released using vput(9) or VOP_UNLOCK(9) and vrele(9).
WANTPARENT
This flag allows the namei() function to return the parent (directory) vnode in an unlocked state. The parent vnode must be released separately by using vrele(9).
MPSAFE
With this flag set, namei() will conditionally acquire Giant if it is required by a traversed file system. MPSAFE callers should pass the results of NDHASGIANT() to VFS_UNLOCK_GIANT in order to conditionally release Giant if necessary.OBJ
Do not call vfs_object_create() for the returned vnode, even though it meets required criteria for VM support.
NOFOLLOW
Do not follow symbolic links (pseudo). This flag is not looked for by the actual code, which looks for FOLLOW. NOFOLLOW is used to indicate to the source code reader that symlinks are intentionally not followed.
SAVENAME
Do not free the pathname buffer at the end of the namei() invocation; instead, free it later in NDFREE() so that the caller may access the pathname buffer. See below for details.
SAVESTART
Retain an additional reference to the parent directory; do not free the pathname buffer. See below for details.
ALLOCATED ELEMENTS
The nameidata structure is composed of the following fields:
ni_startdir
In the normal case, this is either the current directory or the root. It is the current directory if the name passed in does not start with ‘/’ and we have not gone through any symlinks with an absolute path, and the root otherwise.
In this case, it is only used by lookup(), and should not be considered valid after a call to namei(). If SAVESTART is set, this is set to the same as ni_dvp, with an extra vref(9). To block NDFREE() from releasing ni_startdir, the NDF_NO_STARTDIR_RELE can be set.
ni_dvp
Vnode pointer to directory of the object on which lookup is performed. This is available on successful return if LOCKPARENT or WANTPARENT is set. It is locked if LOCKPARENT is set. Freeing this in NDFREE() can be inhibited by NDF_NO_DVP_RELE, NDF_NO_DVP_PUT, or NDF_NO_DVP_UNLOCK (with the obvious effects).
ni_vp
Vnode pointer to the resulting object, NULL otherwise. The v_usecount field of this vnode is incremented. If LOCKLEAF is set, it is also locked.
Freeing this in NDFREE() can be inhibited by NDF_NO_VP_RELE, NDF_NO_VP_PUT, or NDF_NO_VP_UNLOCK (with the obvious effects).
ni_cnd.cn_pnbuf
The pathname buffer contains the location of the file or directory that will be used by the namei operations. It is managed by the uma(9) zone allocation interface. If the SAVESTART or SAVENAME flag is set, then the pathname buffer is available after calling the namei() function.
To only deallocate resources used by the pathname buffer, ni_cnd.cn_pnbuf, then NDF_ONLY_PNBUF flag can be passed to the NDFREE() function. To keep the pathname buffer intact, the NDF_NO_FREE_PNBUF flag can be passed to the NDFREE() function.
FILES
src/sys/kern/vfs_lookup.c
SEE ALSO
uio(9), uma(9), VFS(9), VFS_UNLOCK_GIANT(9), vnode(9), vput(9), vref(9)
AUTHORS
This manual page was written by Eivind Eklund 〈eivind@FreeBSD.org〉 and later significantly revised by Hiten M. Pandya 〈hmp@FreeBSD.org〉.
BUGS
The LOCKPARENT flag does not always result in the parent vnode being locked. This results in complications when the LOCKPARENT is used. In order to solve this for the cases where both LOCKPARENT and LOCKLEAF are used, it is necessary to resort to recursive locking.
Non-MPSAFE file systems exist, requiring callers to conditionally unlock Giant.
MidnightBSD 0.3 September 21, 2005 MidnightBSD 0.3
|
http://www.midnightbsd.org/documentation/man/NDINIT.9.html
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
Details
- Type:
New Feature
- Status: Closed
- Priority:
Blocker
- Resolution: Fixed
- Affects Version/s: 0.8.0, 0.8.1
-
- Component/s: controller, log, replication
-
Description
One proposal of this API is here -
Issue Links
- blocks
KAFKA-1074 Reassign partitions should delete the old replicas from disk
- Resolved
-
- relates to
-
KAFKA-1177 DeleteTopics gives Successful message even if the specified Topic is not present
- Open
Activity
The delete topic logic can follow the same logic in partition reassignment.
1. Create a ZK path to indicate that we want to delete a topic.
2. The controller registers a listener to the deleteTopic path and when the watcher is triggered:
2.1 Send stopReplica requests to each relevant broker.
2.2 Each broker then delete the local log directory.
2.3 Once the stopReplica request completes, the controller deletes the deleteTopic path and the delete topic command completes.
I've been doing a lot of manual resetting of data in Kafka and one thing I noticed is that clients don't always behave so well when I do that. So when you implement this you should probably also make sure that the current kafka clients behave well when a topic is removed, i.e. error or reset as appropriate.
I'll take this on, hoping to get a patch in this weekend.
Apologies, I began work on this jira before going on break. Now that I'm back, I should be able to wrap it up.
Hey Prashanth, how's this JIRA coming along ?
Prashanth, mind if I take a look at this ? I have some time this week.
Apologies for not getting to this. Neha, go ahead and run with it.
Here is a broad description of how delete topic can work in Kafka -
1. The delete topic tool writes to a /delete_topics/[topic] path
2. The controller's delete topic listener fires and does the following -
2.1 List the partitions for the topic to be deleted
2.2 For each partition, do the following -
2.2.1 Move the partition to OfflinePartition state. Take the leader offline. From this point on, all produce/consume requests for this partition will start failing
2.2.2 For every replica for a partition, first move it to OfflineReplica state (it is removed from isr) then to NonExistentReplica (send stop-replica request with delete flag on to each replica)
2.3 Delete the /brokers/topics/[topic] path from zookeeper
2.4 Delete the /delete_topics/[topic] path to signify completion of the delete operation
I'd also like to see an auto-delete feature, where by a topic can be automatically be deleted, after it has been garbage collected, and has no more messages. This could be set to happen automatically, after an expiration time. This may require exposing an api on each broker so a broker can ask if any other brokers have messages pending for a topic, before deciding the topic should be removed.
Delete topic admin path schema updated at
It makes sense to include the delete topic feature in the 0.8 beta release since most people might create test topics that would require cleanup?
Actually scratch 5 in "How topics are deleted". Topics are always deleted from zk.
Thanks for the patch! Some suggestions -
1. In controller, it is important to not let a long delete topics operation block critical state changes like elect leader. To make this possible, relinquish the lock between the deletes for individual topics
2. If you do relinquish the lock like I suggested above, you need to now take care of avoid leader elections for partitions being deleted
3. Since now you will handle topic deletion for individual topics, it might be worth changing the zookeeper structure for delete topics so status on individual topic deletes gets reported accordingly. One way to do this is to introduce a path to indicate that the admin tool has initiated delete operation for some topics (/admin/delete_topics_updated), and create child nodes under /admin/delete_topics, one per topic. As you complete individual topic deletion, you delete the /admin/delete_topics/<topic> path. Admin tool creates the /admin/delete_topics/<topic> path and updates /admin/delete_topics_updated. Controller only registers a data change watcher on /admin/delete_topics_updated. When this watcher fires, it reads the children of /admin/delete_topics and starts topic deletion.
4. On startup/failover, the controller registers a data change watch on /admin/delete_topics_updated, and then reads the list of topics under /admin/delete_topics.
5. Admin tool never errors out since it just adds to the list of deleted topics
On the broker side, there are a few things to be done correctly -
1. KafkaApis
After receiving stop replica request, request handler should reject produce/fetch requests for partitions to be deleted by returning PartitionBeingDeleted error code. Once the delete is complete, the partition can be removed from this list. In that case, it will return UnknownTopicOrPartition error code
2. ReplicaManager
2.1 Remove unused variable leaderBrokerId from makeFollower()
2.2 Fix the comment inside recordFollowerPosition to say "partition hasn't been created or has been deleted"
2.3 Let the partition do the delete() operation. This will ensure that the leaderAndIsrUpdateLock is acquired for the duration of the delete. This will avoid interleaving leader/isr requests with stop replica requests and simplify the reasoning of log truncate/highwatermark update operations
3. Partition - Introduce a new delete() API that works like this -
1. Acquire leaderIsrUpdateLock so that create log does not interfere with delete log. Also remove/add fetcher does not interfere with delete log.
2. Removes fetcher for the partition
3. Invoke delete() on the log. Be careful how current read/write requests will be affected.
4. LogManager
1. When deleteLogs() is invoked, remove logs from allLogs. This will prevent flush being invoked on the log to be deleted.
2. Invoke log.delete() on every individual log.
3. log.markDeletedWhile(_ => true) will leave an extra rolled over segment in the in memory segment list
5. Log
1. Log delete should acquire "lock" to prevent interleaving with append/truncate/roll/flush etc
Following steps need to be taken during log.delete()
2. Invoke log.close()
3. Invoke segmentList.delete(), where SegmentList.delete() only does contents.set(new Array[T](0))
4. Invoke segment.delete()
5. Update a flag deleted = true
Few questions to be thought about -
- Are any changes required to roll(). If deleted flag is true, then skip roll().
- Are any changes required to markDeletedWhile(). Same as roll. If deleted flag is true, skip
- Are any changes required to flush() ? This can be invoked either during roll or by append. It cannot be invoked by the flush thread since that is disabled for logs to be deleted. This needs to be handled by using lastOption.
- See what to do with truncateTo(). This is used during make follower in Partition. This won't interfere with delete since Partition's delete acquires the leaderIsrUpdateLock. Another place that uses truncateTo() is the handleOffsetOutOfRange on the follower. This won't interleave since the replica fetcher was already removed before attempting to delete the log
- See what to do with truncateAndStartWithNewOffset(). This won't interleave with delete log since the replica fetcher was already removed before attempting to delete the log
- What if the broker is writing from the log when stop replica is deleting it ? Since log.delete() acquires the "lock", either append starts before or after the delete. If it starts after, then the changed mentioned in #7 and #9 should be made.
- What if the broker is about to write to the log that is under deletion ? Same as above
- What if the broker is reading from the log that is being deleted ? It will get a ClosedChannelException, I think. This needs to be conformed. The test can run a consumer that is consuming data from beginning of a log and you can invoke delete topic.
- What if the broker about to read from the log that is being deleted ? It will try reading from a file channel that is closed. This will run into ClosedChannelException. Should we catch ClosedChannelException and log an appropriate error and send PartitionDeleted error code when that happens ?
- What happens to the partition entry from the high watermark file when it is being deleted ? When partition is removed from allPartitions, the next high watermark checkpoint removes the partition's entry from the high watermark file.
- What happens to requests in the purgatory when partition has been deleted ? When a partition has been removed from allPartitions, then the requests in the purgatory will send UnknownTopicOrPartitionCode back to the client.
6. Log.read()
val first = view.head.start
This needs to change to headOption. Return empty message set when this returns None
7. Log.flush()
segments.view.last.flush()
Need to change the above to segments.view.lastOption. If that returns None, then return without flushing.
8. SegmentList.delete()
contents.set(new Array[T](0))
9. Log.append()
Fix this to use lastOption - val segment = maybeRoll(segments.view.last)
If None, then return (-2,-2) to signify that the log was deleted
Replying to a few comments, will follow up with changes according to others:
On the controller side:
1. I think that the delete topics command will not take too long to complete, in any case it won't take any longer than Preferred Replica Election command. Both commands write to /admin zk path and trigger listeners that may send send some requests and update some zk paths. I believed that the reason for relinquishing the lock in ReassignPartitions listeners after every partition reassignment was that the controller waits for the new replicas to join the ISR, which could take long.
2. Hence I think that we should not relinquish the lock between deletion of two topics.
3. So maybe we don't need to use two separate zk paths? If we rerun the DeleteTopicsCommand, it should complain that the topics are absent in zookeeper if the topics were successfully deleted.
On the broker side:
4. LogManager:
1. deleteLogs() indeed removes the logs from allLogs.
2. delete() is invoked on every individual log.
3. Yes, following up on this.
5. Log:
1. The lock is acquired by all these functions, but I will double check if it needs to be acquired at the top level for our purpose.
3. Well, log.delete() takes care of deleting the individual segments.
Will make modifications to Log*, hopefully they will address all your comments.
Let's do some zookeeper math here to see how long it takes to delete one topic, 8 partitions from a 6 node kafka cluster -
- of zk ops operation during delete topic
1 val partitionAssignment = ZkUtils.getPartitionAssignmentForTopics(zkClient, topics.toSeq)
7 val brokers = ZkUtils.getAllBrokersInCluster(zkClient)
1 ZkUtils.getAllReplicasOnBroker(zkClient, topics.toSeq, brokers.map(_.id)) (This is a redundant read from zookeeper, so reuse the info read in step 1)
2 removeReplicaFromIsr -> getLeaderIsrAndEpochForPartition, conditionalUpdatePersistentPath
9 removeFromTopicsBeingDeleted -> readDataMaybeNull (1), deletePath (8)
20 zookeeper ops. With 10ms per op, (which is a what a zookeeper cluster that kafka consumers and brokers share does in best case), that is 200ms per topic
With 50 such topics, it is 10 seconds. That is the amount of time you are starving other partitions from being available!
What you can do, for simplicity purposes, is keep the existing long lock on the controller side for this patch. We can improve it later or in 0.8.1
Also, the log side of your patch does not acquire the lock. You used the delete APIs that were used by unit tests so far. So they don't deal with the issues I've mentioned above in my comments.
Regarding LogManager - Let's look at the modified version of your patch and see if that solves the problems I've outlined above wrt to interleaving other operations with delete log.
Thanks for the excellent explanation. Some of these zk operations will not be repeated for every topic, for example, ZkUtils.getAllBrokersInCluster(zkClient) or removeFromTopicsBeingDeleted. But anyways, it seems that the cost of ZK operations is even worse because removeReplicaFromIsr() makes 2 Zk operations for each replica, which would be responsible for 2*50*8*3(repl-factor) = 2400 zk operations.
I agree with you, let's optimize this after log deletion works correctly.
Similarly, preferred replica election will suffer from a very high number of zk operations since the callbacks will elect leader for every partition. So, we could relinquish the lock in preferred replica election too.
Thanks for the patch. Even though the patch is not big, it touches quite a few critical components such as controller, replica manager, and log. It will take some time to stabilize this. We probably should consider pushing this out of 0.8 so that we don't delay the 0,8 release too much. One quick comment:
1. KafkaControler.onTopicsDeletion(): Why do we need to read things like partitionAssignment and brokers from ZK? Could we just use the cached data in controller context?
Yes, I agree with you Jun. Attaching a temporary patch v2 for the records, which needs testing. Patch v2 reads the cached data from the controller context. We don't need to review this patch since Log has significantly changed in trunk, so I will need to rework that part.
Created reviewboard
against branch trunk
Delete topic is a pretty tricky feature and there are multiple ways to solve it. I will list the various approaches with the tradeoffs here. Few things to think about that make delete topic tricky -
1. How do you handle resuming delete topics during controller failover?
2. How do you handle re-creating topics if brokers that host a subset of the replicas are down?
3. If a broker fails during delete topic, how does it know which version of the topic it has logs for, when it restarts? This is relevant if we allow re-creating topics while a broker is down
Will address these one by one.
#1 is pretty straightforward to handle and can be achieved in a way similar to partition reassignment (through an admin path in zookeeper indicating a topic deletion that has not finished)
#2 is an important policy decision that can affect the complexity of the design for this feature. If you allow topics to be deleted while brokers are down, the broker needs a way to know that it's version of the topic is too old. This is mainly an issue since a topic can be re-created and written to, while a broker is down. We need to ensure that a broker does not join the quorum with an older version of the log. There are 2 ways to solve this problem that I could think off -
1. Do not allow topic deletion to succeed if a broker hosting a replica is down. Here, the controller keeps track of the state of each replica during topic deletion (TopicDeletionStarted, TopicDeletionSuccessful, TopicDeletionFailed) and only marks the topic as deleted if all replicas for all partitions of that topic are successfully deleted.
2. Allow a topic to be deleted while a broker is down and keep track of the "generation" of the topic in a fault tolerant, highly available and consistent log. This log can either be zookeeper or a Kafka topic. The main issue here is how many generations would we have to keep track off for a topic. In other words, can this "generation" information ever be garbage collected. There isn't a good bound on this since it is unclear when the failed broker will come back online and when a topic will be re-created. That would mean keeping this generation information for potentially a very long time and incurring overhead during recovery or bootstrap of generation information during controller or broker fail overs. This is especially a problem for use cases or tests that keep creating and deleting a lot of short lived topics. Essentially, this solution is not scalable unless we figure out an intuitive way to garbage collect this topic metadata. It would require us to introduce a config for controlling when a topic's generation metadata can be garbage collected. Note that this config is different from the topic TTL feature which controls when a topic, that is currently not in use, can be deleted. Overall, this alternative is unnecessarily complex for the benefit of deleting topics while a broker is down.
#3 is related to the policy decision made about #2. If a topic is not marked deleted successfully while a broker is down, the controller will automatically resume topic deletion when a broker restarts.
This patch follows the previous approach of not calling a topic deletion successful until all replicas have confirmed the deletion of local state for that topic. This requires the following changes -
1. TopicCommand issues topic deletion by creating a new admin path /admin/delete_topics/<topic>
2. The controller listens for child changes on /admin/delete_topic and starts topic deletion for the respective topics
3. The controller has a background thread that handles topic deletion. The purpose of having this background thread is to accommodate the TTL feature, when we have it. This thread is signaled whenever deletion for a topic needs to be started or resumed. Currently, a topic's deletion can be started only by the onPartitionDeletion callback on the controller. In the future, it can be triggered based on the configured TTL for the topic. A topic's deletion will be halted in the following scenarios -
- broker hosting one of the replicas for that topic goes down
- partition reassignment for partitions of that topic is in progress
- preferred replica election for partitions of that topic is in progress (though this is not strictly required since it holds the controller lock for the entire duration from start to end)
4. Topic deletion is resumed when -
- broker hosting one of the replicas for that topic is started
- preferred replica election for partitions of that topic completes
- partition reassignment for partitions of that topic completes
5. Every replica for a topic being deleted is in either of the 3 states -
- TopicDeletionStarted (Replica enters TopicDeletionStarted phase when the onPartitionDeletion callback is invoked. This happens when the child change watch for /admin/delete_topics fires on the controller. As part of this state change, the controller sends StopReplicaRequests to all replicas. It registers a callback for the StopReplicaResponse when deletePartition=true thereby invoking a callback when a response for delete replica is received from every replica)
- TopicDeletionSuccessful (deleteTopicStopReplicaCallback() moves replicas from TopicDeletionStarted->TopicDeletionSuccessful depending on the error codes in StopReplicaResponse)
- TopicDeletionFailed. (deleteTopicStopReplicaCallback() moves replicas from TopicDeletionStarted->TopicDeletionSuccessful depending on the error codes in StopReplicaResponse. In general, if a broker dies and if it hosted replicas for topics being deleted, the controller marks the respective replicas in TopicDeletionFailed state in the onBrokerFailure callback. The reason is that if a broker fails before the request is sent and after the replica is in TopicDeletionStarted state, it is possible that the replica will mistakenly remain in TopicDeletionStarted state and topic deletion will not be retried when the broker comes back up.)
6. The delete topic thread marks a topic successfully deleted only if all replicas are in TopicDeletionSuccessful state and it starts the topic deletion teardown mode where it deletes all topic state from the controllerContext as well as from zookeeper. This is the only time the /brokers/topics/<topic> path gets deleted.
On the other hand, if no replica is in TopicDeletionStarted state and at least one replica is in TopicDeletionFailed state, then it marks the topic for deletion retry.
7. I've introduced callbacks for controller-broker communication. Ideally, every callback should be of the following format (RequestOrResponse) => Unit. BUT since StopReplicaResponse doesn't carry the replica id, this is handled in a somewhat hacky manner in the patch. The purpose is to fix the approach of upgrading controller-broker protocols in a reasonable way before having delete topic upgrade StopReplica request in a one-off way. Will file a JIRA for that.
Several integration tests added for delete topic -
1. Topic deletion when all replica brokers are alive
2. Halt and resume topic deletion after a follower replica is restarted
3. Halt and resume topic deletion after a controller failover
4. Request handling during topic deletion
5. Topic deletion and partition reassignment in parallel
6. Topic deletion and preferred replica election in parallel
7. Topic deletion and per topic config changes in parallel
Updated reviewboard against branch trunk.
Asking for a more detailed review as the patch is somewhat tested and refactored to make the topic deletion logic easier to maintain and understand.
Updated reviewboard against branch trunk
Thanks for the reviews. This is a big patch, please do submit your review even after checkin, I will fix the issues in follow up JIRAs.
Can we have
this merged now that delete support is in?
Sriram,
You can check in that patch now. You probably would have to add an additional check to see whether a partition whose leader is to be moved to the preferred replica is in a topic to be deleted, while holding the controller lock. If so, skip leader balancing.
Sriram Subramanian It will not be enough to just drop the partitions that belong to topics being deleted from the preferred replica list. In addition that, I think we may also have to leave them out while computing what the preferred replica imbalance factor is.
During controller failover, we need to remove unneeded leaderAndISRPath that the previous controller didn't get a chance to remove.
|
https://issues.apache.org/jira/browse/KAFKA-330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
Graphics.Gloss
Description
Gloss hides the pain of drawing simple vector graphics behind a nice data type and a few display functions.
Getting something on the screen is as easy as:
import Graphics.Gloss main =
display(InWindow "Nice Window" (200, 200) (10, 10))
white(
Circle80)
Once the window is open you can use the following:
- Quit - esc-key.
- Move Viewport - left-click drag, arrow keys.
- Rotate Viewport - right-click drag, control-left-click drag, or home/end-keys.
- Zoom Viewport - mouse wheel, or page up/down-keys.
Animations can be constructed similarly using the
animate.
If you want to run a simulation based around finite time steps then try
simulate.
If you want to manage your own key/mouse events then use
play.
Gloss uses OpenGL under the hood, but you don't have to worry about any of that.
Gloss programs should be compiled with
-threaded, otherwise the GHC runtime
will limit the frame-rate to around 20Hz.
Release Notes: For 1.7.0: * Tweaked circle level-of-detail reduction code. * Increased frame rate cap to 100hz. Thanks to Doug Burke * Primitives for drawing arcs and sectors. Thanks to Thomas DuBuisson * IO versions of animate, simplate and play. For 1.6.0: Thanks to Anthony Cowley * Full screen display mode. For 1.5.0: * O(1) Conversion of ForeignPtrs to bitmaps. * An extra flag on the Bitmap constructor allows bitmaps to be cached in texture memory between frames. For 1.4.0: Thanks to Christiaan Baaij: * Refactoring of Gloss internals to support multiple window manager backends. * Support for using GLFW as the window library instead of GLUT. GLUT is still the default, but to use GLFW install gloss with: cabal install gloss --flags="GLFW -GLUT"
For more information, check out.
Synopsis
- module Graphics.Gloss.Data.Picture
- module Graphics.Gloss.Data.Color
- data Display
- display :: Display -> Color -> Picture -> IO ()
- animate :: Display -> Color -> (Float -> Picture) -> IO ()
- simulate :: forall model. Display -> Color -> Int -> model -> (model -> Picture) -> (ViewPort -> Float -> model -> model) -> IO ()
- play :: forall world. Display -> Color -> Int -> world -> (world -> Picture) -> (Event -> world -> world) -> (Float -> world -> world) -> IO ()
Documentation
module Graphics.Gloss.Data.Picture
module Graphics.Gloss.Data.Color
Arguments
Open a new window and display the given picture.
Use the following commands once the window is open:
- Quit - esc-key.
- Move Viewport - left-click drag, arrow keys.
- Rotate Viewport - right-click drag, control-left-click drag, or home/end-keys.
- Zoom Viewport - mouse wheel, or page up/down-keys.
Arguments
Open a new window and display the given animation.
Once the window is open you can use the same commands as with
display.
Arguments
Run a finite-time-step simulation in a window. You decide how the model is represented, how to convert the model to a picture, and how to advance the model for each unit of time. This function does the rest.
Once the window is open you can use the same commands as with
display.
|
http://hackage.haskell.org/package/gloss-1.7.5.2/docs/Graphics-Gloss.html
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
Windows Search Overview
Windows Search is a desktop search platform that has instant search capabilities for most common file types and data types, and third-party developers can extend these capabilities to new file types and data types.
This topic is organized as follows:
- Introduction
- Technical Prerequisites
- Windows Search SDK Documentation
- History of Windows Search
- Additional Resources
- Related topics
Introduction
Windows Search is a standard component of Windows 7 and Windows Vista, and is enabled by default. Windows Search replaces Windows Desktop Search (WDS), which was available as an add-in for Windows XP and Windows Server 2003.
Windows Search is composed of three components:
Windows Search Service
The WSS organizes the extracted features of a collection of documents. The Windows Search Protocol enables a client to communicate with a server that is hosting a WSS, both to issue queries and to enable an administrator to manage the indexing server. When processing files, WSS analyzes a set of documents, extracts useful information, and then organizes the extracted information so that properties of those documents can be efficiently returned in response to queries.
A collection of documents that can be queried comprises a catalog, which is the highest-level unit of organization in Windows Search. A catalog represents a set of indexed documents that can be queried. A catalog consists of a properties table with the text or value and corresponding location (locale) stored in columns of the table. Each row of the table corresponds to a separate document in the scope of the catalog, and each column of the table corresponds to a property. A catalog may contain an inverted index (for quick word matching) and a property cache (for quick retrieval of property values).
The indexer process is implemented as a Windows service running in the LocalSystem account and is always running for all users (even if no user is logged in), which permits Windows Search to accomplish the following:
- Maintain one index that is shared among all users.
- Maintain security restrictions on content access.
- Process remote queries from client computers on the network.
The Search service is designed to protect the user experience and system performance when indexing. The following that stores the index.
Development Platform
The preferred way to access the Search APIs and create Windows Search applications is through a Shell data source. A Shell data source is a component that is used to extend the Shell namespace and expose items in a data store. A data store is a repository of data. A data store can be exposed to the Shell programming model as a container that uses a Shell data source. The items in a data store can be indexed by the Windows Search system using a protocol handler.
For example,. This interface permits you to set up the parameters of the search by using methods that create and modify search folders. If methods of this interface are not called, default values are used instead.
Accessing the Windows Search capability indirectly through the Shell data model is preferred because it provides access to full Shell functionality at the level of the Shell data model. For example, you can set the scope of a search to a library (which is a feature available in Windows 7 and later) to use the library folders as the scope of the query. Windows Search then aggregates the search results from those locations if they are in different indexes (if the folders are on different computers). The Shell data layer also creates a more complete view of items' properties, synthesizing some property values. It also provides access to search features for data stores that are not indexed by Windows Search. For example, you can search a Universal Serial Bus (USB) storage devices, portable device that uses the MTP protocol, or an File Transfer Protocol (FTP) server through the Shell data sources that provides access to those storage systems. Doing so ensures a better user experience.
Windows Search has a cache of property values that are programmatic (through OLE DB for example) and interpreted by the application's code rather than the Shell,.
User Interface
In Windows Vista and later, Windows Search is integrated into all Windows Explorer windows for instant access to search. This enables users to quickly search for files and items by file name, properties, and full-text contents. Results can also be filtered further to refine the search. Here are some more features of Windows Search:
- An instant search box in every window enables instant filtering of all items currently in view. Instant search boxes appear in the Start menu to search for programs or files, and in the upper-right corner of all Windows Explorer windows to filter the results shown. Instant search is also integrated into some other Windows features, such as Windows Media Player, to find related files.
- Documents can be tagged with keywords to group them by custom criteria that are defined by the user. Tags are metadata items that are assigned by the user or applications to make it easier to find files based on keywords that may not be in the item name or contents. For example, a set of pictures might be tagged as "Arizona Vacation 2009" to quickly retrieve later by searching for any of the included words.
- Enhanced column headers in Windows Explorer views enable sorting and grouping documents in different ways. For example, files can be sorted according to name, date modified, type, size, and tags. Documents can also be grouped according to any of these properties and each group can be filtered (hidden or displayed) as desired.
- Documents can be stacked according to name, date modified, type, size, and tags. Stacks include all documents that have the specified property and are located within any subfolder of the selected folder.
- Searches can be saved (to be retrieved later) by clicking the Save Search button in the search pane in Windows Explorer. The results will be dynamically repopulated based on the original criteria when the saved search is opened. For instructions, see Save Your Search Results.
- Preview handlers and thumbnail handlers enable users to preview documents in Windows Explorer without having to open the application that created them.
Technical Prerequisites
Before you start reading the Windows Search SDK documentation, you should have a fundamental understanding of the following concepts:
- How to implement a Shell data source.
- How to implement a handler.
- How to work in native code.
A Shell data source is a component that is used to extend the Shell namespace and expose items in a data store. In the past, the Shell data source was referred to as the Shell namespace extension. A handler is a Component Object Model (COM) object that provides functionality for a Shell item. For a list of handlers identified by the developer scenario you are trying to achieve, see "Overview of Handlers" in Windows Search as a Development Platform.
For more information about the Windows Search SDK interoperability assembly for working with COM objects that are exposed by Windows Search and other programs that use managed code, see Using Managed Code with Shell Data and Windows Search. However, note that filters, property handlers, and protocol handlers must be written in native code. This is due to potential common language runtime (CLR) versioning issues with the process that multiple add-ins run in. Developers who are new to C++ can get started with the Visual C++ Developer Center and Windows Development Getting Started.
SDK Download and Contents
In addition to meeting the listed technical prerequirements, you must also download the Windows SDK to get the Windows Search libraries. The Windows Search SDK Samples contain useful code samples and an interoperability assembly for developing with managed code. For more information on using the code samples, see Windows Search Code Samples.
Windows Search SDK Documentation
The contents of the Windows Search SDK documentation are as follows:
- Windows Search as a Development Platform
Outlines the main development scenarios in Windows Search. Provides a list of handlers identified by the development scenario you are trying to achieve, add-in installer guidelines, and implementation notes.
- Windows Search Developer's Guide
Provides explanations for Managing the Index, Querying the Index Programmatically, Extending the Index, and Extending Language Resources.
- Windows Search Reference
Documents the following categories of Windows Search interfaces: Protocol Handlers, Querying, Crawl Scope , Data Add-ins, Index Management, and Notifications. The reference documentation also includes Constants and Enumerations, Structures, Property Mappings, and the Saved Search File Format.
- Windows Search Code Samples
Lists and describes briefly the Search API code samples that are provided in the Windows 7 SDK. Most samples can be downloaded from MSDN Code Gallery. All samples are included in the Windows SDK.
- Federated Search in Windows
Describes Windows 7 support for search federation to remote data stores using OpenSearch technologies that enable users to access and interact with their remote data from within Windows Explorer.
- Related Search Technologies
Lists technologies related to Windows Search: Enterprise Search, SharePoint Enterprise Search, and legacy applications such as Windows Desktop Search 2.x and Platform SDK: Indexing Service.
- Windows Search Glossary
Defines essential terms used in Windows Search and Shell technologies.
History of Windows Search
Windows Search replaces Windows Desktop Search (WDS), which was available as an add-in for Windows XP and Windows Server 2003. WDS replaced the legacy Indexing Service from previous versions of Windows with enhancements to performance, usability, and extensibility. The new development platform supports requirements that produce a more secure and stable system. While the new querying platform is not compatible with Microsoft Windows Desktop Search (WDS) 2.x, filters and protocol handlers written for previous versions of WDS can be updated to work with Windows Search. Windows Search also supports a new property system. For information on filters, property handlers, and protocol handlers, see Extending the Index.
Windows Search is built into Windows Vista and later, and is available as a redistributable update to WDS 2.x, to support the following operating systems:
- 32-bit versions of Windows XP with Service Pack 2 (SP2).
- All x64-based versions of Windows XP.
- Windows Server 2003 with Service Pack 1 (SP1) and later.
- All x64-based versions of Windows Server 2003.
Systems running these operating systems must have Windows Search installed in order to run applications written for Windows Search. For more information, see KB article 917013: Description of Windows Desktop Search 3.01 and the Multilingual User Interface Pack for Windows Desktop Search 3.01.
Additional Resources
- For information about creating a Shell data source, see Implementing the Basic Folder Object Interfaces.
- For more information about ISearchFolderItemFactory and the DB folder data source, an overview of file type handlers (also known as Shell extension handlers and Search handlers), see Windows Search as a Development Platform.
- For community-supported message boards about as a Development Platform
- Languages Supported by Windows Search
- Using Managed Code with Shell Data and Windows Search
|
http://msdn.microsoft.com/en-us/library/aa965362(v=vs.85).aspx
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
Code. Collaborate. Organize.
No Limits. Try it Today.
This articles explains how to make an application aware of the Battery Status when in use on a laptop/tablet.
PowerMode change comes from Intel documentation on devx. The current power status comes from MSDN in a longhorn article.
See Cancellable Thread Pool for why this is here. This is a very raw and ready post as it's just a couple of searches anyway. The only bit of my code really is the constructor which ensures you have results whenever you instantiate the class.
Run the application, and then attempt to generate as many PowerChange events as possible. With a modern ACPI capable desktop PC, you will probably be able to generate Suspend and Resume events. Attempting to change the ACLineStatus is not recommended.
PowerChange
Suspend
Resume
ACLineStatus
With a laptop/tablet, you should be able to generate the full range of events as long as your battery holds some charge.
The MSDN/Intel documentation does not indicate if these power events are also available from servers via UPS. If anyone has a development machine on a UPS, I would be interested to see a trace of the events. (The file is automatically saved when the application shuts down, see frmPowerDemo_Closed().)
frmPowerDemo_Closed()
There are two parts to this code, firstly detecting PowerChange events, and secondly, finding out the current status. To detect a PowerChange, you first need to use the Microsoft.Win32 namespace, and wire in the event.
Microsoft.Win32
//At the Top of the file
using Microsoft.Win32;
//In the Constructor or elsewhere
SystemEvents.PowerModeChanged
+= new PowerModeChangedEventHandler(SystemEvents_PowerModeChanged);
Next, it's a simple case of declaring your handler in the usual method.
private void SystemEvents_PowerModeChanged(object sender,
PowerModeChangedEventArgs e)
{
The PowerModeChanged event provides the initial information as to why you get the event, e.g., PowerModeChangedEvent.Suspend, .Resume and .StatusChange. However, you will usually require slightly more complex logic, e.g.:
PowerModeChanged
PowerModeChangedEvent.Suspend
.Resume
.StatusChange
//Get the current Status
PowerStatus ps = new PowerStatus();
switch (e.Mode)
{
case PowerModes.Resume:
if (ps.BatteryFlag & _BatteryStatus.Critical
!= _BatteryStatus.Critical)
{
//Start process that will use lots of CPU
The PowerStatus class is used to provide detail to the current state. It will automatically read the current power status when it is constructed, so all you need to do is read the values off. Of most value are the BatteryFlag and ACLineStatus fields. Both of these are enumerated to provide useful information, although the BatteryFlag is a bitmapped field.
PowerStatus
BatteryFlag
This code can be used in combination with Threading to provide applications that will throttle back the CPU usage as power becomes critical.
v0.1a 13th Sept 2004.
|
http://www.codeproject.com/script/Articles/View.aspx?aid=8273
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
iMeshWrapper Struct Reference
[Mesh support]
A mesh wrapper is an engine-level object that wraps around an actual mesh object (iMeshObject). More...
#include <iengine/mesh.h>
Detailed Description
A mesh wrapper is an engine-level object that wraps around an actual mesh object (iMeshObject).
Every mesh object in the engine is represented by a mesh wrapper, which keeps the pointer to the mesh object, its position, its name, etc.
Think of the mesh wrapper as the hook that holds the mesh object in the engine. An effect of this is that the i???State interfaces (e.g. iSprite3DState) must be queried from the mesh *objects*, not the wrappers!
Note that a mesh object should never be contained in more than one wrapper.
Main creators of instances implementing this interface:
- iEngine::CreateMeshWrapper()
- iEngine::LoadMeshWrapper()
- iEngine::CreatePortalContainer()
- iEngine::CreatePortal()
- iLoader::LoadMeshObject()
- CS::Geometry::GeneralMeshBuilder::CreateMesh()
Main ways to get pointers to this interface:
- iEngine::FindMeshObject()
- iMeshList::Get()
- iMeshList::FindByName()
- iMeshWrapperIterator::Next()
- iLoaderContext::FindMeshObject()
Main users of this interface:
Definition at line 263 of file mesh.h.
Member Function Documentation do delete on the object once you don't use it anymore.
- Deprecated:
- Deprecated in 2.1. Pass zbuf mode in render mesh call delete on the object once you have done using it.
Adds a (pseudo-)instance at the given position.
Returns the instance transform shadervar.
Set a given child mesh at a specific lod level.
Note that a mesh can be at several lod levels at once.
Create a LOD control for this mesh wrapper.
This is relevant only if the mesh is a hierarchical mesh. The LOD control will be used to select which children are visible and which are not. Use this to create static lod.
Destroy the LOD control for this mesh.
After this call the hierarchical mesh will act as usual.
Find a child mesh by name.
If there is a colon in the name then this function is able to search for children too. i.e. like mesh:childmesh:childmesh.
Get the specified draw callback.
Get the number of draw callbacks.
Get a specific extra render mesh.
Get the number of extra render meshes.
Gets the z-buffer mode of a specific extra rendermesh.
- Deprecated:
- Deprecated in 2.1. Obtain zbuf mode from render mesh
Get the parent factory.
Get flags for this meshwrapper. maximum distance at which this mesh will be rendered.
Get the maximum distance at which this mesh will be rendered.
If lod was not set using variables then it will return 0.
Get the iMeshObject.
Get the minimum distance at which this mesh will be rendered.
Get the minimum distance at which this mesh will be rendered.
If lod was not set using variables then it will return 0.
Get the movable instance for this object.
It is very important to call GetMovable()UpdateMove() after doing any kind of modification to this movable to make sure that internal data structures are correctly updated.
If this mesh is a portal container you can use GetPortalContainer() to get the portal container interface.
Get the radius of this mesh and all its children.
Get the render mesh list for this mesh wrapper and given view.
Get the render priority.
Get a very inaccurate bounding box of the object in screen space.
Returns -1 if object behind the camera or else the distance between the camera and the furthest point of the 3D box.
Get the LOD control for this mesh.
This will return 0 if this is a normal (hierarchical) mesh. Otherwise it will return an object with which you can control the static LOD of this object.
Get the shader variable context of the mesh object.
Get the bounding box of this object after applying a transformation to it.
This is really a very inaccurate function as it will take the bounding box of the object in object space and then transform this bounding box.
Get the bounding box of this object in world space.
This routine will cache the bounding box and only recalculate it if the movable changes.
Get the Z-buf drawing mode.
Do a hard transform of this object.
This transformation and the original coordinates are not remembered but the object space coordinates are directly computed (world space coordinates are set to the object space coordinates by this routine). Note that some implementations of mesh objects will not change the orientation of the object but only the position.
Note also that some mesh objects don't support HardTransform. You can find out by calling iMeshObject::SupportsHardTransform(). In that case you can sometimes still call HardTransform() on the factory.
Check if this object is hit by this world space vector.
Return the collision point in world space coordinates. This version can also return the material that was hit (this will only happen if 'do_material' is true). This is not supported by all meshes so this can return 0 even if there was a hit.
Check if this mesh is hit by this object space vector.
This will do a rough but fast test based on bounding box only. So this means that it might return a hit even though the object isn't really hit at all. Depends on how much the bounding box overestimates the object. This also returns the face number as defined in csBox3 on which face the hit occured. Useful for grid structures.
- See also:
- csHitBeamResult
Check if this object is hit by this object space vector.
Return the collision point in object space coordinates. This version is more accurate than HitBeamOutline. This version can also return the material that was hit (this will only happen if 'do_material' is true). This is not supported by all meshes so this can return 0 even if there was a hit.
- See also:
- csHitBeamResult
Check if this object is hit by this object space vector.
Outline check.
- See also:
- csHitBeamResult
This routine will find out in which sectors a mesh object is positioned.
To use it the mesh has to be placed in one starting sector. This routine will then start from that sector, find all portals that touch the sprite and add all additional sectors from those portals. Note that this routine using a bounding sphere for this test so it is possible that the mesh will be added to sectors where it really isn't located (but the sphere is).
If the mesh is already in several sectors those additional sectors will be ignored and only the first one will be used for this routine.
Placing a mesh in different sectors is important when the mesh crosses a portal boundary. If you don't do this then it is possible that the mesh will be clipped wrong. For small mesh objects you can get away by not doing this in most cases.
Get the scene node that this object represents.
Remove a draw callback.
Deletes a specific extra rendermesh.
Deletes a specific extra rendermesh.
Removes a (pseudo-)instance of the mesh.
Remove a child mesh from all lod levels.
The mesh is not removed from the list of child meshes however.
Reset minimum/maximum render range to defaults (i.e.
unlimited).).
Set the parent factory (this only sets a pointer).
Set some flags with the given mask for this mesh and all children.
- Parameters:
-
Enabling flags:
csRef<iMeshWrapper> someWrapper = ...; someWrapper->SetFlagsRecursive (CS_ENTITY_INVISIBLE | CS_ENTITY_NOCLIP);
Disabling flags:
csRef<iMeshWrapper> someWrapper = ...; someWrapper->SetFlagsRecursive (CS_ENTITY_INVISIBLE | CS_ENTITY_NOCLIP, 0);
- Remarks:
- To set flags non-recursive, use GetFlags().Set().
Set the maximum distance at which this mesh will be rendered.
By default this is 0.
Set the maximum distance at which this mesh will be rendered.
This version uses a variable. By default this is 1000000000.0.
Set the iMeshObject.
Set the minimum distance at which this mesh will be rendered.
By default this is 0.
Set the minimum distance at which this mesh will be rendered.
This version uses a variable. By default this is -1000000000.0.
The renderer will render all objects in a sector based on this number.
Low numbers get rendered first. High numbers get rendered later. Z-buf drawing mode to use for this object. 2.0 by doxygen 1.6.1
|
http://www.crystalspace3d.org/docs/online/api-2.0/structiMeshWrapper.html
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
Tweet Published Apr 26, 2011 | The Sencha Dev Team | Guide | Medium Last Updated Jul 11, 2011 This Guide is most relevant to Ext JS, 4.x. Guide guides/grid/README.js is not currently available Share this post: Leave a reply
6 Comments
Olivier Pons3 years ago
That’s a very nice tutorial, but is there any way to handle key (up / press / down) events?
If the user press the “del” key => suppress current record;
If the user press the “enter” key => edit current record;
If the user press the “insert” key => create new record.
How would you do this?
DK3 years ago
I used this code as an example ... my jsp includes ext-debug.js ... I keep getting “Uncaught TypeError: Cannot call method ‘substring’ of undefined” on line 5981 —>(if (namespace === from || namespace.substring(0, from.length) === from) {) What is it complaining about ?
Bo3 years ago
tooltip for grid - trying to utilize MVC to create tooltip (e.g.);
care to show example of how that might look?
slemmon3 years ago
@Olivier - Check out this page:
It’s for older Ext versions, but has a KeyMap section that I think might help you out.
Craig P3 years ago
I tried to work this in to the existing MVC tutorial.
As for the paging, either docked on in-line scrolling: it didn’t work. Surprise surprise.
Tirumalasetti3 years ago
I’m new to Ext Js. Starting working on Ext Js 4 since couple of days.
Pagination doesn’t work the below mentioned code. I mean store does not reflect with pageSize=n (Eg; 1,2,3,...etc).
Ext.create(‘Ext.data.Store’, {
model: ‘User’,
autoLoad: true,
pageSize: 4,
proxy: {
type: ‘ajax’,
url : ‘data/users.json’,
reader: {
type: ‘json’,
root: ‘users’,
totalProperty: ‘total’
}
}
});
Ext.create(‘Ext.grid.Panel’, {
store: ‘User’,
columns: ...,
dockedItems: [{
xtype: ‘pagingtoolbar’,
store: userStore, // same store GridPanel is using
dock: ‘bottom’,
displayInfo: true
}]
});
Wasted couple of days to come to this conclusion. After googling i got the partly pagination solution with PaginationMemoryProxy.js. Again if i want to use Ajax proxy this doesn’t work out. Please fix this issue so that other folks should not be effected.
Leave a comment:Commenting is not available in this channel entry.
|
http://www.sencha.com/learn/the-grid-component/?_escaped_fragment_=/guide/grid-section-paging
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
How can i search nd compare strings using parsing
This is a discussion on Searching and Comparing Strings Using Parsing within the C++ Programming forums, part of the General Programming Boards category; How can i search nd compare strings using parsing...
How can i search nd compare strings using parsing
Your question is a little vague... mind elaborating? This might help:
Also, post any code that's relevant to the question (eg. Attempts at it)
Without more details of what you want, we can't answer adequately. Here is a potential solution, however:
-Prelude-PreludeCode:#include <iostream> #include <string> #include <cstdlib> namespace { const std::string to_find = "arbitrary pattern"; } int main() { std::string search_buf; std::string::size_type found; // Fill search_buf with the data to search found = search_buf.find ( to_find ); if ( found != search_buf.npos ) std::cout<<"\""<< to_find <<"\" was found!"<<std::endl; else std::cout<<"\""<< to_find <<"\" was not found."<<std::endl; return EXIT_SUCCESS; }
My best code is written with the delete key.
|
http://cboard.cprogramming.com/cplusplus-programming/25375-searching-comparing-strings-using-parsing.html
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
class my_bidirectional_iterator : public bidirectional_iterator<double> { ... };This declares my_bidirectional_iterator to be a Bidirectional Iterator whose value type is double and whose distance type is ptrdiff_t. If Iter is an object of class my_bidirectional_iterator, then iterator_category(Iter) will return bidirectional_iterator_tag(), value_type(Iter) will return (double*) 0, and distance_type(Iter) will return (ptrdiff_t*) 0.
[1] It is not required that a Bidirectional Iterator inherit from the base bidirectional_iterator. It is, however, required that the functions iterator_category, distance_type, and value_type be defined for every Bidirectional Iterator. (Or, if you are using the iterator_traits mechanism, that iterator_traits is properly specialized for every Bidirectional Iterator.) Since those functions are defined for the base bidirectional_iterator, the easiest way to ensure that are defined for a new type is to derive that class from bidirectional_iterator and rely on the derived-to-base standard conversion of function arguments.
|
http://idlebox.net/2006/apidocs/sgi-stl-v3.3.zip/bidirectional_iterator.html
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
Code. Collaborate. Organize.
No Limits. Try it Today.
Wireshark is a powerful open source tool used to dissect Ethernet packets. Have you ever wondered what it takes to implement your own custom dissector? Furthermore, have you attempted to learn Wireshark's API and found it difficult to understand? This article will attempt to demystify the development of your very own protocol dissector. This article uses Amin Gholiha's "A Simple IOCP Server/Client class" [^] as a basis for dissection, thus producing the AMIN protocol.
The Wireshark developer's guide [^] features a section on setting up the Win32 environment, which I found to be invaluable. This section will paraphrase much of the information found there.
If you do not have VS2005/VS2003 etc., you will need to download and install "Visual C++ 2005 Express Edition"[^].
You must download and install the Platform SDK Server 2003 R2[^].
This guide will not go into great details about the Cygwin package. In short, it allows Wireshark to be compiled on Windows and Linux – which is quite a feat. Download the Cygwin installer and start it.
At the "Select Packages" page, you will need to select some additional packages which are not installed by default. Navigate to the required Category/Package row, and click on the "Skip" item in the "New" column so it shows a version number for:
After clicking the Next button several times, the setup will then download and install the selected packages (this may take a while).
Get the Python 2.4 installer and install Python into the default location. Note: Python 2.5 doesn't work out of the box, so avoid it.
Subversion is not required to build Wireshark, but it is required to follow this article. Subversion allows you to grab the latest source to work with.
My personal choice for a subversion client is TortoiseSVN[^]. Download and install TortoiseSVN[^]. You may need to reboot after the installation to enable the context menu options. This is a pretty nifty feature that lets you right-click on folders in the "explorer" view and grab a source tree.
You may be wondering at this point why you are getting the entire source tree just to make a dissector. The source tree contains the "plugins" directories that contain examples and a place to build your dissector.
Open C:\Wireshark\config.nmake using Notepad or your favorite text editor. The following sections must be updated:
# "Microsoft Visual Studio 6.0" - RECOMMENDED
# Visual C++ 6.0, _MSC_VER 1200, msvcrt.dll (version 6)
#MSVC_VARIANT=MSVC6
# "Microsoft Visual Studio .NET (2002)" - WORKS
# Visual C++ 7.0, _MSC_VER 1300, msvcr70.dll
#MSVC_VARIANT=MSVC2002
# "Microsoft .Net Framework SDK Version 1.0" - WORKS
# needs additional Platform SDK installation
# Visual C++ 7.0, _MSC_VER 1300, msvcr70.dll
#MSVC_VARIANT=DOTNET10
# "Microsoft Visual Studio .NET 2003" - WORKS
# Visual C++ 7.1, _MSC_VER 1310, msvcr71.dll
#MSVC_VARIANT=MSVC2003
# "Microsoft .Net Framework SDK Version 1.1" - WORKS
# needs additional Platform SDK installation
# Visual C++ 7.1, _MSC_VER 1310, msvcr71.dll
#MSVC_VARIANT=DOTNET11
# "Microsoft Visual Studio 2005" - WORKS
# Visual C++ 8.0, _MSC_VER 1400, msvcr80.dll
MSVC_VARIANT=MSVC2005 <------ This is my compiler. All others are commented out.
# "Microsoft Visual C++ 2005 Express Edition" - WORKS
# needs additional Platform SDK installation
# Visual C++ 8.0, _MSC_VER 1400, msvcr80.dll
#MSVC_VARIANT=MSVC2005EE
# "Microsoft .Net Framework 2.0 SDK" - WORKS
# needs additional Platform SDK installation
# Visual C++ 8.0, _MSC_VER 1400, msvcr80.dll
#MSVC_VARIANT=DOTNET20
Your paths may be slightly different depending on the compiler you use. In the example zip file, you will find a step1.bat, step2.bat, and a step3.bat. I made these batch files to simplify this step. I know there are more fancy ways of constructing the batch files, so please feel free to submit your own.
From the C:\WireShark directory, execute:
C:\wireshark> Nmake –f Makefile.nmake verify_tools
Your output should look like:
cl: /cygdrive/c/Programme/Microsoft Visual Studio 8/VC/BIN/cl
link: /cygdrive/c/Programme/Microsoft Visual Studio 8/VC/BIN/link
nmake: /cygdrive/c/Programme/Microsoft Visual Studio 8/VC/BIN/nmake
bash: /usr/bin/bash
bison: /usr/bin/bison
flex: /usr/bin/flex
env: /usr/bin/env
grep: /usr/bin/grep
/usr/bin/find: /usr/bin/find
perl: /usr/bin/perl
env: /usr/bin/env
C:/python24/python.exe: /cygdrive/c/python24/python.exe
sed: /usr/bin/sed
unzip: /usr/bin/unzip
wget: /usr/bin/wget
If something is missing, you may have to repeat the Cygwin install.
If you have closed your cmd.exe, you will have to reopen it and execute Step 8. You can use the step1, step2, step3 batch files to simplify the process. From C:\Wireshark, execute:
nmake –f Makefile.nmake setup (This step may take a little while to complete.)
nmake –f Makefile.nmake distclean
If you have closed your cmd.exe, you will have to reopen it and execute Step 8. You can use the step1, step2, step3 batch files to simplify the process.
nmake –f Makefile.nmake all (this step will take quite a while to complete)
You're done; provided the above steps worked!
I like the idea of distributing my version to friends. You can easily create an installer by doing the following:
nmake –f Makefile.nmake packaging
To launch Wireshark, you simply can run C:\wireshark\wireshark-gtk2\wireshark.exe and check if it starts. I found that I needed to download and install Wireshark before my build worked. This is probably due to not having WinPcap installed prior. You can download and install Wireshark from here.
Note: If you are using Visual Studio 2005, you will have to use your build of Wireshark to test the dissector. For some reason, compiling dissector plug-ins with Visual Studio 2005 prevents their use in the mainstream release of Wireshark. If you are using a different compiler, you may not have to use the compiled version of Wireshark.
Amin was nice enough to code up an excellent IOCP client/server example. His code uses a very simple protocol to transmit data via TCP. This article is designed for the beginner, so we will only be dissecting the text messages sent from the client/server. The file transfer messages are a tad more complex, so we will ignore those for now.
All of Amin's packets are prefixed with a four (4) byte long value which indicates the size of the package. This value is presented in network order over the wire, or, MSB. Network order means that the bytes are ordered in such a way that the most significant bytes are first. For example, let's say your length is 12 decimal. This would be 0x0000000c in hex (if you are storing the value in a long). If you read this value off the "wire", it would appear as 0c 00 00 00; this is "Network" order. You may be wondering why the bytes are reversed. If you think about it as an array of bytes, instead of a long, each byte is written in order. In this case, byte[0] is 0x0c, so it's written first. This is an important concept to understand.
When we use Wireshark to dissect the packet, it is important to understand that 2 and 4 byte integer values are always written in "Network" order, or LSB. In other protocols, the order may be Host – which would be LSB.
Note: The length accounts for the bytes to follow, meaning that the four (4) bytes which represent the length value are not included. For example, if you sent a packet that has 12 characters of text, the length would be 12+1 = 13 (the one byte is the type), as opposed to 12+1+4 = 17.
Amin follows the length prefix with a byte that indicates the "type" of the packet he is delivering. We will only be dealing with type 0x00, which is text. Note: a single byte is not subject to byte order because it's a single byte.
This field is simply a null terminated string which can be entered into Amin's MFC GUI. The default string is 'ABCDEFGHIJKLMNOPQRST123456789'.
You can download the Simple IOCP Client/Server written by Amin here. Note: This article does not discuss the basic use of Wireshark. This article assumes you have used it before and understand how to use it in a basic sense. Here is what Wireshark looks like without a dissector for the AMIN protocol:
Notice that we are simply given a field called "Data". Within the data portion, we can recognize our AMIN protocol based on the "1f 00 00 00 00" package length. The type follows, and after that is the "ABC.." ASCII.
The Wireshark source tree contains a directory called Plugins, which provides a reasonable amount of examples. However, it was impossible to find a really "simple" example to use. So, in combination with the H223 dissector, random examples from the internet, and the developer guide, I have prepared a simple example and placed it in the source zip file. The example can be found in the AMIN\ directory.
In order to compile your own protocol, you must create a set of files to compile your dissector. The easiest thing to do is take the following files from the AMIN\ directory in the source zip file.
Once you have copied the files into C:\Wireshark\plugins\yourprotocolname, you can begin editing them.
You can use a text editor of your choice to open packet-yourprotocol.c. Let's take it line by line:
#ifdef HAVE_CONFIG_H
# include "config.h"
#endif
#include <span class="code-keyword"><stdio.h></span>
#include <span class="code-keyword"><glib.h></span>
#include <span class="code-keyword"><epan/packet.h></span>
#include <span class="code-keyword"><string.h></span>
All dissectors use some standard headers. You need the config.h, glib.h, and packet.h for sure.
#define PROTO_TAG_AMIN "AMIN"
I like to avoid hard coding constants when I can.
/* Wireshark ID of the AMIN protocol */
static int proto_amin = -1;
This value is very important to Wireshark. Wireshark uses this to identify our protocol.
/* These are the handles of our subdissectors */
static dissector_handle_t data_handle=NULL;
static dissector_handle_t amin_handle;
The dissector handle is what Wireshark uses to reference this dissector.
void dissect_amin(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree);
This is a forward declaration of our dissection function. We will pass this function to a registration function later on.
static int global_amin_port = 999;
This is the port that Wireshark will use to determine if the packet belongs to the AMIN protocol.
static const value_string packettypenames[] = {
{ 0, "TEXT" },
{ 1, "SOMETHING_ELSE" },
{ 0, NULL }
};
Here is where we add some optional text strings representing packet types. You can define many of these depending on the needs of your own protocol. It adds a level of detail that makes the dissector look well thought out.
static gint hf_amin = -1;
static gint hf_amin_header = -1;
static gint hf_amin_length = -1;
static gint hf_amin_type = -1;
static gint hf_amin_text = -1;
/* These are the ids of the subtrees that we may be creating */
static gint ett_amin = -1;
static gint ett_amin_header = -1;
static gint ett_amin_length = -1;
static gint ett_amin_type = -1;
static gint ett_amin_text = -1;
These allow us to attach IDs to the subcomponents of our protocol.
void proto_reg_handoff_amin(void)
{
static gboolean initialized=FALSE;
if (!initialized) {
data_handle = find_dissector("data");
amin_handle = create_dissector_handle(dissect_amin, proto_amin);
dissector_add("tcp.port", global_amin_port, amin_handle);
}
}
This function is called to register our protocol. Notice how the port and dissector handle are passed.
void proto_register_amin (void)
{
/* A header field is something you can search/filter on.
*
* We create a structure to register our fields. It consists of an
* array of hf_register_info structures, each of which are of the format
* {&(field id), {name, abbrev, type, display, strings, bitmask, blurb, HFILL}}.
*/
static hf_register_info hf[] = {
{ &hf_amin,
{ "Data", "amin.data", FT_NONE, BASE_NONE, NULL, 0x0,
"AMIN PDU", HFILL }},
{ &hf_amin_header,
{ "Header", "amin.header", FT_NONE, BASE_NONE, NULL, 0x0,
"AMIN Header", HFILL }},
{ &hf_amin_length,
{ "Package Length", "amin.len", FT_UINT32, BASE_DEC, NULL, 0x0,
"Package Length", HFILL }},
{ &hf_amin_type,
{ "Type", "amin.type", FT_UINT8, BASE_DEC, VALS(packettypenames), 0x0,
"Package Type", HFILL }},
{ &hf_amin_text,
{ "Text", "amin.text", FT_STRING, BASE_NONE, NULL, 0x0,
"Text", HFILL }}
};
The array above defines what elements we will be displaying. These declarations are simply a definition Wireshark uses to determine the data type, when we later dissect the packet.
static gint *ett[] = {
&ett_amin,
&ett_amin_header,
&ett_amin_length,
&ett_amin_type,
&ett_amin_text
};
The array above simply attaches IDs to our definitions. Notice the 1:1 relationship of the hf_ and ett_ data types.
hf_
ett_
proto_amin = proto_register_protocol ("AMIN Protocol", "AMIN", "amin");
proto_register_field_array (proto_amin, hf, array_length (hf));
proto_register_subtree_array (ett, array_length (ett));
register_dissector("amin", dissect_amin, proto_amin);
The above code registers our protocol. In most of the examples I saw, they check to see if proto_amin is initialized already. However, a developer from the mailing list, "Jaap", emailed me saying it was not needed and confused the initalization process. The subsequent calls register our protocol and data handles.
proto_amin
static void
dissect_amin(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree)
{
The dissect function is used to actually dissect and display the packet details.
First, some points are initialized to trees/items:
proto_item *amin_item = NULL;
proto_item *amin_sub_item = NULL;
proto_tree *amin_tree = NULL;
proto_tree *amin_header_tree = NULL;
guint16 type = 0;
This next call checks to see if the INFO column is displaying our "AMIN" tag. If it's not, then we supply it.
if (check_col(pinfo->cinfo, COL_PROTOCOL))
col_set_str(pinfo->cinfo, COL_PROTOCOL, PROTO_TAG_AMIN);
/* Clear out stuff in the info column */
if(check_col(pinfo->cinfo,COL_INFO)){
col_clear(pinfo->cinfo,COL_INFO);
}
//Here we check to see if the INFO column is present. If it is we output
//which ports the packet came from and went to. Also, we indicate the type
//of packet.
// This is not a good way of dissecting packets. The tvb length should
// be sanity checked so we aren't going past the actual size.
type = tvb_get_uint8( tvb, 4 ); // Get the type byte
if (check_col(pinfo->cinfo, COL_INFO)) {
col_add_fstr(pinfo->cinfo, COL_INFO, "%d > %d Info Type:[%s]",
pinfo->srcport, pinfo->destport,
val_to_str(type, packettypenames, "Unknown Type:0x%02x"));
}
If there is a "tree" requested, we handle that request.
if (tree) { /* we are being asked for details */
guint32 offset = 0;
guint32 length = 0;
This call adds our tree to the main dissection tree.
amin_item = proto_tree_add_item(tree, proto_amin, tvb, 0, -1, FALSE);
amin_tree = proto_item_add_subtree(amin_item, ett_amin);
amin_header_tree = proto_item_add_subtree(amin_item, ett_amin);
amin_sub_item = proto_tree_add_item( amin_tree, hf_amin_header,
tvb, offset, -1, FALSE );
Here, we add our own subtree, so we can have a "header" collapsible branch. Following the subtree, we copy out the length bytes to our data type. The order of the long value representing length is "network" order. If someone has a more appropriate tvb_get call to read the length value, it would be appreciated.
tvb_get
amin_header_tree = proto_item_add_subtree(amin_sub_item, ett_amin);
//We use tvb_memcpy to get our length value out (Host order)
tvb_memcpy(tvb, (guint8 *)&length, offset, 4);
proto_tree_add_uint(amin_header_tree, hf_amin_length, tvb, offset, 4, length);
//We increment the offset to get past the 4 bytes indicating length
offset+=4;
//Here we submit the type parameter to the tree.
proto_tree_add_item(amin_header_tree, hf_amin_type, tvb, offset, 1, FALSE);
type = tvb_get_guint8( tvb, offset ); // Get our type byte
offset+=1;
//If the type is TEXT, we add the text to the tree.
if( type == 0 )
{
proto_tree_add_item( amin_tree, hf_amin_text, tvb, offset, length-1, FALSE );
}
If your command line window is still open, you can use that, or use step1/2/3.bat to arrive at the c:\wireshark\plugins\yourprotocol directory. If you are building my source code, you should be at c:\wireshark\plugins\amin. From the "prepared" command line, see Step 8. Execute:
nmake -f Makefile.nmake distclean
nmake -f Makefile.nmake all
If the build succeeds, you should have a yourprotocol.dll. Simply copy this file to your C:\WireShark\wireshark-gtk2\plugins\0.99.7-YOUR-BUILD directory. There should be a bunch of other dissector DLLs present there. You may need to copy it to your c:\program files\wireshark\plugins\0.99.7-YOUR-BUILD directory if you installed it there. The bottom line is to copy it in the appropriate directory based on how you are launching Wireshark. At this point, you should be able to launch Wireshark and dissect packets. A good test is to enter "yourprotocol" in the "filter" area of Wireshark and see if a green or red background forms. A green background, just like in the screenshots of this article, indicates that the dissector loaded correctly.
This is a screen capture of the AMIN protocol being dissected by Wireshark.
I found an immense amount of examples in a searchable form at CodeBase[^]. Without this site, it would have been a much more difficult task locating functions for dissection. Someone really should spend time documenting the Wireshark API in a more formal manner.
A coworker informed me that CodeBase is a pay service. I was able to get into CodeBase beta without having to authenticate. Use this link to do the same (use the search box to find undocumented methods).
After you've defined your data type map, you can use the '.' operator to limit the packets being displayed. For example, amin.type==0 will only show packets where the type equals zero.
There are far more complex examples that can be found in the plugins directory. My suggestion is to check out the H223 dissector. Topics such as TCP segmentation are covered, as well as managing states across combinations of packets (i.e., watching a SIP session throughout multiple packets).
If you wish to locally capture packets using Wireshark, i.e., 127.0.0.1, you must perform a few extra steps. I found this link which helps solve the problem. The required steps, per the site suggestion, are:
arp -s 10.0.0.10 55-55-55-55-55-55
then:
route add 10.0.0.10 10.0.0.10 mask 255.255.255.255
You can then test the capture by executing:
telnet 10.0.0.10
Once the complexity has been removed from designing dissectors, it's quite easy to make your very own protocol present in Wireshark. In addition, the guys at Wireshark greatly appreciate those who take the time to legally reverse engineer protocols for inclusion within the Wireshark distribution.
static int intialized=FALSE
static gboolean
#include <gmodule.h>
if(proto_amin == -1){}
if(check_col(pinfo->cinfo, COLINFO)){
if(tree)
packettypename[]
{0,NULL}
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
mt.exe -manifest pluginName.dll.manifest -outputresource:pluginName.dll;2
foo
AMIN
PLUGINS= \
../../plugins/AMIN/AMIN.dll \
Section "Dissector Plugins" SecPlugins
File "..\..\plugins\AMIN\AMIN.dll"
nmake –f Makefile.nmake packaging
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
C# 6: First reactions
|
http://www.codeproject.com/Articles/19426/Creating-Your-Own-Custom-Wireshark-Dissector?fid=433815&df=90&mpp=25&noise=3&prof=True&sort=Position&view=None&spc=None&select=3268275&fr=1
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
Java - Interfaces:
Example:
Let us look at an example that depicts encapsulation:
/* File name : NameOfInterface.java */ import java.lang.*; //Any number of import statements public interface NameOfInterface { //Any number of final, static fields //Any number of abstract method declarations\ }
Interfaces have the following properties:.
/* File name : MammalInt.java */ public class MammalInt implements Animal{ public void eat(){ System.out.println("Mammal eats"); } public void travel(){ System.out.println("Mammal travels"); } public int noOfLegs(){ return 0; } public static void main(String args[]){ MammalInt m = new MammalInt(); m.eat(); m.travel(); } }
This would produce the following result:
Mammal eats Mammal travels interfaces.
An interface can extend another interface, similarly to the way that a class can extend another class..
/:
package java.util; public interface EventListener {}.
|
http://www.tutorialspoint.com/cgi-bin/printversion.cgi?tutorial=java&file=java_interfaces.htm
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
How to: Populate the Module Catalog from XAML
Overview
This topic describes how to build a XAML-based module catalog for a solution that uses the Composite Application Library. The module catalog contains the metadata for modules and modules groups in the application; it can be populated in different ways and from different sources. For example, it can be populated from a XAML file. An advantage of having a XAML catalog is that declaring object elements in XAML instantiates the corresponding .NET Framework object through its default constructor. build a XAML-based module catalog.
To build a XAML-based module catalog
- Add a new .xaml file, named ModulesCatalog.xaml, in your Shell project.
- The root element of this .xaml file should be a ModuleCatalog instance. In addition to specifying the default namespaces, this root element must also specify the Modularity namespace, as shown in the following code.
- Add a ModuleInfoGroup child element to the ModuleCatalog root element for each module group you have.
- Set the corresponding properties to each ModuleInfoGroup. The properties defined by the ModuleInfoGroup class are the following:
- Ref. The content of this property indicates the location from where the modules in the module group should be obtained.
- InitializationMode. This property specifies how all the group modules are going to be initialized. The possible values for this property are: WhenAvailable and OnDemand.
- Add ModuleInfo objects to the catalog. ModuleInfo objects can be registered within a group or without a group. Note the following:
- If you want to register modules in a group, put one ModuleInfo element inside the ModuleInfoGroup tags for each module that the module group contains.
- If you want to register modules without a group, put the ModuleInfo element inside the ModuleCatalog tags.
Each ModuleInfo instance has the following properties:
- ModuleName. This property specifies the logical name of the module.
- ModuleType. This property specifies the type of the module.
- Ref. The content of this property indicates the location from where the modules in the module group should be obtained. This property should be set if the module is not inside a module group.
- InitializationMode. This property specifies how all the group modules are going to be initialized. The possible values for this property are WhenAvailable and OnDemand. The default value is WhenAvailable.
- DependsOn. This property can be set to a list of the modules names that this module depends on.
The following code shows the XAML-based Module Catalog implementation included in the Remote Modularity QuickStart.
<Modularity:ModuleCatalog <Modularity:ModuleInfoGroup <Modularity:ModuleInfo </Modularity:ModuleInfoGroup> <Modularity:ModuleInfoGroup <Modularity:ModuleInfo <Modularity:ModuleInfo.DependsOn> <sys:String>ModuleW</sys:String> </Modularity:ModuleInfo.DependsOn> </Modularity:ModuleInfo> <Modularity:ModuleInfo </Modularity:ModuleInfo> </Modularity:ModuleInfoGroup> <!-- Module info without a group --> <Modularity:ModuleInfo </Modularity:ModuleCatalog>
Outcome
A module catalog, which contains metadata for all the application's modules, is created and populated from a XAML file.
More Information
For information about other ways that you can populate the module catalog, see the following topics:
- How to: Populate the Module Catalog from Code
- How to: Populate the Module Catalog from a Configuration File or a Directory in WPF
For more information related to working with modules, see the following topics:
- How to: Load Modules On Demand
- How to: Define Dependencies Between Modules
- How to: Prepare a Module for Remote Downloading
For a complete list of How-to topics included with the Composite Application Guidance, see Development Activities.
Home page on MSDN | Community site
|
http://msdn.microsoft.com/en-us/library/ff921151(d=printer,v=pandp.20).aspx
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.