Mercurial Will Beam You into the Heaven of Merging

I didn't understand why Mercurial was supposed to be fantastic and why merging should be so much easier with hg compared to Subversion. This week changed my point of view.

In Subversion I could never remember the command to copy a revision difference, e.g. into the current trunk, so I was almost always forced to do a manual and error-prone merge. Now that I am using hg for some of my free-time projects, and even partially at work, I have become very comfortable with it. Especially the fast, network-independent commits are awesome when working with hg. The IDE integration, in my case NetBeans 6.x, is good, although working with svn/hg on the command line is faster for me.

Then, this week, I read an article about merging and started to understand how merging works with hg, and how much easier it is.

Let's get started. Assume your users or your QA team found an issue in your current release. In hg it is easy to fix such an issue.

  1. go back to the state where you want to fix that issue or stay at the tip:
    hg update -C <oldRevisionNumber>

    Make sure you committed your changes beforehand. Otherwise -C will discard uncommitted changes without creating backup files!

  2. now create a named branch for the issue:
    hg branch issueXY

    optionally do:

    hg commit -m "start working on issueXY"
  3. NOW FIX THE ISSUE (add, edit, rename, delete files) in the code and commit:
    hg commit -m "fixed issueXY"

    optionally add

    --close-branch
  4. Go back to where you want the fixed code. Most of the time this will be ‘default’ (known to svn users as ‘trunk’), but it could also be the releaseXY branch:
    hg update default
  5. Now do the actual merging:
    hg merge issueXY
  6. … and commit the merge:
    hg commit -m "merged fix of issueXY into development sources"

This is very straightforward and worked very well in practice! In my case I had to merge 4 files manually, but for that kdiff3 popped up. You can change the diff application, of course, and you could also use NetBeans for it. So merging was really easy and done within 2 minutes! The fix changed over 25 files and Mercurial even recognized refactored classes (i.e. renamed files)!

I would never have thought that merging could be that easy … until I did it myself. So try it out! Only 6 steps into the heaven of merging with Mercurial!

Hints:

  • You can always go back to the issue via:
    hg update -C issueXY
  • Before committing, check that you are on the correct branch:
    hg branch
  • Get all branches:
    hg branches
  • View the last 5 commits of the log if glog does not work for you:
    hg log -l 5
  • Steps 4 to 6 can of course be applied to several branches

UPDATE: also check out the comments on the repost at DZone!

UPDATE: check out the new blog post with other merging strategies (although I prefer the one described here).

UPDATE: If you want to try it yourself, just follow these lines:

$ mkdir hg-test
$ cd hg-test/
/hg-test$ hg init
/hg-test$ echo -e 'Hello Work\nrelease 1.0' > hello.txt
/hg-test$ hg add hello.txt
/hg-test$ hg commit -m "initial release"

## released version. now further development

/hg-test$ echo -e 'Hello Work\ndev' > hello.txt
/hg-test$ hg commit -m "further dev"

## go back to released version to fix issue1

/hg-test$ hg update -C 0
1 files updated, 0 files merged, 0 files removed, 0 files unresolved

/hg-test$ hg branch issue1
marked working directory as branch issue1

/hg-test$ echo -e 'Hello World\nrelease 1.0' > hello.txt
/hg-test$ hg commit -m "fixed to world"
created new head

## go back to dev version and apply the issue

/hg-test$ hg update default
1 files updated, 0 files merged, 0 files removed, 0 files unresolved

/hg-test$ hg branches
issue1                         2:50560e696c8b
default                        1:10c88c9e32dd

/hg-test$ more hello.txt
Hello Work
dev

### now look how easy it is to apply the change! ###
### And only the atomic issue will change the code: 'Work' to 'World' ###

/hg-test$ hg merge issue1
merging hello.txt
0 files updated, 1 files merged, 0 files removed, 0 files unresolved
(branch merge, don't forget to commit)

/hg-test$ more hello.txt
Hello World
dev

/hg-test$ hg commit -m "merged issue1"

Google Adwords API (sandbox): The specified client email does not exist.

If you encounter the following error for the sandbox:

The specified client email does not exist. Your client accounts may not exist because either this is your first time using the sandbox or the sandbox database has been cleaned. Please remove the clientEmail from the request header and call the getClientAccounts method from AccountService to ensure that your client accounts are created and do exist.

Do what the message says 😉 !!

1. Run your application once with an empty clientId/clientEmail (so use “” in the Java API) … you will then get an error, which is okay.

2. Run your app a second time, but this time specify the correct clientId (sth. like “client_1+youraccount@gmail.com”) and all should work fine.

How to Test Apache Solr(J)?


public class SolrSearchTest extends AbstractSolrTestCase {

 private SolrServer server;

 @Override
 public String getSchemaFile() {
    return "solr/conf/schema.xml";
 }

 @Override
 public String getSolrConfigFile() {
    return "solr/conf/solrconfig.xml";
 }

 @Before
 @Override
 public void setUp() throws Exception {
    super.setUp();
    server = new EmbeddedSolrServer(h.getCoreContainer(), h.getCore().getName());
 }

 @Test
 public void testFirstTry() throws Exception {
    // e.g. add some docs via solrJ
    // (createDoc maps your own entity class to a SolrInputDocument)
    server.add(createDoc(entity1));
    server.add(createDoc(entity2));
    server.add(createDoc(entity3));
    server.add(createDoc(entity4));
    server.add(createDoc(entity5));
    // commit, otherwise the following query won't see the documents
    server.commit();

    // now query
    // MyEntity is your own class; readDoc maps a SolrDocument back to it
    List<MyEntity> myEntities = new ArrayList<MyEntity>();
    SolrQuery query = new SolrQuery("text:peter").setQueryType("standard");
    QueryResponse rsp = server.query(query);
    SolrDocumentList docs = rsp.getResults();
    for (SolrDocument sd : docs) {
       myEntities.add(readDoc(sd));
    }

    assertEquals("peter", myEntities.get(0).getText());
    assertEquals(5, rsp.getResults().getNumFound());
 }
}

Another approach is documented here.

Scaled Linked Image in Vaadin

In Vaadin you can easily embed images via:

Embedded logoEmbed = new Embedded("yourText",new ThemeResource("../yourtheme/img/logo.png"));
logoEmbed.setType(Embedded.TYPE_IMAGE);

or, if you need a linked image, do the following:

Link iconLink = new Link();
iconLink.setIcon(new ExternalResource(urlAsString));
iconLink.setResource(new ThemeResource("../yourtheme/img/logo.png"));

But how can you scale that external image? This is simple if you let the browser do it for you. In Java do:

iconLink.setStyleName("mylogo");

and then adjust the following style in your custom style.css to your needs:

.mylogo a img { width: 67px; }

My Links for Apache Solr 1.4

Here is my Solr/Lucene Link list. Last update: Oct’ 2010

Solr

Feature and Get Started Overview

Query

Multiple Cores

Facetting/Navigators

Grouping/Field Collapsing

Result Highlighting

Config Xml

  • Caching -> performance boost: set HashDocSet to 0.005 of all documents!

Statistics with the StatsComponent

Updating/Indexing

Replication for Solr >1.4

  • See SOLR-561 for more information.
  • Scaling article
  • Dashboard via solr/admin/replication/index.jsp
  • index version via solr/replication?command=details (if we would use ?indexversion this would always return 0?)
  • linux script to monitor health of replication
  • bugs: SOLR-1781 (and SOLR-978)

Scaling Solr

SolrJ

Get source via:

Tips and Tricks

  • If you have heavy commits (‘realtime updates’) don’t miss this thread about ‘Tuning Solr caches with high commit rates (NRT)’ from Peter Sturge

Lucene

Lucene FAQ

Did you mean

Highlighting

When to prefer Lucene over Solr? Or should I use Hibernate Search?

Db4o via Maven

I couldn't find the correct Maven dependencies for db4o when using transparent activation (TA) … so here you are:

<dependencies>
 <dependency>
    <groupId>com.db4o</groupId>
    <artifactId>db4o-full-java5</artifactId>
    <version>${db4o.version}</version>
 </dependency>

 <dependency>
    <groupId>com.db4o</groupId>
    <artifactId>db4o-tools-java5</artifactId>
    <version>${db4o.version}</version>
    <scope>compile</scope>
 </dependency>

 <dependency>
    <groupId>com.db4o</groupId>
    <artifactId>db4o-taj-java5</artifactId>
    <version>${db4o.version}</version>
    <scope>compile</scope>
 </dependency>

 <dependency>
    <groupId>com.db4o</groupId>
    <artifactId>db4o-instrumentation-java5</artifactId>
    <version>${db4o.version}</version>
    <scope>compile</scope>
 </dependency>

 </dependencies>

 <repositories>
    <repository>
      <id>db4o</id>
      <name>Db4o</name>
      <url>https://source.db4o.com/maven/</url>
    </repository>
 </repositories>

To apply TA at build time you need the following snippet in your pom.xml:

<plugin>
    <artifactId>maven-antrun-plugin</artifactId>
    <version>1.3</version>
    <dependencies>
        <!-- for the regexp -->
        <dependency>
            <groupId>org.apache.ant</groupId>
            <artifactId>ant-nodeps</artifactId>
            <version>1.7.1</version>
        </dependency>

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>${slf4j.version}</version>
        </dependency>
    </dependencies>
    <executions>
        <execution>
            <phase>compile</phase>
            <configuration>
                <tasks>
                    <!-- Setup the path -->
                    <!-- use maven.compile.classpath instead db4o.enhance.path -->

                    <!-- Define enhancement tasks -->
                    <typedef resource="instrumentation-def.properties"
                             classpathref="maven.compile.classpath"
                             loaderref="db4o.enhance.loader" />

                    <!-- Enhance classes which include the @Db4oPersistent annotation -->
                    <!--
                    <typedef name="annotation-filter"
                             classname="tacustom.AnnotationClassFilter"
                             classpathref="maven.compile.classpath"
                             loaderref="db4o.enhance.loader" /> -->

                    <typedef name="native-query"
                             classname="com.db4o.nativequery.main.NQAntClassEditFactory"
                             classpathref="maven.compile.classpath"
                             loaderref="db4o.enhance.loader" />

                    <!-- Instrumentation -->
                    <db4o-instrument classTargetDir="target/classes">
                        <classpath refid="maven.compile.classpath" />
                        <sources dir="target/classes">
                            <include name="**/*.class" />
                        </sources>

                        <!-- <jars refid="runtime.fileset"/> -->

                        <!-- Optimise Native Queries -->
                        <native-query-step />

                        <transparent-activation-step>
                            <!-- <annotation-filter /> -->
                            <regexp pattern="^de\.timefinder\.data" />
                            <!-- <regexp pattern="^enhancement\.model\." /> -->
                        </transparent-activation-step>
                    </db4o-instrument>
                </tasks>
            </configuration>
            <goals>
                <goal>run</goal>
            </goals>
        </execution>
    </executions>
</plugin>
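To sanity-check which classes the `<regexp pattern="^de\.timefinder\.data" />` filter above will select for enhancement, you can try the pattern with plain `java.util.regex`. This is only an illustrative sketch: the class names are hypothetical, and it assumes the Ant filter matches class-name prefixes, as the ^-anchored pattern suggests.

```java
import java.util.regex.Pattern;

public class TaFilterCheck {

    // same pattern as in the <transparent-activation-step> above
    static final Pattern TA_FILTER = Pattern.compile("^de\\.timefinder\\.data");

    // Matcher.find() with a ^-anchored pattern mirrors prefix matching
    static boolean isEnhanced(String className) {
        return TA_FILTER.matcher(className).find();
    }

    public static void main(String[] args) {
        // hypothetical class names; only the package prefix matters
        System.out.println(isEnhanced("de.timefinder.data.Event"));   // true
        System.out.println(isEnhanced("de.timefinder.ui.MainFrame")); // false
    }
}
```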

And you will need to configure db4o like this:

config.add(new TransparentActivationSupport());

// configure db4o to use instrumenting classloader
config.reflectWith(new JdkReflector(Db4oHelper.class.getClassLoader()));
config.diagnostic().addListener(new DiagnosticListener() {

   @Override
   public void onDiagnostic(Diagnostic dgnstc) {
      System.out.println(dgnstc.toString());
   }
});

Thanks to ptrthomas! … without his nice explanation I wouldn't have got it working.


Matchstick Graph Editor

Recently I created a specialized graph editor for matchstick graphs like the Harborth graph. It is not a fancy application but it works:

Try it out and load the data of the Harborth graph into your editor!

Question: What is the minimal 1-, 2- and 3-degree matchstick graph in 2D?

Requirements:

  • In a 3-degree graph every node has exactly 3 edges
  • It must be a planar graph (edges can only intersect at the nodes)
  • Length of each edge is the same
  • Be the first and link to your matchstick.dat file(s)
  • Think about solutions for 1, 2, 3 and 4 degree matchstick graphs in 3D (easier than 2D)
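To make the requirements above concrete, here is a small sketch that checks the degree and unit-length conditions for a graph given as coordinate arrays and an edge list. This representation is my own assumption (it is not the editor's matchstick.dat format), and the planarity requirement is not checked here:

```java
public class MatchstickCheck {

    // checks that every node has exactly 'degree' edges and every edge has length ~1
    // x, y: node coordinates; edges: pairs of node indices
    // note: planarity (edges intersecting only at nodes) is NOT verified here
    static boolean isValid(double[] x, double[] y, int[][] edges, int degree) {
        int[] count = new int[x.length];
        for (int[] e : edges) {
            double dx = x[e[0]] - x[e[1]];
            double dy = y[e[0]] - y[e[1]];
            if (Math.abs(Math.sqrt(dx * dx + dy * dy) - 1.0) > 1e-6)
                return false; // not a unit-length matchstick
            count[e[0]]++;
            count[e[1]]++;
        }
        for (int c : count)
            if (c != degree)
                return false; // wrong node degree
        return true;
    }

    public static void main(String[] args) {
        // an equilateral triangle: every node has degree 2, all edges length 1
        double[] x = {0, 1, 0.5};
        double[] y = {0, 0, Math.sqrt(3) / 2};
        int[][] edges = {{0, 1}, {1, 2}, {2, 0}};
        System.out.println(isValid(x, y, edges, 2)); // true
        System.out.println(isValid(x, y, edges, 3)); // false
    }
}
```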

Prizes: Win Experience and be listed here as a winner 😉 !

Conclusion

Getting started and finished was easy and fast, within a few hours. But debugging and writing tests in JavaFX (1.2 or 1.3) with NetBeans is not that good, as described earlier :-/


Memory Efficient XML Processing not only with DOM

How can I efficiently parse large xml files, which can be several GB in size? With SAX? Hmmh, well, yes: you can! But this is somewhat ugly. If you prefer a more maintainable approach you should definitely try joost, which does not load the entire xml file into memory but is quite similar to xslt.

But how can you do this with DOM, or even better dom4j, if you only have 50 MB of RAM or even less? Well, this is not always possible, but under some circumstances you can do it with a small helper class. Read on!

E.g. you have the xml file

<products>
  <product id="1"> CONTENT1 .. </product>
  <product id="2"> CONTENT2 .. </product>
  <product id="3"> CONTENT3 .. </product>
  ...
</products>

Then you can parse it product by product via:

final List<String> idList = new ArrayList<String>();
ContentHandler productHandler =
         new GenericXDOMHandler("/products/product") {
  public void writeDocument(String localName, Element element)
        throws Exception {
    // use DOM here
    String id = element.getAttribute("id");
    idList.add(id);
  }
};
GenericXDOMHandler.execute(new File(inputFile), productHandler);

How does this work? Every time the SAX handler detects a <product> element it reads that product's subtree (which is quite small) into RAM and calls the writeDocument method. Technically we have registered a listener for all product elements and are waiting for 'events' from our GenericXDOMHandler. The code was developed for my xvantage project but is also used in production code on big files:


import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Attr;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.xml.sax.Attributes;
import org.xml.sax.ContentHandler;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.DefaultHandler;
import org.xml.sax.helpers.XMLReaderFactory;

/**
 * License: http://en.wikipedia.org/wiki/Public_domain
 * This software comes without WARRANTY about anything! Use it at your own risk!
 *
 * Reads an xml via sax and creates an Element object per document.
 *
 * @author Peter Karich, peathal 'at' yahoo 'dot' de
 */
public abstract class GenericXDOMHandler extends DefaultHandler {

 private Document factory;
 private Element current;
 private List<String> rootPath;
 private int depth = 0;

 public GenericXDOMHandler(String forEachDocument) {
  rootPath = new ArrayList<String>();
  for (String str : forEachDocument.split("/")) {
    str = str.trim();
    if (str.length() > 0)
    rootPath.add(str);
  }

  if (rootPath.size() < 2)
    throw new UnsupportedOperationException("forEachDocument"
       + " must have at least one sub element in it."
       + " E.g. /root/subPath but it was:" + rootPath);
 }

 @Override
 public void startDocument() throws SAXException {
  try {
    factory = DocumentBuilderFactory.newInstance().
         newDocumentBuilder().newDocument();
  } catch (Exception e) {
    throw new RuntimeException("can't get DOM factory", e);
  }
 }

 @Override
 public void startElement(String uri, String local,
      String qName, Attributes attrs) throws SAXException {

  // go further only if we add something to our sub tree (defined by rootPath)
  if (depth + 1 < rootPath.size()) {
    current = null;
    if (rootPath.get(depth).equals(local))
      depth++;

    return;
  } else if (depth + 1 == rootPath.size()) {
    if (!rootPath.get(depth).equals(local))
      return;
  }

  if (current == null) {
    // start a new subtree
    current = factory.createElement(local);
  } else {
    Element childElement = factory.createElement(local);
    current.appendChild(childElement);
    current = childElement;
  }

  depth++;

  // Add every attribute.
  for (int i = 0; i < attrs.getLength(); ++i) {
    String nsUri = attrs.getURI(i);
    String qname = attrs.getQName(i);
    String value = attrs.getValue(i);
    Attr attr = factory.createAttributeNS(nsUri, qname);
    attr.setValue(value);
    current.setAttributeNodeNS(attr);
  }
 }

 @Override
 public void endElement(String uri, String localName,
     String qName) throws SAXException {

  if (current == null)
    return;

  Node parent = current.getParentNode();

  // leaf of subtree
  if (parent == null)
    current.normalize();

  if (depth == rootPath.size()) {
    try {
      writeDocument(localName, current);
    } catch (Exception ex) {
      throw new RuntimeException("Exception"
        + " while writing one element of path:" + rootPath, ex);
    }
  }

  // climb up one level
  current = (Element) parent;
  depth--;
 }

 @Override
 public void characters(char buf[], int offset, int length)
       throws SAXException {
  if (current != null)
    current.appendChild(factory.createTextNode(
       new String(buf, offset, length)));
 }

 public abstract void writeDocument(String localName, Element element)
 throws Exception;

 public static void execute(File inputFile,
     ContentHandler handler)
     throws SAXException, FileNotFoundException, IOException {

   execute(new FileInputStream(inputFile), handler);
 }

 public static void execute(InputStream input,
     ContentHandler handler)
     throws SAXException, FileNotFoundException, IOException {

   XMLReader xr = XMLReaderFactory.createXMLReader();
   xr.setContentHandler(handler);
   InputSource iSource = new InputSource(new InputStreamReader(input, "UTF-8"));
   xr.parse(iSource);
 }
}
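For readers who want to see the underlying streaming principle in isolation, here is a tiny, self-contained sketch (not part of the original class) that counts product elements with plain JDK SAX; the document is never held in memory as a whole:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class ProductCounter {

    // streams through the xml and counts <product> start tags
    static int countProducts(String xml) throws Exception {
        final int[] count = {0};
        SAXParserFactory.newInstance().newSAXParser().parse(
                new ByteArrayInputStream(xml.getBytes("UTF-8")),
                new DefaultHandler() {
                    @Override
                    public void startElement(String uri, String local,
                            String qName, Attributes attrs) {
                        // only this event fires per element, no tree is built
                        if ("product".equals(qName))
                            count[0]++;
                    }
                });
        return count[0];
    }

    public static void main(String[] args) throws Exception {
        String xml = "<products>"
                + "<product id=\"1\">CONTENT1</product>"
                + "<product id=\"2\">CONTENT2</product>"
                + "</products>";
        System.out.println(countProducts(xml)); // 2
    }
}
```

GenericXDOMHandler does the same event-driven walk, but additionally materializes each small per-product subtree as DOM before handing it to writeDocument.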

PS: It should be simple to adapt this class to your needs, e.g. using dom4j instead of DOM. You could even register several paths instead of only one rootPath via a BindingTree. For an implementation of this look at my xvantage project.

PPS: If you want to process xpath expressions in the writeDocument method, make sure this is not a performance bottleneck with the ordinary xpath engine, because the method can be called many times. In my case I had several thousand documents, and jaxen solved this problem!
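One common mitigation, shown here with the JDK's built-in javax.xml.xpath engine rather than jaxen (so this is a sketch of the idea, not the jaxen API), is to compile the expression once and reuse it for every document instead of re-parsing it on each writeDocument call:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class CompiledXPathDemo {

    // compile the expression ONCE, e.g. in a field, not inside writeDocument
    static final XPathExpression PRODUCT_IDS = compile("//product/@id");

    static XPathExpression compile(String expr) {
        try {
            return XPathFactory.newInstance().newXPath().compile(expr);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    static int countIds(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        // reusing the precompiled expression avoids re-parsing it per document
        NodeList ids = (NodeList) PRODUCT_IDS.evaluate(doc, XPathConstants.NODESET);
        return ids.getLength();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(countIds(
            "<products><product id=\"1\"/><product id=\"2\"/></products>")); // 2
    }
}
```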

PPPS: If you want to handle xml writing and reading (‘xml serialization’) of Java classes, check out this list!

Reply via JavaFX on: Shadow Motion Effect in 5 Lines Of jQuery

I took the opportunity to see how easy or difficult it is to implement a “shadow motion effect” in JavaFX. The effect is described in a post by Lam Nguyen, who implements it in jQuery in 5 lines.

What do we need to do this in JavaFX? Is this possible at all?

Yes, it is possible, and it was easy, fast (~10 min) and IMHO it looks nice. BTW: it is not only an animation, you can drag it:

1. Create a new JavaFX project in NetBeans and choose ‘Drag and Drop’.

2. Then adapt the DragBehaviour to the following. The necessary changes are minimal and marked with HERE:

 // HERE: change 1 line
 public var targetGroup: Group;

 public var targetWidth: Number;
 public var targetHeight: Number;
 public var maxX = 200.0;
 public var maxY = 200.0;
 var startX = 0.0;
 var startY = 0.0;

 init {
 // HERE: +1 line
 var target = targetGroup.content[0];

 target.onMousePressed = function (e: MouseEvent): Void {
   startX = e.sceneX - target.translateX;
   startY = e.sceneY - target.translateY;
 }

 target.onMouseDragged = function (e: MouseEvent): Void {
 var tx = e.sceneX - startX;

 // HERE +7 lines
 var cloned = Duplicator.duplicate(target);
 insert cloned into targetGroup.content;
 FadeTransition {
   node: cloned fromValue: 1 toValue: 0 duration: 0.5s repeatCount: 1
   action: function () {
      delete cloned from targetGroup.content;
   } }.play();

 if (tx < 0)
    tx = 0;

 if (tx > maxX - targetWidth)
    tx = maxX - targetWidth;

 target.translateX = tx;
 var ty = e.sceneY - startY;
 if (ty < 0)
   ty = 0;

 if (ty > maxY - targetHeight)
   ty = maxY - targetHeight;

 target.translateY = ty;
 } // onMouseDragged
 } // init

3. Drag it yourself or try it out now: